
Life Insurance in the United States through World War I

Sharon Ann Murphy

The first American life insurance enterprises can be traced back to the late colonial period. The Presbyterian Synods in Philadelphia and New York set up the Corporation for Relief of Poor and Distressed Widows and Children of Presbyterian Ministers in 1759; the Episcopalian ministers organized a similar fund in 1769. In the half century from 1787 to 1837, twenty-six companies offering life insurance to the general public opened their doors, but they rarely survived more than a couple of years and sold few policies [Figures 1 and 2]. The only early companies to experience much success in this line of business were the Pennsylvania Company for Insurances on Lives and Granting Annuities (chartered 1812), the Massachusetts Hospital Life Insurance Company (1818), the Baltimore Life Insurance Company (1830), the New York Life Insurance and Trust Company (1830), and the Girard Life Insurance, Annuity and Trust Company of Pennsylvania (1836). [See Table 1.]

Despite this tentative start, the life insurance industry did make some significant strides beginning in the 1830s [Figure 2]. Life insurance in force (the total death benefit payable on all existing policies) grew steadily from about $600,000 in 1830 to just under $5 million a decade later, with New York Life and Trust policies accounting for more than half of this latter amount. Over the next five years insurance in force almost tripled to $14.5 million before surging by 1850 to just under $100 million of life insurance spread among 48 companies. The top three companies – the Mutual Life Insurance Company of New York (1842), the Mutual Benefit Life Insurance Company of New Jersey (1845), and the Connecticut Mutual Life Insurance Company (1846) – accounted for more than half of this amount. The sudden success of life insurance during the 1840s can be attributed to two main developments – changes in legislation impacting life insurance and a shift in the corporate structure of companies towards mutualization.

Married Women’s Acts

Life insurance companies targeted women and children as the main beneficiaries of insurance, despite the fact that the majority of women were prevented by law from gaining the protection offered in the unfortunate event of their husband’s death. The first problem was that companies strictly adhered to the common law idea of insurable interest, which required that any person taking out insurance on the life of another have a specific monetary interest in that person’s continued life; “affection” (i.e. the relationship of husband and wife or parent and child) was not considered adequate evidence of insurable interest. Additionally, married women could not enter into contracts on their own and therefore could not take out life insurance policies either on themselves (for the benefit of their children or husband) or directly on their husbands (for their own benefit). One way around this problem was for the husband to take out the policy on his own life and assign his wife or children as the beneficiaries. This arrangement proved to be flawed, however, since the policy was considered part of the husband’s estate and therefore could be claimed by any creditors of the insured.

New York’s 1840 Law

This dilemma did not pass unnoticed by promoters of life insurance, who viewed it as one of the main stumbling blocks to the growth of the industry. The New York Life and Trust stood at the forefront of a campaign to pass a state law enabling women to procure life insurance policies protected from the claims of creditors. The law, which passed the New York state legislature on April 1, 1840, accomplished four important tasks. First, it established the right of a woman to enter into a contract of insurance on the life of her husband “by herself and in her name, or in the name of any third person, with his assent, as her trustee.” Second, it provided that the insurance would be “free from the claims of the representatives of her husband, or of any of his creditors” unless the annual premiums on the policy exceeded $300 (approximately the premium required to take out the maximum $10,000 policy on the life of a 40-year-old). Third, in the event of the wife predeceasing the husband, the policy reverted to the children, who were granted the same protection from creditors. Finally, as the law was interpreted by both companies and the courts, wives were not required to prove their monetary interest in the life of the insured, establishing for the first time an instance of insurable interest independent of pecuniary interest in the life of another.

By December of 1840, Maryland had enacted an identical law – copied word for word from the New York statute. The Massachusetts legislation of 1844 went one step further by protecting from the claims of creditors all policies procured “for the benefit of a married woman, whether effected by her, her husband, or any other person.” The 1851 New Jersey law was the most stringent, limiting annual premiums to only $100. In those states where a general law did not exist, new companies often had the New York law inserted into their charter, with these provisions being upheld by the state courts. For example, the Connecticut Mutual Life Insurance Company (1846), the North Carolina Mutual Life Insurance Company (1849), and the Jefferson Life Insurance Company of Cincinnati, Ohio (1850) all provided this protection in their charters despite the silence of their respective states on the issue.

Mutuality

The second important development of the 1840s was the emergence of mutual life insurance companies in which any annual profits were redistributed to the policyholders rather than to stockholders. Although mutual insurance was not a new concept – the Society for Equitable Assurances on Lives and Survivorships of London had been operating under the mutual plan since its establishment in 1762 and American marine and fire companies were commonly organized as mutuals – the first American mutual life companies did not begin issuing policies until the early 1840s. The main impetus for this shift to mutualization was the panic of 1837 and the resulting financial crisis, which combined to dampen the enthusiasm of investors for projects ranging from canals and railroads to banks and insurance companies. Between 1838 and 1846, only one life insurance company was able to raise the capital essential for organization on a stock basis. On the other hand, mutuals required little initial capital, relying instead on the premium payments from high-volume sales to pay any death claims. The New England Mutual Life Insurance Company (1835) issued its first policy in 1844 and the Mutual Life Insurance Company of New York (1842) began operation in 1843; at least fifteen more mutuals were chartered by 1849.

Aggressive Marketing

In order to achieve the necessary sales volume, mutual companies began to promote life insurance aggressively through advertisements, editorials, pamphlets, and soliciting agents. These marketing tactics broke with the traditionally staid practices of banks and insurance companies, whereby advertisements had generally provided only the location of the local office and agents had passively accepted applications from customers who inquired directly at the office.

Advantages of Mutuality

The mutual marketing campaigns advanced not only life insurance in general but also mutuality in particular, which held widespread appeal for the public at large. Policyholders who could not afford to own stock in a proprietary insurance company could now share in the financial success of the mutual companies, with any annual profits (the excess of invested premium income over death payments) being redistributed to the policyholders, often in the form of reduced premium payments. The rapid success of life insurance during the late 1840s, as seen in Figure 3, thus can be attributed both to this active marketing and to the appeal of mutual insurance itself.
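The arithmetic of this redistribution can be made concrete with a simple sketch. Every figure below is a hypothetical assumption chosen for clarity, not data from any actual company:

```python
# Minimal sketch of a mutual life insurer's annual surplus division.
# All numbers are invented assumptions for illustration only.

premium_income = 500_000.0   # premiums collected during the year (assumed)
investment_yield = 0.05      # assumed return earned on invested premiums
death_claims = 450_000.0     # death benefits paid out during the year (assumed)
policyholders = 5_000        # number of participating policies (assumed)

# The "profit" of a mutual: invested premium income in excess of death payments.
surplus = premium_income * (1 + investment_yield) - death_claims

# Instead of flowing to stockholders, the surplus returns to policyholders,
# modeled here as a pro-rata credit against the next year's premium.
credit = surplus / policyholders
print(f"Surplus ${surplus:,.0f} returns ${credit:.2f} to each policyholder")
```

A stock company would have paid that $75,000 surplus to its shareholders; under the mutual plan it instead reduces each policyholder's effective cost of insurance.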

Regulation and Stagnation after 1849

While many of these companies operated on a sound financial basis, the ease of formation opened the field to several fraudulent or fiscally unsound companies. Stock institutions, concerned both for the reputation of life insurance in general and for their own self-preservation, lobbied the New York state legislature for a law to limit the operation of mutual companies. On April 10, 1849, the legislature passed a law requiring all new insurance companies either incorporating or planning to do business in New York to possess $100,000 of capital stock. Two years later, the legislature passed a more stringent law obligating all life insurance companies to deposit $100,000 with the Comptroller of New York. While this capital requirement was readily met by most stock companies and by the more established New York-based mutual companies, it effectively dampened the movement toward mutualization until the 1890s. Additionally, twelve out-of-state companies ceased doing business in New York altogether, leaving only the New England Mutual and the Mutual Benefit of New Jersey to compete with the New York companies in one of the largest markets. These laws were also largely responsible for the decade-long stagnation in insurance sales beginning in 1849 [Figure 3].

The Civil War and Its Aftermath

By the end of the 1850s life insurance sales again began to increase, climbing to almost $200 million by 1862 before tripling to just under $600 million by the end of the Civil War; life insurance in force peaked at $2 billion in 1871 [Figures 3 and 4]. Several factors contributed to this renewed success. First, the establishment of insurance departments in Massachusetts (1856) and New York (1859) to oversee the operation of fire, marine, and life insurance companies stimulated public confidence in the financial soundness of the industry. Additionally, in 1861 the Massachusetts legislature passed a non-forfeiture law, which forbade companies from terminating policies for lack of premium payment. Instead, the law stipulated that policies be converted to term life policies and that companies pay any death claims that occurred during this term period [term policies are issued only for a stipulated number of years, require reapplication on a regular basis, and consequently command significantly lower annual premiums which rise rapidly with age]. This law was further strengthened in 1880 when Massachusetts mandated that policyholders have the additional option of receiving a cash surrender value for a forfeited policy.

The Civil War was another factor in this resurgence. Although the industry had no experience with mortality during war – particularly a war on American soil – and most policies contained clauses that voided them in the case of military service, several major companies decided to insure war risks for an additional premium of 2% to 5%. While most companies just about broke even on these soldiers’ policies, the goodwill and publicity engendered with the payment of each death claim combined with a generally heightened awareness of mortality to greatly increase interest in life insurance. In the immediate postbellum period, investment in most industries increased dramatically, and life insurance was no exception. Whereas only 43 companies existed on the eve of the war, the newfound popularity of life insurance resulted in the establishment of 107 companies between 1865 and 1870 [Figure 1].

Tontines

Another major innovation in life insurance occurred in 1867, when the Equitable Life Assurance Society (1859) began issuing tontine or deferred dividend policies. While a portion of each premium payment went directly towards an ordinary insurance policy, another portion was deposited in an investment fund with a set maturity date (usually 10, 15, or 20 years) and a restricted group of participants. The beneficiaries of deceased policyholders received only the face value of the standard life component, while participants who allowed their policy to lapse either received nothing or only a small cash surrender value. At the end of the stipulated period, the dividends that had accumulated in the fund were divided among the remaining participants. Agents often promoted these policies with inflated estimates of future returns – and always assured the potential investor that he would be a beneficiary of the high lapse rate and not one of the lapsing participants. Estimates indicate that approximately two-thirds of all life insurance policies in force in 1905 – at the height of the industry’s power – were deferred dividend plans.
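The arithmetic behind a tontine's appeal can be illustrated with a short simulation. The sketch below is purely illustrative (the entrant count, deposit, growth, lapse, and mortality figures are invented assumptions, not historical rates from any company), but it shows why heavy lapsing enriched the policyholders who persisted to the maturity date:

```python
# Stylized simulation of a deferred dividend (tontine) fund.
# All parameter values are hypothetical, chosen only for illustration.

def tontine_share(entrants, annual_deposit, years, growth_rate, lapse_rate, death_rate):
    """Accumulate the tontine fund and divide it among persisting survivors."""
    fund = 0.0
    active = entrants
    for _ in range(years):
        # Active members' deposits join the fund, which earns investment returns.
        fund = (fund + active * annual_deposit) * (1 + growth_rate)
        # Lapsing members forfeit their stake; beneficiaries of deceased members
        # receive only the ordinary death benefit, not a share of the fund.
        active = int(active * (1 - lapse_rate) * (1 - death_rate))
    return fund / active  # payout per remaining participant at maturity

# 1,000 entrants each diverting $20 per year into a 20-year fund:
print(f"${tontine_share(1000, 20.0, 20, 0.05, 0.04, 0.01):,.2f} per persisting survivor")
```

Because forfeited contributions stay in the pool, each survivor's payout grows with the lapse rate, precisely the feature agents emphasized when promising inflated returns.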

Reorganization and Innovation

The success and profitability of life insurance companies bred stiff competition during the 1860s; the resulting market saturation and a general economic downturn combined to push the industry into a severe depression during the 1870s. While better-established companies such as the Mutual Life Insurance Company of New York, the New York Life Insurance Company (1843), and the Equitable Life Assurance Society were strong enough to weather the depression with few problems, most of the new corporations organized during the 1860s were unable to survive the downturn. All told, 98 life insurance companies went out of business between 1868 and 1877, with 46 ceasing operations during the depression years of 1871 to 1874 [Figure 1]. Of these, 32 failed outright, resulting in $35 million of losses for policyholders. It was 1888 before the amount of insurance in force surpassed its 1870 peak [Figure 4].

Assessment and Fraternal Insurance Companies

Taking advantage of these problems within the industry were numerous assessment and fraternal benefit societies. Assessment or cooperative companies, as they were sometimes called, were associations in which, rather than paying an annual premium, each member was assessed a flat fee to provide the death benefit whenever another member died. The two main problems with these organizations were the uncertain number of assessments each year and the difficulty of maintaining membership levels. As members aged and death rates rose, the assessment societies found it difficult to recruit younger members willing to take on the increasing risks of assessments. By the turn of the century, most assessment companies had collapsed or reorganized as mutual companies.
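The dynamic described here can be made concrete with a little arithmetic. In the sketch below, every parameter (pool size, benefit, starting mortality, and the rate at which mortality climbs) is an invented assumption; the point is only to show how a closed, aging pool drives the per-member levy upward:

```python
# Stylized arithmetic of an assessment society that fails to recruit young
# members; every parameter is an invented assumption for illustration.

benefit = 1_000.0     # flat death benefit promised per death (assumed)
members = 10_000      # closed pool: no new, younger recruits (assumed)
mortality = 0.008     # annual death rate of the pool at the outset (assumed)

for year in range(1, 31):
    deaths = round(members * mortality)
    assessment = deaths * benefit / members  # each survivor's levy this year
    if year == 1 or year % 10 == 0:
        print(f"Year {year:2d}: {members:5d} members, ${assessment:6.2f} per member")
    members -= deaths
    mortality *= 1.07  # the surviving pool is a year older, so mortality rises
```

With each survivor's annual cost rising several-fold over three decades, the young had every reason to stay away, which is why so many societies collapsed or converted to level-premium mutual plans.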

Fraternal organizations were voluntary associations of people affiliated through ethnicity, religion, profession, or some other tie. Although fraternal societies had existed throughout the history of the United States, it was only in the postbellum era that they mushroomed in number and emerged as a major provider of life insurance, mainly for working-class Americans. While many fraternal societies initially issued insurance on an assessment basis, most soon switched to mutual insurance. By the turn of the century, the approximately 600 fraternal societies in existence provided over $5 billion in life insurance to their members, making them direct competitors of the major stock and mutual companies. Just five years later, membership was over 6 million, with $8 billion of insurance in force [Figure 4].

Industrial Life Insurance

For the few successful life insurance companies organized during the 1860s and 1870s, innovation was the only means of avoiding failure. Aware that they could not compete with the major companies in a tight market, these emerging companies concentrated on markets previously ignored by the larger life insurance organizations – looking instead to the example of the fraternal benefit societies. Beginning in the mid-1870s, companies such as the John Hancock Company (1862), the Metropolitan Life Insurance Company (1868), and the Prudential Insurance Company of America (1875) started issuing industrial life insurance. Industrial insurance, which began in England in the late 1840s, targeted lower income families by providing policies in amounts as small as $100, as opposed to the thousands of dollars normally required for ordinary insurance. Premiums ranging from $0.05 to $0.65 were collected on a weekly basis, often by agents coming door-to-door, instead of on an annual, semi-annual, or quarterly basis by direct remittance to the company. Additionally, medical examinations were often not required and policies could be written to cover all members of the family instead of just the main breadwinner. While the number of policies written skyrocketed to over 51 million by 1919, industrial insurance remained only a fraction of the amount of life insurance in force throughout the period [Figures 4 and 5].

International Expansion

The major life insurance companies also quickly expanded into the global market. While numerous firms ventured abroad as early as the 1860s and 1870s, the most rapid international growth occurred between 1885 and 1905. By 1900, the Equitable was providing insurance in almost 100 nations and territories, the New York Life in almost 50 and the Mutual in about 20. The international premium income (excluding Canada) of these Big Three life insurance companies amounted to almost $50 million in 1905, covering over $1 billion of insurance in force.

The Armstrong Committee Investigation

In response to a multitude of newspaper articles portraying extravagant spending and political payoffs by executives at the Equitable Life Assurance Society – all at the expense of their policyholders – Superintendent Francis Hendricks of the New York Insurance Department reluctantly conducted an investigation of the company in 1905. His report substantiated these allegations and prompted the New York legislature to create a special committee, known as the Armstrong Committee, to examine the conduct of all life insurance companies operating within the state. Appointed chief counsel of the investigation was future United States Supreme Court Chief Justice Charles Evans Hughes. Among the abuses uncovered by the committee were interlocking directorates, the creation of subsidiary financial institutions to evade restrictions on investments, the use of proxy voting to frustrate policyholder control of mutuals, unlimited company expenses, tremendous spending for lobbying activities, rebating (the practice of returning to a new client a portion of their first premium payment as an incentive to take out a policy), the encouragement of policy lapses, and the condoning of “twisting” (a practice whereby agents misrepresented and libeled rival firms in order to convince a policyholder to sacrifice their existing policy and replace it with one from that agent). Additionally, the committee severely chastised the New York Insurance Department for permitting such malpractice to occur and recommended the enactment of a wide array of reform measures. These revelations induced numerous other states to conduct their own investigations, including New Jersey, Massachusetts, Ohio, Missouri, Wisconsin, Tennessee, Kentucky, Minnesota, and Nebraska.

New Regulations

In 1907, the New York legislature responded to the committee’s report by issuing a series of strict regulations specifying acceptable investments, limiting lobbying practices and campaign contributions, democratizing management through the elimination of proxy voting, standardizing policy forms, and limiting agent activities including rebating and twisting. Most devastating to the industry, however, were the prohibition of deferred dividend policies and the requirement of regular dividend payments to policyholders. Nineteen other states followed New York’s lead in adopting similar legislation, but New York’s dominance of the industry enabled it to assert considerable influence over a large percentage of the nation’s insurers. The state invoked the Appleton Rule, a 1901 administrative rule devised by New York Deputy Superintendent of Insurance Henry D. Appleton that required life insurance companies to comply with New York legislation both in New York and in all other states in which they conducted business, as a condition of doing business in New York. As the Massachusetts insurance commissioner immediately recognized, “In a certain sense [New York’s] supervision will be a national supervision, as its companies do business in all the states.” The rule was officially incorporated into New York’s insurance laws in 1939 and remained both in effect and highly effective until the 1970s.

Continued Growth in the Early Twentieth Century

The Armstrong hearings and the ensuing legislation renewed public confidence in the safety of life insurance, resulting in a surge of new company organizations not seen since the 1860s. Whereas only 106 companies existed in 1904, another 288 were established in the ten years from 1905 to 1914 [Figure 1]. Life insurance in force likewise rose rapidly, increasing from $20 billion on the eve of the hearings to almost $46 billion by the end of World War I, with the share insured by the fraternal and assessment societies decreasing from 40% to less than a quarter [Figure 5].

Group Insurance

One major innovation to occur during these decades was the development of group insurance. In 1911 the Equitable Life Assurance Society wrote a policy covering the 125 employees of the Pantasote Leather Company, requiring neither individual applications nor medical examinations. The following year, the Equitable organized a group department to promote this new product and soon was insuring the employees of Montgomery Ward Company. By 1919, 29 companies wrote group policies, which amounted to over half a billion dollars’ worth of life insurance in force.

War Risk Insurance

Not included in Figure 5 is the War Risk insurance issued by the United States government during World War I. Beginning in April 1917, all active military personnel received a $4,500 insurance policy payable by the federal government in the case of death or disability. In October of the same year, the government began selling low-cost term life and disability insurance, without medical examination, to all active members of the military. War Risk insurance proved to be extremely popular during the war, reaching over $40 billion of life insurance in force by 1919. In the aftermath of the war, these term policies quickly declined to under $3 billion of life insurance in force, with many servicemen turning instead to the whole life policies offered by the stock and mutual companies. As was the case after the Civil War, life insurance sales rose dramatically after World War I, peaking at $117 billion of insurance in force in 1930. By the eve of the Great Depression there existed over 120 million life insurance policies – approximately equivalent to one policy for every man, woman, and child living in the United States at that time.

(Sharon Ann Murphy is a Ph.D. Candidate at the Corcoran Department of History, University of Virginia.)

References and Further Reading

Buley, R. Carlyle. The American Life Convention, 1906-1952: A Study in the History of Life Insurance. New York: Appleton-Century-Crofts, Inc., 1953.

Grant, H. Roger. Insurance Reform: Consumer Action in the Progressive Era. Ames, Iowa: Iowa State University Press, 1988.

Keller, Morton. The Life Insurance Enterprise, 1885-1910: A Study in the Limits of Corporate Power. Cambridge, MA: Belknap Press, 1963.

Kimball, Spencer L. Insurance and Public Policy: A Study in the Legal Implications of Social and Economic Public Policy, Based on Wisconsin Records 1835-1959. Madison, WI: University of Wisconsin Press, 1960.

Merkel, Philip L. “Going National: The Life Insurance Industry’s Campaign for Federal Regulation after the Civil War.” Business History Review 65 (Autumn 1991): 528-553.

North, Douglass. “Capital Accumulation in Life Insurance between the Civil War and the Investigation of 1905.” In Men in Business: Essays on the Historical Role of the Entrepreneur, edited by William Miller, 238-253. New York: Harper & Row Publishers, 1952.

Ransom, Roger L., and Richard Sutch. “Tontine Insurance and the Armstrong Investigation: A Case of Stifled Innovation, 1868-1905.” Journal of Economic History 47, no. 2 (June 1987): 379-390.

Stalson, J. Owen. Marketing Life Insurance: Its History in America. Cambridge, MA: Harvard University Press, 1942.

Table 1

Early American Life Insurance Companies, 1759-1844

Company | Year Chartered | Terminated | Insurance in Force in 1840
Corp. for the Relief of Poor and Distressed Widows and Children of Presbyterian Ministers (Presbyterian Ministers Fund) | 1759
Corporation for the Relief of the Widows and Children of Clergymen in the Communion of the Church of England in America (Episcopal Ministers Fund) | 1769
Insurance Company of the State of Pennsylvania | 1794 | 1798
Insurance Company of North America, PA | 1794 | 1798
United Insurance Company, NY | 1798 | 1802
New York Insurance Company | 1798 | 1802
Pennsylvania Company for Insurances on Lives and Granting Annuities | 1812 | 1872* | 691,000
New York Mechanics Life & Fire | 1812 | 1813
Dutchess County Fire, Marine & Life, NY | 1814 | 1818
Massachusetts Hospital Life Insurance Company | 1818 | 1867* | 342,000
Union Insurance Company, NY | 1818 | 1840
Aetna Insurance Company (mainly fire insurance; separate life company chartered in 1853) | 1820 | 1853
Farmers Loan & Trust Company, NY | 1822 | 1843
Baltimore Life Insurance Company | 1830 | 1867 | 750,000 (est.)
New York Life Insurance & Trust Company | 1830 | 1865* | 2,880,000
Lawrenceburg Insurance Company | 1832 | 1836
Mississippi Insurance Company | 1833 | 1837
Protection Insurance Company, Mississippi | 1833 | 1837
Ohio Life Ins. & Trust Co. (life policies appear to have been reinsured with New York Life & Trust in the late 1840s) | 1834 | 1857 | 54,000
New England Mutual Life Insurance Company, Massachusetts (did not begin issuing policies until 1844) | 1835 | | 0
Ocean Mutual, Louisiana | 1835 | 1839
Southern Life & Trust, Alabama | 1836 | 1840
American Life Insurance & Trust Company, Baltimore | 1836 | 1840
Girard Life Insurance, Annuity & Trust Company, Pennsylvania | 1836 | 1894 | 723,000
Missouri Life & Trust | 1837 | 1841
Missouri Mutual | 1837 | 1841
Globe Life Insurance, Trust & Annuity Company, Pennsylvania | 1837 | 1857
Odd Fellow Life Insurance and Trust Company, Pennsylvania | 1840 | 1857
National of Pennsylvania | 1841 | 1852
Mutual Life Insurance Company of New York | 1842
New York Life Insurance Company | 1843
State Mutual Life Assurance Company, Massachusetts | 1844

*Date company ceased writing life insurance.

Citation: Murphy, Sharon. “Life Insurance in the United States through World War I”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2002. URL http://eh.net/encyclopedia/life-insurance-in-the-united-states-through-world-war-i/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year | Weeks Report | Aldrich Report
1830 | 69.1 |
1840 | 67.1 | 68.4
1850 | 65.5 | 69.0
1860 | 62.0 | 66.0
1870 | 61.1 | 63.0
1880 | 60.7 | 61.8
1890 | | 60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year | Census of Manufacturing | Jones Manufacturing | Owen Nonstudent Males | Greis Manufacturing | Greis All Workers | Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which there is available information. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1900 anthracite coal miners’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year | Manufacturing | Construction | Railroads | Bituminous Coal | Anthracite Coal
1850s | about 66 | | about 66 | |
1870s | about 62 | | about 60 | |
1890 | 60.0 | 51.3 | | |
1900 | 59.6 | 50.3 | 52.3 | 42.8 | 35.8
1910 | 57.3 | 45.2 | 51.5 | 38.9 | 43.3
1920 | 51.2 | 43.8 | 46.8 | 39.3 | 43.2
1930 | 50.6 | 42.9 | | 33.3 | 37.0
1940 | 37.6 | 42.5 | | 27.8 | 27.2
1955 | 38.5 | 37.1 | | 32.4 | 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. In addition, Coleman and Pencavel find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee in the U.S. fell from 1,908 to 1,704 between 1950 and 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, greater than in Denmark, and less than in the USSR.

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity | US Men 1965/1981 | US Women 1965/1981 | USSR (Pskov) Men 1965/1981 | USSR (Pskov) Women 1965/1981
Total Work | 63.1/57.8 | 60.9/54.4 | 64.4/65.7 | 75.3/66.3
Market Work | 51.6/44.0 | 18.9/23.9 | 54.6/53.8 | 43.8/39.3
Commuting | 4.8/3.5 | 1.6/2.0 | 4.9/5.2 | 3.7/3.4
Housework | 11.5/13.8 | 41.8/30.5 | 9.8/11.9 | 31.5/27.0

Activity | Japan Men 1965/1985 | Japan Women 1965/1985 | Denmark Men 1964/1987 | Denmark Women 1964/1987
Total Work | 60.5/55.5 | 64.7/55.6 | 45.4/46.2 | 43.4/43.9
Market Work | 57.7/52.0 | 33.2/24.6 | 41.7/33.4 | 13.3/20.8
Commuting | 3.6/4.5 | 1.0/1.2 | n.a. | n.a.
Housework | 2.8/3.5 | 31.5/31.0 | 3.7/12.8 | 30.1/23.1

Source: Juster and Stafford (1991)

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours ... only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War, reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1866. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours, and by the late 1860s efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight-hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight-hours literally blew up in organized labor’s face. At Haymarket Square in Chicago an anarchist bomb and the ensuing violence killed seven policemen during an eight-hour rally, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law in 1874 set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912) was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, which provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later, LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period: the Adamson Act of 1916, passed to counter a threatened nationwide strike, which granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or Saturday off – especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, up from only 32 in 1920. The most notable convert was Henry Ford, who adopted the five-day week in 1926; Ford’s employees accounted for more than half of the nation’s approximately 400,000 workers on five-day schedules. Many employers questioned Ford’s motives, however, arguing that the productivity gains from reducing hours ceased once the workweek fell below about forty-eight hours. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours Reductions during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing the remaining work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions – which guaranteed union organization and collective bargaining – to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933 the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act (FLSA), which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level, the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers did not desire a shorter workweek.

The Case of Kellogg’s

Offsetting the isolated examples of hours reductions after World War II were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. In 1946, with the war over, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to 8-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an 8-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told of the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation of being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups – but at a glacial pace. Some Americans complain about a lack of free time, but the vast majority seem content with an average workweek of roughly forty hours, channeling almost all of their rising earning power into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance – common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue, as well as worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit, and few will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs – some offering shorter hours and lower earnings, others offering longer hours and higher earnings.

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.
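This mechanism can be made concrete with a stylized model. The following sketch is a minimal illustration only: the Stone-Geary utility function and every parameter value are assumptions chosen to reproduce the qualitative pattern, not estimates drawn from the historical record.

# Stylized labor-leisure choice illustrating why rising wages can "buy"
# shorter workweeks. Preferences are Stone-Geary: utility depends on
# consumption above a subsistence level c0 and on leisure. All numbers
# are hypothetical, chosen only for illustration.

def chosen_weekly_hours(wage, alpha=0.35, subsistence=10.0, time_budget=112):
    """Hours maximizing U = (wage*h - c0)^alpha * (T - h)^(1-alpha).

    The closed-form solution is h* = alpha*T + (1 - alpha)*c0/wage:
    at low wages, covering subsistence forces long hours; as wages
    rise, chosen hours fall toward alpha*T.
    """
    return alpha * time_budget + (1 - alpha) * subsistence / wage

for w in (0.15, 0.30, 0.60, 1.20):  # hypothetical hourly wages, in dollars
    print(f"wage ${w:.2f}/hr -> chosen workweek ~{chosen_weekly_hours(w):.0f} hours")

In this toy economy, covering subsistence forces a workweek above eighty hours when wages are very low; as wages rise, chosen hours fall toward the low forties – the same direction of change that the surveyed economic historians attribute to economic growth.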

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces had already lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
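As a back-of-the-envelope reading of those elasticities (the 50-hour baseline workweek is a rough figure for 1919, and the arithmetic is only illustrative):

# Implied workweek differences from the wage-hours elasticities above.
# A city with wages 10 percent above another's would be expected to
# have a workweek shorter by roughly:

baseline_week = 50.0                  # hours, a rough 1919 figure
for elasticity in (-0.05, -0.13):     # range reported by Whaples (1990a)
    pct_change = elasticity * 10      # percent change for a 10% wage gap
    print(f"elasticity {elasticity}: about {pct_change:+.1f}% "
          f"(~{baseline_week * pct_change / 100:+.2f} hours per week)")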

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers – workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996) is important because it does exactly this – interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work’: Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

A History of Futures Trading in the United States

Joseph Santos, South Dakota State University

Many contemporary [nineteenth century] critics were suspicious of a form of business in which one man sold what he did not own to another who did not want it… Morton Rothstein (1966)

Anatomy of a Futures Market

The Futures Contract

A futures contract is a standardized agreement between a buyer and a seller to exchange an amount and grade of an item at a specific price and future date. The item or underlying asset may be an agricultural commodity, a metal, mineral or energy commodity, a financial instrument or a foreign currency. Because futures contracts are derived from these underlying assets, they belong to a family of financial instruments called derivatives.

Traders buy and sell futures contracts on an exchange – a marketplace that is operated by a voluntary association of members. The exchange provides buyers and sellers the infrastructure (trading pits or their electronic equivalent), legal framework (trading rules, arbitration mechanisms), contract specifications (grades, standards, time and method of delivery, terms of payment) and clearing mechanisms (see section titled The Clearinghouse) necessary to facilitate futures trading. Only exchange members are allowed to trade on the exchange. Nonmembers trade through commission merchants – exchange members who service nonmember trades and accounts for a fee.

The September 2004 light sweet crude oil contract is an example of a petroleum (mineral) future. It trades on the New York Mercantile Exchange (NYM). The contract is standardized – every one is an agreement to trade 1,000 barrels of grade light sweet crude in September, on a day of the seller’s choosing. As of May 25, 2004 the contract sold for $40,120 = $40.12 × 1,000 barrels.

The Clearinghouse

The clearinghouse is the counterparty to every trade – its members buy every contract that traders sell on the exchange and sell every contract that traders buy on the exchange. Absent a clearinghouse, traders would interact directly, and this would introduce two problems. First, traders’ concerns about their counterparty’s credibility would impede trading. For example, Trader A might refuse to sell to Trader B, who is supposedly untrustworthy.

Second, traders would lose track of their counterparties. This would occur because traders typically settle their contractual obligations by offset – traders buy/sell the contracts that they sold/bought earlier. For example, Trader A sells a contract to Trader B, who sells a contract to Trader C to offset her position, and so on.

The clearinghouse eliminates both of these problems. First, it is a guarantor of all trades. If a trader defaults on a futures contract, the clearinghouse absorbs the loss. Second, clearinghouse members, and not outside traders, reconcile offsets at the end of trading each day. Margin accounts and a process called marking-to-market all but assure the clearinghouse’s solvency.

A margin account is a balance that a trader maintains with a commission merchant in order to offset the trader’s daily unrealized losses in the futures markets. Commission merchants also maintain margins with clearinghouse members, who maintain them with the clearinghouse. The margin account begins as an initial lump sum deposit, or original margin.

To understand the mechanics and merits of marking-to-market, consider that the values of the long and short positions of an existing futures contract change daily, even though futures trading is a zero-sum game – a buyer’s gain/loss equals a seller’s loss/gain. So, the clearinghouse breaks even on every trade, while its individual members’ positions change in value daily.

With this in mind, suppose Trader B buys a 5,000 bushel soybean contract for $9.70 per bushel from Trader S. Technically, Trader B buys the contract from Clearinghouse Member S and Trader S sells the contract to Clearinghouse Member B. Now, suppose that at the end of the day the contract is priced at $9.71 per bushel. That evening the clearinghouse marks-to-market each member’s account. That is to say, the clearinghouse credits Member B’s margin account $50 = $0.01 × 5,000 and debits Member S’s margin account the same amount.

Member B is now in a position to draw $50 from the clearinghouse, while Member S must pay the clearinghouse a $50 variation margin – incremental margin equal to the difference between a contract’s price and its current market value. In turn, clearinghouse members debit and credit accordingly the margin accounts of their commission merchants, who do the same to the margin accounts of their clients (i.e., traders). This iterative process all but assures the clearinghouse a sound financial footing. In the unlikely event that a trader defaults, the clearinghouse closes out the position and loses, at most, the trader’s one-day loss.
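The daily bookkeeping can be summarized in a short sketch. It extends the soybean example above over several days; the settlement prices after the first day and the original margin deposits are hypothetical.

# Minimal sketch of daily marking-to-market for one 5,000-bushel soybean
# contract, following the Trader B / Trader S example above. Prices are
# in dollars per bushel; only the first day's prices come from the text.

CONTRACT_SIZE = 5_000  # bushels

def variation_margin(prev_price, settle_price, size=CONTRACT_SIZE):
    """Dollars credited to the long (and debited from the short)."""
    return (settle_price - prev_price) * size

trade_price = 9.70
daily_settles = [9.71, 9.68, 9.74]     # hypothetical settlement prices
long_margin = short_margin = 1_000.00  # hypothetical original margins

prev = trade_price
for day, settle in enumerate(daily_settles, start=1):
    vm = variation_margin(prev, settle)
    long_margin += vm    # Member B (long) is credited when prices rise
    short_margin -= vm   # Member S (short) is debited the same amount
    prev = settle
    print(f"day {day}: settle ${settle:.2f}, variation margin ${vm:+,.2f}, "
          f"long margin ${long_margin:,.2f}, short margin ${short_margin:,.2f}")

Because each day’s loser pays before trading resumes, a default costs the clearinghouse at most a single day’s price move, as noted above.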

Active Futures Markets

Futures exchanges create futures contracts. And, because futures exchanges compete for traders, they must create contracts that appeal to the financial community. For example, the New York Mercantile Exchange created its light sweet crude oil contract in order to fill an unexploited niche in the financial marketplace.

Not all contracts are successful, and those that are may, at times, be inactive – the contract exists, but traders are not trading it. For example, of all contracts introduced by U.S. exchanges between 1960 and 1977, only 32% traded in 1980 (Stein 1986, 7). Similarly, entire exchanges can become active – e.g., the New York Futures Exchange opened in 1980 – or inactive – e.g., the New Orleans Exchange closed in 1983 (Leuthold 1989, 18). Government price supports or other such regulation can also render trading inactive (see Carlton 1984, 245).

Futures contracts succeed or fail for many reasons, but successful contracts do share certain basic characteristics (see for example, Baer and Saxon 1949, 110-25; Hieronymus 1977, 19-22). To wit, the underlying asset is homogeneous, reasonably durable, and standardized (easily describable); its supply and demand are ample; its price is unfettered; and all relevant information is available to all traders. For example, futures contracts have never derived from, say, artwork (heterogeneous and not standardized) or rent-controlled housing rights (whose supply, and hence price, is fettered by regulation).

Purposes and Functions

Futures markets have three fundamental purposes. The first is to enable hedgers to shift price risk – asset price volatility – to speculators in return for basis risk – changes in the difference between a futures price and the cash, or current spot price of the underlying asset. Because basis risk is typically less than asset price risk, the financial community views hedging as a form of risk management and speculating as a form of risk taking.

Generally speaking, to hedge is to take opposing positions in the futures and cash markets. Hedgers include (but are not restricted to) farmers, feedlot operators, grain elevator operators, merchants, millers, utilities, export and import firms, refiners, lenders, and hedge fund managers (see Peck 1985, 13-21). Meanwhile, to speculate is to take a position in the futures market with no counter-position in the cash market. Speculators may not be affiliated with the underlying cash markets.

To demonstrate how a hedge works, assume Hedger A buys, or longs, 5,000 bushels of corn, which is currently worth $2.40 per bushel, or $12,000 = $2.40 × 5,000; the date is May 1st and Hedger A wishes to preserve the value of his corn inventory until he sells it on June 1st. To do so, he takes a position in the futures market that is exactly opposite his position in the spot – current cash – market. For example, Hedger A sells, or shorts, a July futures contract for 5,000 bushels of corn at a price of $2.50 per bushel; put differently, Hedger A commits to sell in July 5,000 bushels of corn for $12,500 = $2.50 × 5,000. Recall that to sell (buy) a futures contract means to commit to sell (buy) an amount and grade of an item at a specific price and future date.

Absent basis risk, Hedger A’s spot and futures markets positions will preserve the value of the 5,000 bushels of corn that he owns, because a fall in the spot price of corn will be matched penny for penny by a fall in the futures price of corn. For example, suppose that by June 1st the spot price of corn has fallen five cents to $2.35 per bushel. Absent basis risk, the July futures price of corn has also fallen five cents to $2.45 per bushel.

So, on June 1st, Hedger A sells his 5,000 bushels of corn and loses $250 = ($2.40 - $2.35) × 5,000 in the spot market. At the same time, he buys a July futures contract for 5,000 bushels of corn and gains $250 = ($2.50 - $2.45) × 5,000 in the futures market. Notice, because Hedger A has both sold and bought a July futures contract for 5,000 bushels of corn, he has offset his commitment in the futures market.
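The same textbook hedge can be restated as a computation, using the prices from the example above:

# The corn hedge from the example, restated as arithmetic. A long cash
# position (5,000 bushels owned) is paired with a short July futures
# position of equal size; absent basis risk, the two legs offset.

BUSHELS = 5_000

spot_may, spot_june = 2.40, 2.35            # cash prices, $/bushel
futures_may, futures_june = 2.50, 2.45      # July futures prices, $/bushel

spot_result = (spot_june - spot_may) * BUSHELS           # -$250 on the corn
futures_result = (futures_may - futures_june) * BUSHELS  # +$250 on the short

print(f"spot market:    ${spot_result:+,.0f}")
print(f"futures market: ${futures_result:+,.0f}")
print(f"net:            ${spot_result + futures_result:+,.0f}")  # $0: value preserved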

This example of a textbook hedge – one that eliminates price risk entirely – is instructive but it is also a bit misleading because: basis risk exists; hedgers may choose to hedge more or less than 100% of their cash positions; and hedgers may cross hedge – trade futures contracts whose underlying assets are not the same as the assets that the hedger owns. So, in reality hedgers cannot immunize entirely their cash positions from market fluctuations and in some cases they may not wish to do so. Again, the purpose of a hedge is not to avoid risk, but rather to manage or even profit from it.

The second fundamental purpose of a futures market is to facilitate firms’ acquisitions of operating capital – short term loans that finance firms’ purchases of intermediate goods such as inventories of grain or petroleum. For example, lenders are relatively more likely to finance, at or near prime lending rates, hedged (versus non-hedged) inventories. The futures contract is an efficient form of collateral because it costs only a fraction of the inventory’s value – namely, the margin on a short position in the futures market.

Speculators make the hedge possible because they absorb the inventory’s price risk; for example, the ultimate counterparty to the inventory dealer’s short position is a speculator. In the absence of futures markets, hedgers could only engage in forward contracts – unique agreements between private parties, who operate independently of an exchange or clearinghouse. Hence, the collateral value of a forward contract is less than that of a futures contract.3

The third fundamental purpose of a futures market is to provide information to decision makers regarding the market’s expectations of future economic events. So long as a futures market is efficient – the market forms expectations by taking into proper consideration all available information – its forecasts of future economic events are relatively more reliable than an individual’s. Forecast errors are expensive, and well informed, highly competitive, profit-seeking traders have a relatively greater incentive to minimize them.

The Evolution of Futures Trading in the U.S.

Early Nineteenth Century Grain Production and Marketing

Into the early nineteenth century, the vast majority of American grains – wheat, corn, barley, rye and oats – were produced throughout the hinterlands of the United States by producers who acted primarily as subsistence farmers – agricultural producers whose primary objective was to feed themselves and their families. Although many of these farmers sold their surplus production on the market, most lacked access to large markets, as well as the incentive, affordable labor supply, and myriad technologies necessary to practice commercial agriculture – the large scale production and marketing of surplus agricultural commodities.

At this time, the principal trade route to the Atlantic seaboard was by river through New Orleans4; though the South was also home to terminal markets – markets of final destination – for corn, provisions and flour. Smaller local grain markets existed along the tributaries of the Ohio and Mississippi Rivers and east-west overland routes. The latter were used primarily to transport manufactured (high valued and nonperishable) goods west.

Most farmers, and particularly those in the East North Central States – the region consisting today of Illinois, Indiana, Michigan, Ohio and Wisconsin – could not ship bulk grains to market profitably (Clark 1966, 4, 15).5 Instead, most converted grains into relatively high value flour, livestock, provisions and whiskies or malt liquors and shipped them south or, in the case of livestock, drove them east (14).6 Oats traded locally, if at all; their low value-to-weight ratios made their shipment, in bulk or otherwise, prohibitive (15n).

The Great Lakes provided a natural water route east to Buffalo but, in order to ship grain this way, producers in the interior East North Central region needed local ports to receive their production. Although the Erie Canal connected Lake Erie to the port of New York by 1825, water routes that connected local interior ports throughout northern Ohio to the Canal were not operational prior to the mid-1830s. Indeed, initially the Erie aided the development of the Old Northwest, not because it facilitated eastward grain shipments, but rather because it allowed immigrants and manufactured goods easy access to the West (Clark 1966, 53).

By 1835 the mouths of rivers and streams throughout the East North Central States had become the hubs, or port cities, from which farmers shipped grain east via the Erie. By this time, shippers could also opt to go south on the Ohio River and then upriver to Pittsburgh and ultimately to Philadelphia, or north on the Ohio Canal to Cleveland, Buffalo and ultimately, via the Welland Canal, to Lake Ontario and Montreal (19).

By 1836 shippers carried more grain north on the Great Lakes and through Buffalo than south on the Mississippi through New Orleans (Odle 1964, 441). As late as 1840, though, Ohio was the only state/region that participated significantly in the Great Lakes trade. Illinois, Indiana, Michigan, and the region of modern day Wisconsin either produced for their respective local markets or relied upon Southern demand. As of 1837 only 4,107 residents populated the “village” of Chicago, which became an official city in that year (Hieronymus 1977, 72).7

Antebellum Grain Trade Finance in the Old Northwest

Before the mid-1860s, a network of banks, grain dealers, merchants, millers and commission houses – buying and selling agents located in the central commodity markets – employed an acceptance system to finance the U.S. grain trade (see Clark 1966, 119; Odle 1964, 442). For example, a miller who required grain would instruct an agent in, say, New York to establish, on the miller’s behalf, a line of credit with a merchant there. The merchant extended this line of credit in the form of sight drafts, which the merchant made payable, in sixty or ninety days, up to the amount of the line of credit.

With this credit line established, commission agents in the hinterland would arrange with grain dealers to acquire the necessary grain. The commission agent would obtain warehouse receipts – dealer certified negotiable titles to specific lots and quantities of grain in store – from dealers, attach these to drafts that he drew on the merchant’s line of credit, and discount these drafts at his local bank in return for banknotes; the local bank would forward these drafts on to the New York merchant’s bank for redemption. The commission agents would use these banknotes to advance – lend – grain dealers roughly three quarters of the current market value of the grain. The commission agent would pay dealers the remainder (minus finance and commission fees) when the grain was finally sold in the East. That is, commission agents and grain dealers entered into consignment contracts.

Unfortunately, this approach linked banks, grain dealers, merchants, millers and commission agents such that the “entire procedure was attended by considerable risk and speculation, which was assumed by both the consignee and consignor” (Clark 1966, 120). The system was reasonably adequate if grain prices went unchanged between the time the miller procured the credit and the time the grain (bulk or converted) was sold in the East, but this was rarely the case. The fundamental problem with this system of finance was that commission agents were effectively asking banks to lend them money to purchase as yet unsold grain. To be sure, this inadequacy was most apparent during financial panics, when many banks refused to discount these drafts (Odle 1964, 447).

Grain Trade Finance in Transition: Forward Contracts and Commodity Exchanges

In 1848 the Illinois-Michigan Canal connected the Illinois River to Lake Michigan. The canal enabled farmers in the hinterlands along the Illinois River to ship their produce to merchants located along the river. These merchants accumulated, stored and then shipped grain to Chicago, Milwaukee and Racine. At first, shippers tagged deliverables according to producer and region, while purchasers inspected and chose these tagged bundles upon delivery. Commercial activity at the three grain ports grew throughout the 1850s. Chicago emerged as a dominant grain (primarily corn) hub later that decade (Pierce 1957, 66).8

Amidst this growth of Lake Michigan commerce, a confluence of innovations transformed the grain trade and its method of finance. By the 1840s, grain elevators and railroads facilitated high volume grain storage and shipment, respectively. Consequently, country merchants and their Chicago counterparts required greater financing in order to store and ship this higher volume of grain.9 And, high volume grain storage and shipment required that inventoried grains be fungible – of such a nature that one part or quantity could be replaced by another equal part or quantity in the satisfaction of an obligation. For example, because a bushel of grade No. 2 Spring Wheat was fungible, its price did not depend on whether it came from Farmer A, Farmer B, Grain Elevator C, or Train Car D.

Merchants could secure these larger loans more easily and at relatively lower rates if they obtained firm price and quantity commitments from their buyers. So, merchants began to engage in forward (not futures) contracts. According to Hieronymus (1977), the first such “time contract” on record was made on March 13, 1851. It specified that 3,000 bushels of corn were to be delivered to Chicago in June at a price of one cent below the March 13th cash market price (74).10

Meanwhile, commodity exchanges serviced the trade’s need for fungible grain. In the 1840s and 1850s these exchanges emerged as associations for dealing with local issues such as harbor infrastructure and commercial arbitration (e.g., Detroit in 1847, Buffalo, Cleveland and Chicago in 1848 and Milwaukee in 1849) (see Odle 1964). By the 1850s they established a system of staple grades, standards and inspections, all of which rendered inventory grain fungible (Baer and Saxon 1949, 10; Chandler 1977, 211). As collection points for grain, cotton, and provisions, they weighed, inspected and classified commodity shipments that passed from west to east. They also facilitated organized trading in spot and forward markets (Chandler 1977, 211; Odle 1964, 439).11

The largest and most prominent of these exchanges was the Board of Trade of the City of Chicago, a grain and provisions exchange established in 1848 by a State of Illinois corporate charter (Boyle 1920, 38; Lurie 1979, 27); the exchange is known today as the Chicago Board of Trade (CBT). For at least its first decade, the CBT functioned as a meeting place for merchants to resolve contract disputes and discuss commercial matters of mutual concern. Participation was part-time at best. The Board’s first directorate of 25 members included “a druggist, a bookseller, a tanner, a grocer, a coal dealer, a hardware merchant, and a banker” and attendance was often encouraged by free lunches (Lurie 1979, 25).

However, in 1859 the CBT became a state- (of Illinois) chartered private association. As such, the exchange requested and received from the Illinois legislature sanction to establish rules “for the management of their business and the mode in which it shall be transacted, as they may think proper;” to arbitrate over and settle disputes with the authority as “if it were a judgment rendered in the Circuit Court;” and to inspect, weigh and certify grain and grain trades such that these certifications would be binding upon all CBT members (Lurie 1979, 27).

Nineteenth Century Futures Trading

By the 1850s traders sold and resold forward contracts prior to actual delivery (Hieronymus 1977, 75). A trader could not offset, in the futures market sense of the term, a forward contract. Nonetheless, the existence of a secondary market – a market for extant, as opposed to newly issued, securities – in forward contracts suggests, if nothing else, that speculators were active in these early time contracts.

On March 27, 1863, the Chicago Board of Trade adopted its first rules and procedures for trade in forwards on the exchange (Hieronymus 1977, 76). The rules addressed contract settlement, which was (and still is) the fundamental challenge associated with a forward contract – finding a trader who was willing to take a position in a forward contract was relatively easy to do; finding that trader at the time of contract settlement was not.

The CBT began to transform actively traded and reasonably homogeneous forward contracts into futures contracts in May, 1865. At this time, the CBT: restricted trade in time contracts to exchange members; standardized contract specifications; required traders to deposit margins; and specified formally contract settlement, including payments and deliveries, and grievance procedures (Hieronymus 1977, 76).

The inception of organized futures trading is difficult to date. This is due, in part, to semantic ambiguities – e.g., was a “to arrive” contract a forward contract or a futures contract or neither? However, most grain trade historians agree that storage (grain elevators), shipment (railroad), and communication (telegraph) technologies, a system of staple grades and standards, and the impetus to speculation provided by the Crimean and U.S. Civil Wars enabled futures trading to ripen by about 1874, at which time the CBT was the U.S.’s premier organized commodities (grain and provisions) futures exchange (Baer and Saxon 1949, 87; Chandler 1977, 212; CBT 1936, 18; Clark 1966, 120; Dies 1925, 15; Hoffman 1932, 29; Irwin 1954, 77, 82; Rothstein 1966, 67).

Nonetheless, futures exchanges in the mid-1870s lacked modern clearinghouses, with which most exchanges began to experiment only in the mid-1880s. For example, the CBT’s clearinghouse got its start in 1884, and a complete and mandatory clearing system was in place at the CBT by 1925 (Hoffman 1932, 199; Williams 1982, 306). The earliest formal clearing and offset procedures were established by the Minneapolis Grain Exchange in 1891 (Peck 1985, 6).

Even so, rudiments of a clearing system – one that freed traders from dealing directly with one another – were in place by the 1870s (Hoffman 1920, 189). That is to say, brokers assumed the counter-position to every trade, much as clearinghouse members would do decades later. Brokers settled offsets between one another, though in the absence of a formal clearing procedure these settlements were difficult to accomplish.

Direct settlements were simple enough. Here, two brokers would settle in cash their offsetting positions between one another only. Nonetheless, direct settlements were relatively uncommon because offsetting purchases and sales between brokers rarely balanced with respect to quantity. For example, B1 might buy a 5,000 bushel corn future from B2, who then might buy a 6,000 bushel corn future from B1; in this example, 1,000 bushels of corn remain unsettled between B1 and B2. Of course, the two brokers could offset the remaining 1,000 bushel contract if B2 sold a 1,000 bushel corn future to B1. But what if B2 had already sold a 1,000 bushel corn future to B3, who had sold a 1,000 bushel corn future to B1? In this case, each broker’s net futures market position is offset, but all three must meet in order to settle their respective positions. Brokers referred to such a meeting as a ring settlement. Finally, if, in this example, B1 and B3 did not have positions with each other, B2 could settle her position if she transferred her commitment (which she has with B1) to B3. Brokers referred to this method as a transfer settlement. In either ring or transfer settlements, brokers had to find other brokers who held and wished to settle open counter-positions. Often brokers used runners to search literally the offices and corridors for the requisite counter-parties (see Hoffman 1932, 185-200).
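The netting problem behind ring and transfer settlements can be made concrete. The sketch below replays the B1-B2-B3 example: every broker’s purchases and sales cancel overall, yet no two brokers balance bilaterally, which is why a ring settlement (or, decades later, a clearinghouse) was needed. The code is purely illustrative; settlements in the 1870s were, of course, arranged by hand.

# Netting the broker positions from the example above. Each trade is
# recorded as (buyer, seller, bushels).

from collections import defaultdict

trades = [
    ("B1", "B2", 5_000),  # B1 buys a 5,000 bushel corn future from B2
    ("B2", "B1", 6_000),  # B2 buys a 6,000 bushel corn future from B1
    ("B3", "B2", 1_000),  # B2 sells 1,000 bushels to B3
    ("B1", "B3", 1_000),  # B3 sells 1,000 bushels to B1, closing the ring
]

net = defaultdict(int)  # positive = net long (bushels); negative = net short
for buyer, seller, qty in trades:
    net[buyer] += qty
    net[seller] -= qty

for broker in sorted(net):
    print(f"{broker}: net position {net[broker]:+,} bushels")

# All three positions print as +0: each broker is offset overall, yet no
# pair of brokers balances bilaterally, so all three must meet to settle.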

Finally, the transformation in Chicago grain markets from forward to futures trading occurred almost simultaneously in New York cotton markets. Forward contracts for cotton traded in New York (and Liverpool, England) by the 1850s. And, like Chicago, organized trading in cotton futures began on the New York Cotton Exchange in about 1870; rules and procedures formalized the practice in 1872. Futures trading on the New Orleans Cotton Exchange began around 1882 (Hieronymus 1977, 77).

Other successful nineteenth century futures exchanges include the New York Produce Exchange, the Milwaukee Chamber of Commerce, the Merchant’s Exchange of St. Louis, the Chicago Open Board of Trade, the Duluth Board of Trade, and the Kansas City Board of Trade (Hoffman 1920, 33; see Peck 1985, 9).

Early Futures Market Performance

Volume

Data on grain futures volume prior to the 1880s are not available (Hoffman 1932, 30). Though in the 1870s “[CBT] officials openly admitted that there was no actual delivery of grain in more than ninety percent of contracts” (Lurie 1979, 59). Indeed, Chart 1 demonstrates that trading was relatively voluminous in the nineteenth century.

An annual average of 23,600 million bushels of grain futures traded between 1884 and 1888, or eight times the annual average amount of crops produced during that period. By comparison, an annual average of 25,803 million bushels of grain futures traded between 1966 and 1970, or four times the annual average amount of crops produced during that period. In 2002, futures volume outnumbered crop production by a factor of eleven.

The comparable data for cotton futures are presented in Chart 2. Again here, trading in the nineteenth century was significant. To wit, by 1879 futures volume had outnumbered production by a factor of five, and by 1896 this factor had reached eight.

Price of Storage

Nineteenth century observers of early U.S. futures markets either credited them for stabilizing food prices, or discredited them for wagering on, and intensifying, the economic hardships of Americans (Baer and Saxon 1949, 12-20, 56; Chandler 1977, 212; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115). To be sure, the performance of early futures markets remains relatively unexplored. The extant research on the subject has generally examined this performance in the context of two perspectives on the theory of efficiency: the price of storage and futures price efficiency more generally.

Holbrook Working pioneered research into the price of storage – the relationship, at a point in time, between prices (of storable agricultural commodities) applicable to different future dates (Working 1949, 1254).12 For example, what is the relationship between the current spot price of wheat and the current September 2004 futures price of wheat? Or, what is the relationship between the current September 2004 futures price of wheat and the current May 2005 futures price of wheat?

Working reasoned that these prices could not differ because of events that were expected to occur between these dates. For example, if the May 2004 wheat futures price is less than the September 2004 price, this cannot be due to, say, the expectation of a small harvest between May 2004 and September 2004. On the contrary, traders should factor such an expectation into both May and September prices. And, assuming that they do, then this difference can only reflect the cost of carrying – storing – these commodities over time.13 Though this strict interpretation has since been modified somewhat (see Peck 1985, 44).

So, for example, the September 2004 price equals the May 2004 price plus the cost of storing wheat between May 2004 and September 2004. If the difference between these prices is greater or less than the cost of storage, and the market is efficient, arbitrage will bring the difference back to the cost of storage – e.g., if the difference in prices exceeds the cost of storage, then traders can profit if they buy the May 2004 contract, sell the September 2004 contract, take delivery in May and store the wheat until September. Working (1953) demonstrated empirically that the theory of the price of storage could explain quite satisfactorily these inter-temporal differences in wheat futures prices at the CBT as early as the late 1880s (Working 1953, 556).
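The arbitrage that enforces the price-of-storage relationship can be sketched as a simple decision rule. All prices and the storage cost below are hypothetical:

# Cash-and-carry arbitrage under the price-of-storage theory. If the
# September-May price spread exceeds the cost of carry, buy the May
# contract, take delivery, store, and deliver against the September
# contract. Prices are hypothetical dollars per bushel of wheat.

f_may = 3.40    # May futures price
f_sep = 3.60    # September futures price
carry = 0.12    # cost of storing one bushel from May to September

spread = f_sep - f_may
if spread > carry:
    print(f"spread ${spread:.2f} > carry ${carry:.2f}: buy May, sell "
          f"September, store; lock in ${spread - carry:.2f} per bushel")
elif spread < carry:
    print("spread below carry: no profitable cash-and-carry trade")
else:
    print("spread equals carry: prices consistent with the theory")

As traders exploit such a gap, their buying of the May contract and selling of the September contract push the spread back toward the cost of storage.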

Futures Price Efficiency

Many contemporary economists tend to focus on futures price efficiency more generally (for example, Beck 1994; Kahl and Tomek 1986; Kofi 1973; McKenzie, et al. 2002; Tomek and Gray, 1970). That is to say, do futures prices shadow consistently (but not necessarily equal) traders’ rational expectations of future spot prices? Here, the research focuses on the relationship between, say, the cash price of wheat in September 2004 and the September 2004 futures price of wheat quoted two months earlier in July 2004.

Figure 1 illustrates the behavior of corn futures prices and their corresponding spot prices between 1877 and 1890. The data consist of the average month t futures price in the last full week of month t-2 and the average cash price in the first full week of month t.

The futures price and its corresponding spot price need not be equal; futures price efficiency does not mean that the futures market is clairvoyant. But, a difference between the two series should exist only because of an unpredictable forecast error and a risk premium – futures prices may be, say, consistently below the expected future spot price if long speculators require an inducement, or premium, to enter the futures market. Recent work finds strong evidence that these early corn (and corresponding wheat) futures prices are, in the long run, efficient estimates of their underlying spot prices (Santos 2002, 35). Although these results and Working’s empirical studies on the price of storage support, to some extent, the notion that early U.S. futures markets were efficient, this question remains largely unexplored by economic historians.
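A stylized version of such an efficiency test regresses realized spot prices on the futures prices quoted two months earlier; efficiency with a constant risk premium implies a slope near one, with the intercept absorbing the premium. The numbers in the sketch below are invented for illustration, and the actual historical series call for more careful time-series methods than this simple regression.

# Stylized forecast-efficiency check: regress the realized spot price
# S(t) on the futures price F(t-2) quoted two months earlier. The data
# are invented; units are dollars per bushel.

import numpy as np

futures = np.array([0.48, 0.52, 0.45, 0.50, 0.55, 0.47, 0.53, 0.49])
spot    = np.array([0.50, 0.51, 0.44, 0.52, 0.56, 0.46, 0.52, 0.50])

slope, intercept = np.polyfit(futures, spot, 1)   # ordinary least squares
errors = spot - (intercept + slope * futures)     # forecast errors

print(f"slope ~ {slope:.2f} (efficiency implies a value near 1)")
print(f"intercept ~ {intercept:+.3f} (absorbs any constant risk premium)")
print(f"mean forecast error ~ {errors.mean():+.4f} (should be near 0)")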

The Struggle for Legitimacy

Nineteenth century America was both fascinated and appalled by futures trading. This is apparent from the litigation and many public debates surrounding its legitimacy (Baer and Saxon 1949, 55; Buck 1913, 131, 271; Hoffman 1932, 29, 351; Irwin 1954, 80; Lurie 1979, 53, 106). Many agricultural producers, the lay community and, at times, legislatures and the courts, believed trading in futures was tantamount to gambling. The difference between the latter and speculating, which required the purchase or sale of a futures contract but not the shipment or delivery of the commodity, was ostensibly lost on most Americans (Baer and Saxon 1949, 56; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115).

Many Americans believed that futures traders frequently manipulated prices. From the end of the Civil War until 1879 alone, corners – control of enough of the available supply of a commodity to manipulate its price – allegedly occurred with varying degrees of success in wheat (1868, 1871, 1878/9), corn (1868), oats (1868, 1871, 1874), rye (1868) and pork (1868) (Boyle 1920, 64-65). This manipulation continued throughout the century and culminated in the Three Big Corners – the Hutchinson (1888), the Leiter (1898), and the Patten (1909). The Patten corner was later debunked (Boyle 1920, 67-74), while the Leiter corner was the inspiration for Frank Norris’s classic The Pit: A Story of Chicago (Norris 1903; Rothstein 1982, 60).14 In any case, reports of market corners on America’s early futures exchanges were likely exaggerated (Boyle 1920, 62-74; Hieronymus 1977, 84), as were their long term effects on prices and hence consumer welfare (Rothstein 1982, 60).

By 1892 thousands of petitions to Congress called for the prohibition of “speculative gambling in grain” (Lurie, 1979, 109). And, attacks from state legislatures were seemingly unrelenting: in 1812 a New York act made short sales illegal (the act was repealed in 1858); in 1841 a Pennsylvania law made short sales, where the position was not covered in five days, a misdemeanor (the law was repealed in 1862); in 1882 an Ohio law and a similar one in Illinois tried unsuccessfully to restrict cash settlement of futures contracts; in 1867 the Illinois constitution forbade dealing in futures contracts (this was repealed by 1869); in 1879 California’s constitution invalidated futures contracts (this was effectively repealed in 1908); and, in 1882, 1883 and 1885, Mississippi, Arkansas, and Texas, respectively, passed laws that equated futures trading with gambling, thus making the former a misdemeanor (Peterson 1933, 68-69).

Two nineteenth century challenges to futures trading are particularly noteworthy. The first was the so-called Anti-Option movement. According to Lurie (1979), the movement was fueled by agrarians and their sympathizers in Congress who wanted to end what they perceived as wanton speculative abuses in futures trading (109). Although options were (are) not futures contracts and were already outlawed on most exchanges by the 1890s, the proposed legislation did not distinguish between the two instruments and effectively sought to outlaw both (Lurie 1979, 109).

In 1890 the Butterworth Anti-Option Bill was introduced in Congress but never came to a vote. However, in 1892 the Hatch (and Washburn) Anti-Option bills passed both houses of Congress, and failed only on technicalities during reconciliation between the two houses. Had either bill become law, it would have effectively ended options and futures trading in the United States (Lurie 1979, 110).

A second notable challenge was the bucket shop controversy, which challenged the legitimacy of the CBT in particular. A bucket shop was essentially an association of gamblers who met outside the CBT and wagered on the direction of futures prices. These associations had legitimate-sounding names such as the Christie Grain and Stock Company and the Public Grain Exchange. To most Americans, these “exchanges” were no less legitimate than the CBT. That some CBT members were guilty of “bucket shopping” only made matters worse!

The bucket shop controversy was protracted and colorful (see Lurie 1979, 138-167). Between 1884 and 1887 Illinois, Iowa, Missouri and Ohio passed anti-bucket shop laws (Lurie 1979, 95). The CBT believed these laws entitled it to restrict bucket shops’ access to CBT price quotes, without which the bucket shops could not exist. Bucket shops argued that they were competing exchanges, and hence immune to extant anti-bucket shop laws. As such, they sued the CBT for access to these price quotes.15

The two sides and the telegraph companies fought in the courts for decades over access to these price quotes; the CBT’s very survival hung in the balance. After roughly twenty years of litigation, the Supreme Court of the U.S. effectively ruled in favor of the Chicago Board of Trade and against bucket shops (Board of Trade of the City of Chicago v. Christie Grain & Stock Co., 198 U.S. 236, 25 Sup. Ct. (1905)). Bucket shops disappeared completely by 1915 (Hieronymus 1977, 90).

Regulation

The anti-option movement, the bucket shop controversy and the American public’s discontent with speculation mask an ironic reality of futures trading: it escaped government regulation until after the First World War, though early exchanges did practice self-regulation or administrative law.16 The absence of any formal governmental oversight was due in large part to two factors. First, prior to 1895, the opposition tried unsuccessfully to outlaw rather than regulate futures trading. Second, strong agricultural commodity prices between 1895 and 1920 weakened the opposition, which had blamed futures markets for low agricultural commodity prices (Hieronymus 1977, 313).

Grain prices fell significantly by the end of the First World War, and opposition to futures trading grew once again (Hieronymus 1977, 313). In 1922 the U.S. Congress enacted the Grain Futures Act, which required exchanges to be licensed, limited market manipulation and publicized trading information (Leuthold et al. 1989, 369).17 However, regulators could rarely enforce the act because it enabled them to discipline exchanges rather than individual traders, and to discipline an exchange was essentially to suspend it, a punishment too harsh for most exchange-related infractions.

The Commodity Exchange Act of 1936 enabled the government to deal directly with traders rather than exchanges. It established the Commodity Exchange Authority (CEA), a bureau of the U.S. Department of Agriculture, to monitor and investigate trading activities and prosecute price manipulation as a criminal offense. The act also limited speculators’ trading activities and the sizes of their positions; regulated futures commission merchants; banned options trading on domestic agricultural commodities; and restricted futures trading in designated commodities to licensed exchanges (see Hieronymus 1977; Leuthold, et al. 1989).

Although Congress amended the Commodity Exchange Act in 1968 in order to increase the regulatory powers of the Commodity Exchange Authority, the latter was ill-equipped to handle the explosive growth in futures trading in the 1960s and 1970s. So, in 1974 Congress passed the Commodity Futures Trading Act, which created far-reaching federal oversight of U.S. futures trading and established the Commodity Futures Trading Commission (CFTC).

Like the futures legislation before it, the Commodity Futures Trading Act sought “to ensure proper execution of customer orders and to prevent unlawful manipulation, price distortion, fraud, cheating, fictitious trades, and misuse of customer funds” (Leuthold, et al. 1989, 34). Unlike the CEA, the CFTC was given broad regulatory powers over all futures trading and related exchange activities throughout the U.S. The CFTC oversees and approves modifications to extant contracts and the creation and introduction of new contracts. The CFTC consists of five presidential appointees who are confirmed by the U.S. Senate.

The Futures Trading Act of 1982 amended the Commodity Futures Trading Act of 1974. The 1982 act legalized options trading on agricultural commodities and identified more clearly the jurisdictions of the CFTC and Securities and Exchange Commission (SEC). The regulatory overlap between the two organizations arose because of the explosive popularity during the 1970s of financial futures contracts. Today, the CFTC regulates all futures contracts and options on futures contracts traded on U.S. futures exchanges; the SEC regulates all financial instrument cash markets as well as all other options markets.

Finally, in 2000 Congress passed the Commodity Futures Modernization Act, which reauthorized the Commodity Futures Trading Commission for five years and repealed an 18-year-old ban on trading single stock futures. The bill also sought to increase competition and “reduce systemic risk in markets for futures and over-the-counter derivatives” (H.R. 5660, 106th Congress 2nd Session).

Modern Futures Markets

The growth in futures trading has been explosive in recent years (Chart 3).

Futures trading extended beyond physical commodities in the 1970s and 1980s: currency futures in 1972, interest rate futures in 1975, and stock index futures in 1982 (Silber 1985, 83). The enormous growth of financial futures at this time was likely due to the breakdown of the Bretton Woods exchange rate regime, which had essentially fixed the relative values of industrial economies’ currencies to the American dollar (see Bordo and Eichengreen 1993), and to the relatively high inflation from the late 1960s to the early 1980s. Flexible exchange rates and inflation introduced, respectively, exchange rate and interest rate risks, which hedgers sought to mitigate through the use of financial futures. Finally, although futures contracts on agricultural commodities remain popular, financial futures and options dominate trading today. Trading volume in metals, minerals and energy remains relatively small.

Trading volume in agricultural futures contracts first dropped below 50 percent of total futures trading volume in 1982. By 1985 this volume had dropped to less than one-fourth of all futures trading. In the same year the volume of futures trading in the U.S. Treasury bond contract alone exceeded trading volume in all agricultural commodities combined (Leuthold et al. 1989, 2). Today exchanges in the U.S. actively trade contracts on several underlying assets (Table 1). These range from the traditional (e.g., agriculture and metals) to the truly innovative (e.g., the weather). The latter’s payoff varies with the number of degree-days by which the temperature in a particular region deviates from 65 degrees Fahrenheit (a simple payoff sketch follows Table 1).

Table 1: Select Futures Contracts Traded as of 2002

Agriculture: Corn, Oats, Soybeans, Soybean meal, Soybean oil, Wheat, Barley, Flaxseed, Canola, Rye, Cattle, Hogs, Pork bellies, Cocoa, Coffee, Cotton, Milk, Orange juice, Sugar, Lumber, Rice

Currencies: British pound, Canadian dollar, Japanese yen, Euro, Swiss franc, Australian dollar, Mexican peso, Brazilian real

Equity Indexes: S&P 500 index, Dow Jones Industrials, S&P Midcap 400, Nasdaq 100, NYSE index, Russell 2000 index, Nikkei 225, FTSE index, CAC-40, DAX-30, All Ordinaries, Toronto 35, Dow Jones Euro STOXX 50

Interest Rates: Eurodollars, Euroyen, Euro-denominated bond, Euroswiss, Sterling, British gov. bond (gilt), German gov. bond, Italian gov. bond, Canadian gov. bond, Treasury bonds, Treasury notes, Treasury bills, LIBOR, EURIBOR, Municipal bond index, Federal funds rate, Bankers’ acceptance

Metals & Energy: Copper, Aluminum, Gold, Platinum, Palladium, Silver, Crude oil, Heating oil, Gas oil, Natural gas, Gasoline, Propane, CRB index, Electricity, Weather

Source: Bodie, Kane and Marcus (2005), p. 796.
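
To make the weather contract’s payoff concrete, the following minimal sketch (in Python) computes a heating-degree-day (HDD) index and the mark-to-market gain on one long position. The $20-per-index-point multiplier, the temperatures, and the entry level are hypothetical illustrations, not the terms of any particular exchange’s contract.

# A minimal sketch of a degree-day settlement calculation.
# The dollar multiplier and all inputs below are hypothetical.
def heating_degree_days(daily_avg_temps_f):
    """Sum of max(0, 65 - T) over the period, with T in degrees Fahrenheit."""
    return sum(max(0.0, 65.0 - t) for t in daily_avg_temps_f)

def long_futures_gain(entry_index, settle_index, dollars_per_point=20.0):
    """Gain (or loss) on one long contract held from entry to settlement."""
    return (settle_index - entry_index) * dollars_per_point

temps = [50, 55, 60, 62, 58]               # hypothetical daily average temperatures
hdd_index = heating_degree_days(temps)     # 15 + 10 + 5 + 3 + 7 = 40 degree-days
print(long_futures_gain(45.0, hdd_index))  # prints -100.0: a mild spell hurts the long

A warmer-than-expected period generates fewer heating degree-days, so the index settles below the entry level and the long position loses money; this is precisely the exposure a heating-fuel retailer might wish to hedge.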

Table 2 provides a list of today’s major futures exchanges.

Table 2: Select Futures Exchanges as of 2002

Chicago Board of Trade (CBT)
Chicago Mercantile Exchange (CME)
Coffee, Sugar & Cocoa Exchange, New York (CSCE)
COMEX, a division of the NYME (CMX)
European Exchange (EUREX)
Financial Exchange, a division of the NYCE (FINEX)
International Petroleum Exchange (IPE)
Kansas City Board of Trade (KC)
London International Financial Futures Exchange (LIFFE)
Marche a Terme International de France (MATIF)
Montreal Exchange (ME)
Minneapolis Grain Exchange (MPLS)
Unit of Euronext.liffe (NQLX)
New York Cotton Exchange (NYCE)
New York Futures Exchange (NYFE)
New York Mercantile Exchange (NYME)
OneChicago (ONE)
Sydney Futures Exchange (SFE)
Singapore Exchange Ltd. (SGX)

Source: Wall Street Journal, 5/12/2004, C16.

Modern trading differs from its nineteenth-century counterpart in other respects as well. First, the popularity of open outcry trading is waning; for example, today the CBT executes roughly half of all trades electronically, and electronic trading is the rule rather than the exception throughout Europe. Second, today roughly 99% of all futures contracts are settled prior to maturity. Third, in 1982 the Commodity Futures Trading Commission approved cash settlement (delivery that takes the form of a cash balance) on financial index and Eurodollar futures, whose underlying assets are not deliverable, as well as on several non-financial contracts including lean hog, feeder cattle and weather (Carlton 1984, 253). And finally, on December 6, 2002, the Chicago Mercantile Exchange became the first publicly traded financial exchange in the U.S.

References and Further Reading

Baer, Julius B. and Olin G. Saxon. Commodity Exchanges and Futures Trading. New York: Harper & Brothers, 1949.

Bodie, Zvi, Alex Kane and Alan J. Marcus. Investments. New York: McGraw-Hill/Irwin, 2005.

Bordo, Michael D. and Barry Eichengreen, editors. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Boyle, James E. Speculation and the Chicago Board of Trade. New York: Macmillan Company, 1920.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic, and Social Manifestations, 1870-1880. Cambridge: Harvard University Press, 1913.

Carlton, Dennis W. “Futures Markets: Their Purpose, Their History, Their Growth, Their Successes and Failures.” Journal of Futures Markets 4, no. 3 (1984): 237-271.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Harvard University Press, 1977.

Chicago Board of Trade Bulletin. The Development of the Chicago Board of Trade. Chicago: Chicago Board of Trade, 1936.

Clark, John G. The Grain Trade in the Old Northwest. Urbana: University of Illinois Press, 1966.

Commodity Futures Trading Commission. Annual Report. Washington, D.C. 2003.

Dies, Edward J. The Wheat Pit. Chicago: The Argyle Press, 1925.

Ferris, William. G. The Grain Traders: The Story of the Chicago Board of Trade. East Lansing, MI: Michigan State University Press, 1988.

Hieronymus, Thomas A. Economics of Futures Trading for Commercial and Personal Profit. New York: Commodity Research Bureau, Inc., 1977.

Hoffman, George W. Futures Trading upon Organized Commodity Markets in the United States. Philadelphia: University of Pennsylvania Press, 1932.

Irwin, Harold S. Evolution of Futures Trading. Madison, WI: Mimir Publishers, Inc., 1954.

Leuthold, Raymond M., Joan C. Junkus and Jean E. Cordier. The Theory and Practice of Futures Markets. Champaign, IL: Stipes Publishing L.L.C., 1989.

Lurie, Jonathan. The Chicago Board of Trade 1859-1905. Urbana: University of Illinois Press, 1979.

National Agricultural Statistics Service. “Historical Track Records.” Agricultural Statistics Board, U.S. Department of Agriculture, Washington, D.C. April 2004.

Norris, Frank. The Pit: A Story of Chicago. New York, NY: Penguin Group, 1903.

Odle, Thomas. “Entrepreneurial Cooperation on the Great Lakes: The Origin of the Methods of American Grain Marketing.” Business History Review 38, (1964): 439-55.

Peck, Anne E., editor. Futures Markets: Their Economic Role. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Peterson, Arthur G. “Futures Trading with Particular Reference to Agricultural Commodities.” Agricultural History 8, (1933): 68-80.

Pierce, Bessie L. A History of Chicago: Volume III, the Rise of a Modern City. New York: Alfred A. Knopf, 1957.

Rothstein, Morton. “The International Market for Agricultural Commodities, 1850-1873.” In Economic Change in the Civil War Era, edited by David T. Gilchrist and W. David Lewis, 62-71. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1966.

Rothstein, Morton. “Frank Norris and Popular Perceptions of the Market.” Agricultural History 56, (1982): 50-66.

Santos, Joseph. “Did Futures Markets Stabilize U.S. Grain Prices?” Journal of Agricultural Economics 53, no. 1 (2002): 25-36.

Silber, William L. “The Economic Role of Financial Futures.” In Futures Markets: Their Economic Role, edited by Anne E. Peck, 83-114. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Stein, Jerome L. The Economics of Futures Markets. Oxford: Basil Blackwell Ltd, 1986.

Taylor, Charles H. History of the Board of Trade of the City of Chicago. Chicago: R. O. Law, 1917.

Werner, Walter and Steven T. Smith. Wall Street. New York: Columbia University Press, 1991.

Williams, Jeffrey C. “The Origin of Futures Markets.” Agricultural History 56, (1982): 306-16.

Working, Holbrook. “The Theory of the Price of Storage.” American Economic Review 39, (1949): 1254-62.

Working, Holbrook. “Hedging Reconsidered.” Journal of Farm Economics 35, (1953): 544-61.

1 The clearinghouse is typically a corporation owned by a subset of exchange members. For details regarding the clearing arrangements of a specific exchange, go to www.cftc.gov and click on “Clearing Organizations.”

2 The vast majority of contracts are offset. Outright delivery occurs when the buyer receives from, or the seller “delivers” to, the exchange a title of ownership, and not the actual commodity or financial security. The urban legend of the trader who neglected to settle his long position and consequently “woke up one morning to find several car loads of a commodity dumped on his front yard” is indeed apocryphal (Hieronymus 1977, 37)!

3 Nevertheless, forward contracts remain popular today (see Peck 1985, 9-12).

4 The importance of New Orleans as a point of departure for U.S. grain and provisions prior to the Civil War is unquestionable. According to Clark (1966), “New Orleans was the leading export center in the nation in terms of dollar volume of domestic exports, except for 1847 and a few years during the 1850s, when New York’s domestic exports exceeded those of the Crescent City” (36).

5 This area was responsible for roughly half of U.S. wheat production and a third of U.S. corn production just prior to 1860. Southern planters dominated corn output during the early to mid-1800s.

6 Millers milled wheat into flour; pork producers fed corn to pigs, which producers slaughtered for provisions; distillers and brewers converted rye and barley into whiskey and malt liquors, respectively; and ranchers fed grains and grasses to cattle, which were then driven to eastern markets.

7 Significant advances in transportation made the grain trade’s eastward expansion possible, but the strong and growing demand for grain in the East made the trade profitable. The growth in domestic grain demand during the early to mid-nineteenth century reflected the strong growth in eastern urban populations. Between 1820 and 1860, the populations of Baltimore, Boston, New York and Philadelphia increased by over 500% (Clark 1966, 54). Moreover, as the 1840s progressed, foreign demand for U.S. grain grew. Between 1845 and 1847, U.S. exports of wheat and flour rose from 6.3 million bushels to 26.3 million bushels and corn exports grew from 840,000 bushels to 16.3 million bushels (Clark 1966, 55).

8 Wheat production was shifting to the trans-Mississippi West, which produced 65% of the nation’s wheat by 1899 and 90% by 1909, and railroads based in the Lake Michigan port cities intercepted the Mississippi River trade that would otherwise have headed to St. Louis (Clark 1966, 95). Lake Michigan port cities also benefited from a growing concentration of corn production in the West North Central region – Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota and South Dakota – which by 1899 produced 40 percent of the country’s corn (Clark 1966, 4).

9 Corn had to be dried immediately after it was harvested and could be shipped profitably to Chicago only by water, and only after rivers and lakes had thawed; so, country merchants stored large quantities of corn. Wheat, on the other hand, was more valuable relative to its weight, and it could be shipped to Chicago by rail or road immediately after it was harvested; so, Chicago merchants stored large quantities of wheat.

10 This is consistent with Odle (1964), who adds that “the creators of the new system of marketing [forward contracts] were the grain merchants of the Great Lakes” (439). However, Williams (1982) presents evidence of such contracts between Buffalo and New York City as early as 1847 (309). To be sure, Williams proffers an intriguing case that forward and, in effect, futures trading was active and quite sophisticated throughout New York by the late 1840s. Moreover, he argues that this trading grew not out of activity in Chicago, whose trading was quite primitive at this early date, but rather out of trading in London and ultimately Amsterdam. Indeed, “time bargains” were common in London and New York securities markets in the mid- and late 1700s, respectively. A time bargain was essentially a cash-settled financial forward contract that was unenforceable by law, and as such “each party was forced to rely on the integrity and credit of the other” (Werner and Smith 1991, 31). According to Werner and Smith, “time bargains prevailed on Wall Street until 1840, and were gradually replaced by margin trading by 1860” (68). They add that “margin trading … had an advantage over time bargains, in which there was little protection against default beyond the word of another broker. Time bargains also technically violated the law as wagering contracts; margin trading did not” (135). Between 1818 and 1840 these contracts comprised anywhere from 0.7% (49-day average in 1830) to 34.6% (78-day average in 1819) of daily exchange volume on the New York Stock & Exchange Board (Werner and Smith 1991, 174).

11 Of course, forward markets could and indeed did exist in the absence of both grading standards and formal exchanges, though to what extent they existed is unclear (see Williams 1982).

12 In the parlance of modern financial futures, the term cost of carry is used instead of the term storage. For example, the cost of carrying a bond is comprised of the cost of acquiring and holding (or storing) it until delivery minus the return earned during the carry period.

13 More specifically, the price of storage is comprised of three components: (1) physical costs such as warehouse fees and insurance; (2) financial costs such as borrowing rates of interest; and (3) the convenience yield, the return that the merchant who stores the commodity derives from maintaining an inventory in the commodity. The marginal costs of (1) and (2) are increasing functions of the amount stored: the more the merchant stores, the greater the marginal costs of warehouse use, insurance and financing. The marginal benefit of (3), in contrast, is a decreasing function of the amount stored: the smaller the merchant’s inventory, the more valuable each additional unit of inventory becomes. Working used this convenience yield to explain a negative price of storage, in which the nearby contract is priced higher than the faraway contract, an event that is likely to occur when supplies are exceptionally low. In this instance, there is little for inventory dealers to store. Hence, dealers face extremely low physical and financial storage costs but extremely high convenience yields. The price of storage turns negative; essentially, inventory dealers are willing to pay to store the commodity.
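
Footnotes 12 and 13 can be collected into a single expression. As a sketch of the storage relationship (the notation is supplied here for illustration and is not drawn from the sources cited), write the price of storage as the futures-cash spread

F_{t,T} - S_t = w_t + r_t S_t - y_t,

where F_{t,T} is the futures price at time t for delivery at time T, S_t is the spot (cash) price, w_t is the marginal physical cost of storage (warehouse fees, insurance), r_t S_t is the financial cost of carrying the inventory, and y_t is the marginal convenience yield. When supplies are exceptionally low, y_t can exceed w_t + r_t S_t; the spread then turns negative and the nearby contract is priced above the deferred contract, which is the negative price of storage described above.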

14 Norris’s protagonist, Curtis Jadwin, is a wheat speculator who is emotionally consumed and ultimately destroyed, while the welfare of producers and consumers hangs in the balance, when a nineteenth-century CBT wheat futures corner backfires on him.

15 One particularly colorful incident in the controversy came when the Supreme Court of Illinois ruled that the CBT had to either make price quotes public or restrict access to everyone. When the Board opted for the latter, it found it needed to “prevent its members from running (often literally) between the [CBT and a bucket shop next door], but with minimal success. Board officials at first tried to lock the doors to the exchange…However, after one member literally battered down the door to the east side of the building, the directors abandoned this policy as impracticable if not destructive” (Lurie 1979, 140).

16 Administrative law is “a body of rules and doctrines which deals with the powers and actions of administrative agencies” that are organizations other than the judiciary or legislature. These organizations affect the rights of private parties “through either adjudication, rulemaking, investigating, prosecuting, negotiating, settling, or informally acting” (Lurie 1979, 9).

17 In 1921 Congress passed the Futures Trading Act, which the Supreme Court declared unconstitutional in 1922.

Citation: Santos, Joseph. “A History of Futures Trading in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-history-of-futures-trading-in-the-united-states/

The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s marks the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate because the 1920s were a period of vigorous, vital economic growth. The 1920s mark the first truly modern decade, and dramatic economic developments are found in those years. The automobile was rapidly adopted, to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access, and this was limited to the largest cities. The flexibility of car access changed this, and the growth of suburbs began to accelerate. The demands of trucks and cars led to rapid growth in the construction of all-weather surfaced roads to facilitate their movement. The rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers, and the United States moved to a dominant position in international trade and global business. These things make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of overall production in the economy: GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929, according to the most widely used estimates (Historical Statistics of the United States, or HSUS, 1976). Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth- and twentieth-century standards these were relatively rapid rates of real economic growth, and they would be considered rapid even today.
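
For scale, these annual rates can be compounded over the decade’s nine year-to-year intervals; the arithmetic below illustrates what the reported rates imply and is not a figure taken from the sources cited:

(1.042)^9 ≈ 1.45 and (1.027)^9 ≈ 1.27,

so real output in 1929 stood roughly 45 percent above its 1920 level, and real output per person roughly 27 percent above.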

There were several interruptions to this growth. In mid-1920 the American economy began to contract and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shutdown of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild but the contraction accelerated after the crash of the stock market at the end of October. Real total GNP fell 10.2 percent from 1929 to 1930, while real GNP per capita fell 11.5 percent.

[Figures 1 and 2]

Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employ. In the scramble to beat price increases during 1919, firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression, firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce: the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s work (1988) suggests that prices adjusted much more readily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow-growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower-growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing was spread widely through the population. New products and the processes for producing them drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined (Whaples 2001). New products and services created new markets, such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends, but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services, and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s (Figure 4). There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration; urban families have tended to have fewer children than rural families because urban children, unlike rural children, do not augment family incomes by working as unpaid laborers. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares, New England and the Plains. For New England this was a continuation of a long-term trend. The population share for the Plains region had been rising through the nineteenth century; in the interwar period its agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the Far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rate, or fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent, as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. In these industries, male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties, and unskilled males received on average 35 percent more than females. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent over the same period. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties, and the real daily wage rate for farmworkers, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-1921 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government-sponsored unemployment insurance, minimum wage proposals, maximum hours proposals and social security programs. As Lloyd Ulman (1961) points out, the AFL, under Gompers’s direction, differentiated among proposed statutes on the basis of whether they would or would not aid collective bargaining. After Gompers’s death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. There were only a few unions closer to today’s industrial unions, in which the required skills were much lower (or nonexistent), making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell 72.6 percent between 1920 and 1921 and, though rising during the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages rather than on first mortgages, as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-1921 and 1929-1933 depressions from those that arose because agriculture was declining relative to the other sectors. Very slow growth in the demand for basic agricultural products and significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid extensive growth in the nonagricultural sectors of the economy, required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the very low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements in Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-1914 period agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, this approach was not adopted until Roosevelt took office. Rather, there was a reliance upon tariffs, the traditional method of aiding injured groups, and upon the “sanctioning and promotion of cooperative marketing associations.” Congress attempted to control the grain exchanges and compel merchants and stockyards to charge “reasonable rates” with the Packers and Stockyards Act of 1921 and the Grain Futures Act of 1922. In 1922 Congress also passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-1914 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration secured passage of the Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The 1929 act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, manufacturing saw a rapid rate of productivity growth during the twenties. The rise of real wages, due to immigration restrictions and the slower growth of the resident population, spurred this growth. Transportation improvements and communications advances were also responsible. These developments brought about differential growth among the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the Northeast was the first area to really develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to create manufacturing bases in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions, excluding the West North Central region, gained. There was considerable variation in the growth of the industries and shifts in their rankings during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first introduced by Frederick W. Taylor, were introduced on a wider scale.

One of the important forces contributing to mass production and increased productivity was the transition to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities: in 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site, while by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade preceding the 1920s, but it increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

[Table 3: Average Annual Rates of Labor Productivity and Capital Productivity Growth]

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was its role as an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines, since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was technological developments in new machines and processes, in which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction; these replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, the continuing advances were the “cumulative process resulting from a vast number of successive small changes.” Because of these continuing advances in the quality of tires and in their manufacture, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizman fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms, even when becoming vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Changes in its size and structure during the First World War led E. I. du Pont de Nemours and Company to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized structure that had served it so well was not suited to this new strategy, and its poor business performance led its executives to develop, between 1919 and 1921, a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that the divisions essentially remained separate companies, with little coordination between them. A financial crisis at the end of 1920 led to the ouster of W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a matter of public policy, concern with monopolies diminished in the 1920s even though firms were growing larger; later, the growing size of businesses became one of the convenient scapegoats upon which to blame the Great Depression.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials when that supply was not plentiful and was dispersed, and when firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network was inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)

The steel industry was already highly concentrated by 1920, as U.S. Steel had around 50 percent of the market. But U.S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U.S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series includes many of the smaller mergers; the series constructed by Carl Eis (1969) includes only the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the merger activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department; after the Great Depression began, the New Dealers initially exempted business from the antitrust laws and attempted to cartelize industries under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The laws’ two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable and higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms who are fixing prices. Vertical price-fixing involves a manufacturer or supplier setting the prices at which firms at later stages of production and distribution may resell its products. It, too, tends to eliminate substitutes and make the demand less elastic.

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found them guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent-setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.
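
To see what these average annual rates imply cumulatively, the following minimal sketch (illustrative arithmetic only, not a calculation from the sources cited here) compounds them over the two periods:

    # Compound the average annual growth rates quoted above to get the
    # cumulative growth over each period (illustrative arithmetic only).
    def cumulative_growth(annual_rate, years):
        # total proportional growth from compounding annual_rate for `years`
        return (1 + annual_rate) ** years - 1

    # Output per labor-hour grew 1.2% per year, 1899-1919; 3.5% per year, 1919-1937.
    print(f"{cumulative_growth(0.012, 20):.0%}")  # about 27% over 1899-1919
    print(f"{cumulative_growth(0.035, 18):.0%}")  # about 86% over 1919-1937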

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and declining, while natural gas and LP (liquefied petroleum) gas were still relatively unimportant. These changes, especially the decline of the coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply due to productivity advances in coal production and in the decreasing demand for coal. The demand fell as industries began turning from coal to electricity and because of productivity advances in the use of coal to create energy in steel, railroads, and electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by the steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally just reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many miners to their home region. The local alternatives were few, and ignorance of alternatives outside the rural Appalachian areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California, strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California, field in 1921. New discoveries in Powell, Texas, and Smackover, Arkansas, further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma, and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down real oil prices and the prices of petroleum derivatives, and the growing production capacity led to a generally declining trend in petroleum prices. McMillin and Parker (1994) argue that the supply shocks generated by these new discoveries were a factor in the business cycles of the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.
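
The practical effect of the yield figures just cited is easy to see with a little arithmetic. In the sketch below, the 42-gallon barrel is an assumed standard measure (it is not given in the text); the 15 and 45 percent yields are the figures quoted above:

    # Gallons of gasoline refined from one barrel of crude at a given yield.
    # The 42-gallon barrel is an assumption; the yield shares come from the text.
    BARREL_GALLONS = 42

    def gasoline_per_barrel(yield_share):
        return BARREL_GALLONS * yield_share

    print(round(gasoline_per_barrel(0.15), 1))  # distillation: about 6.3 gallons
    print(round(gasoline_per_barrel(0.45), 1))  # cracking: about 18.9 gallons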

The petroleum companies also developed new ways to distribute gasoline to motorists that made it more convenient to purchase gasoline. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans, and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or contract with independent stations to distribute their gasoline exclusively. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations from the firms and producers, governments began stepping in. Led by Texas, which had created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws: quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although such laws were generally passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist these efforts.

Electricity

By the mid 1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and greater distance over which electricity could be transmitted more than offset the necessity for transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and the development of an efficient, lower cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing the electrical appliances for homes and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and how to calculate the rate base led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost of making these calculations, rates tended to be left in the hands of the electric utilities, which, it has been suggested, did not lower rates enough to reflect rising productivity and falling costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not this increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.
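
The pattern in Figure 16 is what the standard textbook markup rule for price discrimination predicts. The sketch below is standard price theory offered purely as illustration; it is not drawn from Mercer (1973), and the cost and elasticity numbers are hypothetical. A profit-maximizing seller facing a segment with price elasticity e sets price so that (p - mc)/p = 1/|e|, which implies higher prices for less elastic segments:

    # Price implied by the inverse-elasticity markup rule, (p - mc)/p = 1/|e|.
    # Requires |e| > 1; the marginal cost and elasticities are hypothetical.
    def segment_price(marginal_cost, elasticity):
        e = abs(elasticity)
        return marginal_cost * e / (e - 1)

    print(segment_price(2.0, -1.5))            # less elastic segment: 6.0 cents/kWh
    print(round(segment_price(2.0, -4.0), 2))  # more elastic segment: 2.67 cents/kWh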

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but doing so fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 presented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates that were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for the railroad when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
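
A worked example may make the recapture clause concrete. The sketch below applies the rule as described above to a hypothetical railroad; the 6 percent threshold and the half-and-half split come from the act as summarized here, while the dollar figures are invented for illustration:

    # Recapture under the Transportation Act of 1920: earnings above a
    # 6 percent return on fair value were split between a contingency fund
    # held for the railroad and an ICC fund for loans to other railroads.
    RECAPTURE_RATE = 0.06

    def recapture(fair_value, net_earnings):
        excess = max(0.0, net_earnings - RECAPTURE_RATE * fair_value)
        return excess / 2, excess / 2  # (railroad's own fund, ICC loan fund)

    # A road with $100 million of fair value earning 8 percent ($8 million):
    print(recapture(100_000_000, 8_000_000))  # (1000000.0, 1000000.0)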

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act directed the ICC to encourage railroad consolidation, but little came of this in the 1920s. To facilitate its control of the railroads, the ICC was given two additional powers: control over the issuance or purchase of securities by railroads, and the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities, and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic deserted the railroads especially quickly. As the network of all-weather surfaced roads expanded, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways, which had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of this regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were combined into Greyhound Lines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating it nor pay taxes on it. Beginning with the Federal Aid Road Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties, commonly 60 percent of the total funds allocated, came primarily from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) The spread of gasoline taxes, however, moved highway finance closer to the goal of users paying the costs of the highways. Nor did trucks have to pay for all of the highway construction, because automobiles jointly used the highways; but highways had to be constructed in more costly ways in order to accommodate the larger and heavier trucks. Ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of truck traffic, and gasoline taxes tended to do this.

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. By 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone, as new electronic amplifiers made long distance telephone calls between the east and west coasts possible in 1915. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly, there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer, 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring: wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor in this was the dramatic decline in farm incomes in the early twenties. The second reason was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation and the telephone was no longer as crucial.

Ottmar Mergenthaler’s development of the linotype machine in the late nineteenth century irrevocably altered printing and publishing. This machine, which quickly created a line of soft, lead-based type that could be printed, melted down, and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual pieces of cast type picked out from compartments in type cases to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each piece returned to its compartment for use in the next printing job. Because composition was so laborious, newspapers often were not published every day and did not contain many pages, and most cities supported several small papers. In contrast, the linotype used a keyboard on which the operator typed the words of a line in a news column. As the operator typed each letter, matrices for the letters dropped down from a magazine and were assembled into a line, with automatic spacers to justify the line (fill out the column width). When the line was completed, the machine mechanically cast it as a single line of lead type, ejected it into a tray, and returned the matrices to the magazine while the operator typed the next line of the news story. The first Mergenthaler linotype machine was installed at the New York Tribune in 1886. By dramatically lowering the costs of printing newspapers (as well as books and magazines), the linotype allowed newspapers to grow in size and to be published more regularly; before its introduction a typical newspaper averaged no more than 11 pages, and many appeared only a few times a week. A process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler linotype the most significant printing invention since the introduction of movable type four hundred years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse’s KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations across the radio dial and deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912 Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared, and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act, except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote them off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened up a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system in which individuals could purchase time to broadcast a message transmitted to other stations in the toll network over AT&T’s long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee any time ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station’s power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T’s creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience, and in return the stations received a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became “department stores of finance.” Banks opened up installment (or personal) loan departments, expanded their mortgage lending, opened up trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers’ control and reduced lending during the 1920-21 depression, began relying more on retained earnings and stock and bond issues to raise investment funds and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties, only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. A high bank failure rate in the 1920s has usually been explained by “overbanking,” or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there had been overbanking, on average each bank would have been underutilized, resulting in intense competition for deposits, higher costs, and lower earnings; free entry for banks that met the minimum requirements then in force would have been one source of such overbanking. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable had these changes not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the twenties. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis of the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s, commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning-asset portfolios and gained expertise in the securities markets, the larger ones established investment departments, and by the late twenties they were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities markets exhibited perhaps the most dramatic growth among the noncommercial-bank financial intermediaries during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929, and by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. The mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts’ interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties, especially common and preferred stock, and in the trading of existing shares of those securities. (Figure 24) The late twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities, the automobile manufacturers produced over four and a half million new cars in 1929, and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. “Playing the market” seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3, 1929, and then slid to 320 on October 21. In the following week the stock market “crashed,” with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198, 183 points less than the September 3 peak.
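
Expressed as percentage declines from the September 3 peak (illustrative arithmetic using only the index values quoted above):

    # Percentage declines in the Dow-Jones index from the Sept. 3, 1929 peak,
    # computed from the levels quoted in the text.
    peak, oct21, oct29, nov13 = 381, 320, 230, 198

    def decline(level):
        return (peak - level) / peak

    print(f"{decline(oct21):.0%}")  # to October 21: about 16%
    print(f"{decline(oct29):.0%}")  # to October 29: about 40%
    print(f"{decline(nov13):.0%}")  # to November 13: about 48%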

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not illegal until the Securities Exchange Act of 1934, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool a group of speculators would pool large amounts of their funds and begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would “churn” the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would decide to purchase the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock’s price usually fell quickly, bringing large losses for the unsuspecting outside investors and large gains for the pool insiders.

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades: the usual requirement was 10 to 15 percent of the purchase price, and more often around 10 percent. Recognizing the problems with margin lending in the rapidly changing market, and at the urging of a special New York Clearinghouse committee, brokers began raising margin requirements in 1928, and by the fall of 1929 margin requirements were, on average, some of the highest in the history of the New York Stock Exchange. One brokerage house required the following of its clients: securities with a selling price below $10 could be purchased only for cash; securities with a selling price of $10 to $20 required a 50 percent margin; securities of $20 to $30, a 40 percent margin; and securities with a price above $30, a 30 percent margin. In the first half of 1929 margin requirements on customers’ accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw their brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
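
The brokerage house’s tiered schedule described above translates directly into a margin calculation. The sketch below follows that description; how the house treated prices exactly at the tier boundaries is not stated, so the boundary handling here is an assumption:

    # Margin required under the one brokerage house's schedule quoted above.
    # Treatment of prices exactly at $10, $20, and $30 is an assumption.
    def required_margin_fraction(price_per_share):
        if price_per_share < 10:
            return 1.00  # cash only
        elif price_per_share < 20:
            return 0.50
        elif price_per_share < 30:
            return 0.40
        else:
            return 0.30

    # Buying 100 shares of a $25 stock required $1,000 of the $2,500 price:
    print(required_margin_fraction(25) * 25 * 100)  # 1000.0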

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin that week. On Black Thursday, October 24, prices initially fell sharply, but rallied somewhat in the afternoon, so that the net loss was only 7 points; the volume of thirteen million shares, however, set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow-Jones index fell 38 points on a volume of nine million shares, three million of them in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow-Jones index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to slowly rise, and by April of 1930 they had increased 96 points from the low of November 13, “only” 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their depressing decline until the low point was reached in the summer of 1932.


There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, “The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before.” But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929 stock prices were where they should have been, and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash: the market broke each time news arrived of advances in congressional consideration of the Hawley-Smoot tariff. However, the virtually perfect foresight that Wanniski’s explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person’s subjective expectations of each firm’s future earnings and dividends and of the future prices of each firm’s shares. Because of this subjectivity, not only can we never accurately know those values, we can never know how they varied among individuals. The market price we observe is the end result of all of the actions of the market participants, and it may differ from the price almost all of the participants expected.

In fact, there are some indications that market behavior differed in 1928 and 1929. Yields on common stocks were somewhat lower than earlier in the decade. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers’ loans through 1927, the rates on brokers’ loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow Jones index and related this to the DJI; through 1927 the two series track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andre Shleifer (1991). They examined closed-end mutual funds, a type of fund whose investors must sell their shares to other investors when they wish to liquidate; because the fund’s underlying portfolio is observable, its fundamental value is exactly measurable and can be compared with the price of the fund’s own shares. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929 the Standard and Poor’s composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929: a sharp divergence between the growth of stock prices and dividends; increasing premiums on call and time brokers’ loans in 1928 and 1929; rising margin requirements; and a rise in stock market volatility in the wake of the 1929 crash.
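
The logic of the De Long-Shleifer evidence is easy to see in miniature: since a closed-end fund’s portfolio is observable, the premium of its share price over net asset value is directly measurable. The holdings and prices below are hypothetical.

```python
# Hypothetical closed-end fund: the portfolio's market value (NAV) is
# observable, so any gap between share price and NAV per share is a
# directly measurable premium or discount.

portfolio = {"XYZ": (1_000, 50.0),   # hypothetical holding: (shares, price)
             "ABC": (2_000, 25.0)}
fund_shares_outstanding = 10_000
fund_share_price = 13.0              # what the fund's own shares trade for

nav = sum(n * p for n, p in portfolio.values()) / fund_shares_outstanding
premium = (fund_share_price - nav) / nav

print(f"NAV per share: ${nav:.2f}, premium: {premium:.0%}")
# NAV is $10.00, so a $13.00 share price is a 30% premium, the magnitude
# De Long and Shleifer estimated for the summer of 1929.
```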

There are several reasons why such a bubble could develop. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that “While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends.” As a result, investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated and more likely to be caught up in the euphoria of the boom and bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks, and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen as subjects, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and stocks ultimately traded at their fundamental dividend values. Because these bubbles and crashes occurred repeatedly, Smith conjectured that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals governed the overall movements, and the end of the long bull market was almost certainly governed by them. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed as better information on the contraction was received. There were repeated statements by leading figures that stocks were “overpriced,” and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, they fell further. The late October crash made the decline occur much more rapidly, and the forced selling of stocks bought on margin contributed to a more severe price fall. The recovery of stock prices from November 13 into April of 1930 suggests that prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The crash did make the downturn more severe beginning in November 1929: it reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty, helping to bring on the contraction (Flacco and Parker, 1992). Though stock prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll, and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices could be lower because the markup over the wholesale price was smaller; profits came instead from a larger sales volume and a quicker turnover of the store’s inventory, as the sketch below illustrates.
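
A made-up comparison shows why the smaller markup could still generate more profit: gross profit is the markup earned on each inventory cycle multiplied by how many times the inventory turns over in a year. The figures below are illustrative only.

```python
# Illustrative (made-up) numbers contrasting a high-markup, slow-turnover
# strategy with the department store's low-markup, fast-turnover strategy.

def annual_gross_profit(inventory_cost: float, markup_over_cost: float,
                        turns_per_year: float) -> float:
    # Each inventory cycle sells goods costing `inventory_cost` at the given
    # markup, so gross profit per cycle is cost * markup; multiply by turns.
    return inventory_cost * markup_over_cost * turns_per_year

print(annual_gross_profit(10_000, 0.50, 2))   # $10,000: 50% markup, 2 turns
print(annual_gross_profit(10_000, 0.20, 6))   # $12,000: 20% markup, 6 turns
```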

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both located in Chicago due to its central location in the nation’s rail network and both had benefited from the advent of Rural Free Delivery in 1896 and low cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these in the central business district (CBD), Wood located many on major streets closer to the residential areas. These moves of Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

Shopping centers, another innovation in retailing that began in the twenties, were not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as ownership and use of the car expanded, population began to move out of the crowded central cities toward the more open suburbs. When General Robert Wood set Sears off on its development of urban stores, he located these not in the central business district (CBD) but as free-standing stores on major arteries away from the CBD, with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
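
The adjustment mechanism described above can be sketched as a toy simulation. The parameters are illustrative, not historical estimates, and the quantity-theory link from gold to prices is the textbook simplification the paragraph describes.

```python
# Toy simulation of the textbook price-specie-flow mechanism: money is fully
# gold-backed, prices follow the quantity theory, and the high-price country
# runs a trade deficit and loses gold until price levels converge.

def simulate(periods=8):
    gold_a, gold_b = 60.0, 40.0      # country A starts with more gold
    price_per_gold = 0.1             # quantity-theory scaling: P = k * M
    trade_sensitivity = 2.0          # gold flow per unit price differential

    for t in range(periods):
        price_a = price_per_gold * gold_a
        price_b = price_per_gold * gold_b
        # The high-price country imports more, runs a deficit, loses gold.
        flow_to_a = trade_sensitivity * (price_b - price_a)
        gold_a += flow_to_a
        gold_b -= flow_to_a
        print(f"t={t}: P_A={price_a:.2f} P_B={price_b:.2f} "
              f"gold flow to A={flow_to_a:+.2f}")

simulate()  # the price differential, and hence the gold flow, shrinks each period
```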

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped the domestic circulation of gold. Second, the “gold exchange” system was created: most countries held their international reserves in the form of U.S. dollars or British pounds, and international transactions used dollars or pounds, so long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to the importation of gold, which France did not allow to expand its money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal conditions. First, the United States had to run an import surplus or, on net, export capital to provide a pool of dollars overseas. Second, Germany had either to run an export surplus or to import American capital so as to build up dollar reserves—that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries, which then shipped them back to the United States as payment on their U.S. debts. If these conditions did not occur (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated), disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act of 1921, which raised tariffs, particularly on manufactured goods. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff’s protection, and on many items its rates were extremely high, ranging from 60 to 100 percent ad valorem, that is, as a percent of the price of the item (a 60 percent ad valorem duty on a $10 item, for example, adds $6 in duty). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those of the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930 and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929, the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

By 1920 the United States held the largest share of the world’s monetary gold, about 40 percent. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which it lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as about the ongoing boom in the stock market, and it began to raise the discount rate to stop these outflows. At the same time, gold was entering the United States as foreigners obtained dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem their loss of gold. In country after country these deflationary strategies began contracting economic activity, and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government. The first was that federal spending on public works could be an important force in reducing unemployment; the second was that such spending could be timed countercyclically to help stabilize investment. (Smiley and Keehn, 1995) Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low, topping out at 7 percent on taxable income in excess of $500,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With American entry into the First World War, the rates were dramatically increased, and to obtain additional revenue in 1918 marginal rates were increased again. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that by 1918 more than 30 percent of the nation’s income recipients were subject to income taxes. However, through the purchase of tax-exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high-income taxpayers and their share of total taxes paid declined even as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919, but the surtax rates, which made the income tax highly progressive, were retained. (Smiley-Keehn, 1995)
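
How such a marginal-rate schedule works can be sketched as follows. The brackets use the standardly reported 1913 normal-plus-surtax figures (a 1 percent normal tax plus surtaxes reaching a combined 7 percent above $500,000); exemptions are ignored for simplicity, so this is an illustration of the mechanics, not a reproduction of an actual tax return.

```python
# Illustrative computation of tax under a marginal-rate (progressive)
# schedule, using the standardly reported 1913 combined normal-plus-surtax
# rates. Exemptions and deductions are ignored for simplicity.

BRACKETS_1913 = [          # (lower bound of bracket, marginal rate)
    (0,        0.01),      # 1% normal tax
    (20_000,   0.02),      # normal tax + 1% surtax
    (50_000,   0.03),
    (75_000,   0.04),
    (100_000,  0.05),
    (250_000,  0.06),
    (500_000,  0.07),      # top combined rate of 7%
]

def tax_due(income: float) -> float:
    """Apply each marginal rate only to the income inside its bracket."""
    total = 0.0
    uppers = BRACKETS_1913[1:] + [(float("inf"), None)]
    for (low, rate), (high, _) in zip(BRACKETS_1913, uppers):
        if income > low:
            total += (min(income, high) - low) * rate
    return total

# Only the last $500,000 of a $1,000,000 income is taxed at the full 7%.
print(tax_due(1_000_000))  # 60050.0
```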

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and that rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how they should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the rates; they believed that remedies could be found in changes to the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent. (Smiley and Keehn, 1995)

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to retire outstanding federal debt, which declined by about 25 percent between 1920 and 1930. Contrary to simple macroeconomic models in which a federal budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at or close to full employment throughout the twenties and saw significant economic growth. The surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than withdrawn from circulation.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create twelve district central banks when it created the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the twelve district banks; it was composed of five presidential appointees plus the current secretary of the treasury and comptroller of the currency. All national banks had to become members of the Federal Reserve System (the Fed), and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be held on deposit in the district bank. Member banks were allowed to rediscount commercial paper at the district banks and were issued Federal Reserve currency in return. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations: the purchase and sale of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check-clearing system for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary role was to act as a lender of last resort to prevent banking panics and to serve as a check-clearing mechanism for the nation’s banks. The Federal Reserve Board and the governors of the district banks were expected to exercise these functions jointly, but the division of responsibilities was not clear, and a struggle for power ensued, mainly between the New York Federal Reserve Bank, led through 1928 by J. P. Morgan’s protégé Benjamin Strong, and the Federal Reserve Board. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the gold standard and the real bills doctrine. The gold standard was supposed to be quasi-automatic, imposing an effective limit on the quantity of money. The real bills doctrine, which required that all loans be made on short-term, self-liquidating commercial paper, imposed no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to provide the “elasticity” of the stock of money needed to “accommodate” the needs of industry and business. In practice, the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and discounted banks’ customer loans used to purchase war bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918; in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and discount the Victory bonds necessary to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and of the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action had passed and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were rapidly falling, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922 the rate had been lowered to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and increased the discount rate from its 4 percent level because it believed the recovery was too rapid. By the fall of 1923, however, there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, like the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter the slump, and between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three steps. In addition to moderating the mild business slump, the expansionary policy was intended to reduce American interest rates relative to British interest rates. This reversed the gold flow back toward Great Britain, allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States and especially in Southeastern Florida. Land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market boom. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed but the stock market boom continued.

The American economy entered another mild business recession in the fall of 1926 that lasted until the fall of 1927. One factor in this was Henry Ford’s shutdown of all of his factories for over six months to change over from the Model T to the Model A, which left his employees without jobs and without income. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began, the Fed had already taken steps to counteract the business slump and reduce the gold inflow: in early 1927 it reduced discount rates and made large securities purchases. One result was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the gold exports went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to remain on the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster, and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring that to an end. To do this it sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York Federal Reserve Bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it, and the other district banks, were unwilling to do. They insisted that discount rates had to be increased. The Federal Reserve Board countered that this general policy change would slow down economic activity across the board rather than be specifically targeted at stock market speculation. The result was that little was done for a year: rates were not raised, but no open market purchases were undertaken either. Rates were finally raised to 6 percent in August of 1929, by which time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced the discount rate to 4.5 percent. In January it reduced the rate again, beginning a series of decreases that brought the rate to 2.5 percent by the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the southeast in November and December of 1930, and in its wake the public’s holdings of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not forecast the contraction from 1929 to 1933. Rather it was the depression of the 1930s and the Second World War that interrupted the economic growth begun in the 1920s and resumed after the Second World War. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress resume. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, are echoed in the 1990s in the effects of the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Sons, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L., et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Eric. A Tower in Babel: A History of Broadcasting in the United States: Vol. I—to 1933. New York: Oxford University Press, 1966.

Barnouw, Eric. The Golden Web: A History of Broadcasting in the United States: Vol. II—1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, November 17, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Kenneth Elzinga. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allen Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company,” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: The Belknap Press of Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

De Long, J. Bradford and Andre Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (1991): 675-700.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (October 1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Metheun, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics XII (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University of Kansas Press, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History 11 (1987): 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Simon. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (July 1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelley, reprint of 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Libecap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, Ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: NBER, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1967): 243-47.

McDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 48 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr. U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et. al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876,” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 94 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History. 2d series. 17 (1988): 129-42.

Smiley, Gene, and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” The Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneurs: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review, 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions,” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman, Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Case Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL: http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 47 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives. 4 (Spring 1990): 67-83.

White, Eugene N., ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL: http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s”. EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

The Depression of 1893

David O. Whitten, Auburn University

The Depression of 1893 was one of the worst in American history with the unemployment rate exceeding ten percent for half a decade. This article describes economic developments in the decades leading up to the depression; the performance of the economy during the 1890s; domestic and international causes of the depression; and political and social responses to the depression.

The Depression of 1893 can be seen as a watershed event in American history. It was accompanied by violent strikes, the climax of the Populist and free silver political crusades, the creation of a new political balance, the continuing transformation of the country’s economy, major changes in national policy, and far-reaching social and intellectual developments. Business contraction shaped the decade that ushered out the nineteenth century.

Unemployment Estimates

One way to measure the severity of the depression is to examine the unemployment rate. Table 1 provides estimates of unemployment, which are derived from data on output — annual unemployment was not directly measured until 1929, so there is no consensus on the precise magnitude of the unemployment rate of the 1890s. Despite the differences in the two series, however, it is obvious that the Depression of 1893 was an important event. The unemployment rate exceeded ten percent for five or six consecutive years. The only other time this occurred in the history of the US economy was during the Great Depression of the 1930s.

Timing and Depth of the Depression

The National Bureau of Economic Research estimates that the economic contraction began in January 1893 and continued until June 1894. The economy then grew until December 1895, but it was then hit by a second recession that lasted until June 1897. Estimates of annual real gross national product (which adjust for this period’s deflation) are fairly crude, but they generally suggest that real GNP fell about 4% from 1892 to 1893 and another 6% from 1893 to 1894. By 1895 the economy had grown past its earlier peak, but real GNP fell about 2.5% from 1895 to 1896. During this period population grew at about 2% per year, so real GNP per person did not surpass its 1892 level until 1899. Immigration, which had averaged over 500,000 people per year in the 1880s and which would surpass one million people per year in the first decade of the 1900s, averaged only 270,000 from 1894 to 1898.
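The per-person arithmetic behind that last claim can be checked directly. Below is a minimal sketch in Python; the growth rates are the approximate figures quoted above, so the results are rough illustrations rather than estimates:

    # Index real GNP and population to 1892 = 100 and apply the
    # approximate changes quoted in the text.
    gnp_1893 = 100.0 * (1 - 0.04)        # about 96.0
    gnp_1894 = gnp_1893 * (1 - 0.06)     # about 90.2

    pop_1895 = 100.0 * (1 + 0.02) ** 3   # population grew roughly 2% per year

    # Even if real GNP had merely regained its 1892 level by 1895, output
    # per person would still have been roughly 6% below 1892:
    per_capita_1895 = 100.0 / pop_1895 * 100
    print(round(gnp_1894, 1), round(per_capita_1895, 1))  # 90.2 94.2

By 1899 population stood roughly 15% above its 1892 level, so aggregate output had to exceed the old peak by the same margin before output per person recovered, which is why the per-person recovery lagged until the end of the decade.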

Table 1
Estimates of Unemployment during the 1890s

Year Lebergott Romer
1890 4.0% 4.0%
1891 5.4 4.8
1892 3.0 3.7
1893 11.7 8.1
1894 18.4 12.3
1895 13.7 11.1
1896 14.5 12.0
1897 14.5 12.4
1898 12.4 11.6
1899 6.5 8.7
1900 5.0 5.0

Source: Romer, 1984
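The claim that unemployment exceeded ten percent for five or six consecutive years can be read straight off Table 1; a short sketch using the two series as printed:

    # Years in which each Table 1 series puts unemployment above 10%.
    lebergott = {1890: 4.0, 1891: 5.4, 1892: 3.0, 1893: 11.7, 1894: 18.4,
                 1895: 13.7, 1896: 14.5, 1897: 14.5, 1898: 12.4,
                 1899: 6.5, 1900: 5.0}
    romer = {1890: 4.0, 1891: 4.8, 1892: 3.7, 1893: 8.1, 1894: 12.3,
             1895: 11.1, 1896: 12.0, 1897: 12.4, 1898: 11.6,
             1899: 8.7, 1900: 5.0}

    def years_above(series, threshold=10.0):
        return [year for year, rate in sorted(series.items()) if rate > threshold]

    print(years_above(lebergott))  # 1893 through 1898: six consecutive years
    print(years_above(romer))      # 1894 through 1898: five consecutive years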

The depression struck an economy that was more like the economy of 1993 than that of 1793. By 1890, the US economy generated one of the highest levels of output per person in the world — below that in Britain, but higher than the rest of Europe. Agriculture no longer dominated the economy, producing only about 19 percent of GNP, well below the 30 percent produced in manufacturing and mining. Agriculture’s share of the labor force, which had been about 74% in 1800, and 60% in 1860, had fallen to roughly 40% in 1890. As Table 2 shows, only the South remained a predominantly agricultural region. Throughout the country few families were self-sufficient; most relied on selling their output or labor in the market — unlike those living in the country one hundred years earlier.

Table 2
Agriculture’s Share of the Labor Force by Region, 1890

Northeast 15%
Middle Atlantic 17%
Midwest 43%
South Atlantic 63%
South Central 67%
West 29%

Economic Trends Preceding the 1890s

Between 1870 and 1890 the number of farms in the United States rose by nearly 80 percent, to 4.5 million, and increased by another 25 percent by the end of the century. Farm property value grew by 75 percent, to $16.5 billion, and by 1900 had increased by another 25 percent. The advancing checkerboard of tilled fields in the nation’s heartland represented a vast indebtedness. Nationwide about 29% of farmers were encumbered by mortgages. One contemporary observer estimated 2.3 million farm mortgages nationwide in 1890 worth over $2.2 billion. But farmers in the plains were much more likely to be in debt. Kansas croplands were mortgaged to 45 percent of their true value, those in South Dakota to 46 percent, in Minnesota to 44 percent, in Montana to 41 percent, and in Colorado to 34 percent. Debt covered a comparable proportion of all farmlands in those states. Under favorable conditions the millions of dollars of annual charges on farm mortgages could be borne, but a declining economy brought foreclosures and tax sales.

Railroads opened new areas to agriculture, linking these to rapidly changing national and international markets. Mechanization, the development of improved crops, and the introduction of new techniques increased productivity and fueled a rapid expansion of farming operations. The output of staples skyrocketed. Yields of wheat, corn, and cotton doubled between 1870 and 1890 though the nation’s population rose by only two-thirds. Grain and fiber flooded the domestic market. Moreover, competition in world markets was fierce: Egypt and India emerged as rival sources of cotton; other areas poured out a growing stream of cereals. Farmers in the United States read the disappointing results in falling prices. Over 1870-73, corn and wheat averaged $0.463 and $1.174 per bushel and cotton $0.152 per pound; twenty years later they brought but $0.412 and $0.707 a bushel and $0.078 a pound. In 1889 corn fell to ten cents in Kansas, about half the estimated cost of production. Some farmers in need of cash to meet debts tried to increase income by increasing output of crops whose overproduction had already demoralized prices and cut farm receipts.
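Worked out from the figures just quoted, the twenty-year price declines were steep indeed. A sketch; the underlying prices are the averages given in the text:

    # Percentage declines in staple prices, 1870-73 averages versus
    # twenty years later, from the figures quoted above.
    prices = {                       # (1870-73 average, c. 1890-93 average)
        "corn ($/bu)":   (0.463, 0.412),
        "wheat ($/bu)":  (1.174, 0.707),
        "cotton ($/lb)": (0.152, 0.078),
    }
    for crop, (early, late) in prices.items():
        print(f"{crop}: {(early - late) / early:.0%} decline")
    # corn ~11%, wheat ~40%, cotton ~49%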

Railroad construction was an important spur to economic growth. Expansion peaked between 1879 and 1883, when eight thousand miles a year, on average, were built, including the Southern Pacific, Northern Pacific, and Santa Fe. An even higher peak was reached in the late 1880s, and the roads provided important markets for lumber, coal, iron, steel, and rolling stock.

The post-Civil War generation saw an enormous growth of manufacturing. Industrial output rose by some 296 percent, reaching in 1890 a value of almost $9.4 billion. In that year the nation’s 350,000 industrial firms employed nearly 4,750,000 workers. Iron and steel paced the progress of manufacturing. Farm and forest continued to provide raw materials for such established enterprises as cotton textiles, food, and lumber production. Heralding the machine age, however, was the growing importance of extractives — raw materials for a lengthening list of consumer goods and for producing and fueling locomotives, railroad cars, industrial machinery and equipment, farm implements, and electrical equipment for commerce and industry. The swift expansion and diversification of manufacturing allowed a growing independence from European imports and was reflected in the prominence of new goods among US exports. Already the value of American manufactures was more than half the value of European manufactures and twice that of Britain.

Onset and Causes of the Depression

The depression, which was signaled by a financial panic in 1893, has been blamed on the deflation dating back to the Civil War, the gold standard and monetary policy, underconsumption (the economy was producing goods and services at a higher rate than society was consuming and the resulting inventory accumulation led firms to reduce employment and cut back production), a general economic unsoundness (a reference less to tangible economic difficulties and more to a feeling that the economy was not running properly), and government extravagance.

Economic indicators signaling an 1893 business recession in the United States were largely obscured. The economy had improved during the previous year. Business failures had declined, and the average liabilities of failed firms had fallen by 40 percent. The country’s position in international commerce was improved. During the late nineteenth century, the United States had a negative net balance of payments. Passenger and cargo fares paid to foreign ships that carried most American overseas commerce, insurance charges, tourists’ expenditures abroad, and returns to foreign investors ordinarily more than offset the effect of a positive merchandise balance. In 1892, however, improved agricultural exports had reduced the previous year’s net negative balance from $89 million to $20 million. Moreover, output of non-agricultural consumer goods had risen by more than 5 percent, and business firms were believed to have an ample backlog of unfilled orders as 1893 opened. The number of checks cleared between banks in the nation at large and outside New York, factory employment, wholesale prices, and railroad freight ton mileage advanced through the early months of the new year.

Yet several monthly series of indicators showed that business was falling off. Building construction had peaked in April 1892, later moving irregularly downward, probably in reaction to overbuilding. The decline continued until the turn of the century, when construction volume finally turned up again. Weakness in building was transmitted to the rest of the economy, dampening general activity through restricted investment opportunities and curtailed demand for construction materials. Meanwhile, a similar uneven downward drift in business activity after spring 1892 was evident from a composite index of cotton takings (cotton turned into yarn, cloth, etc.) and raw silk consumption, rubber imports, tin and tin plate imports, pig iron manufactures, bituminous and anthracite coal production, crude oil output, railroad freight ton mileage, and foreign trade volume. Pig iron production had crested in February, followed by stock prices and business incorporations six months later.

The economy exhibited other weaknesses as the March 1893 date for Grover Cleveland’s inauguration to the presidency drew near. One of the most serious was in agriculture. Storm, drought, and overproduction during the preceding half-dozen years had reversed the remarkable agricultural prosperity and expansion of the early 1880s in the wheat, corn, and cotton belts. Wheat prices tumbled twenty cents per bushel in 1892. Corn held steady, but at a low figure and on a fall of one-eighth in output. Twice as great a decline in production dealt a severe blow to the hopes of cotton growers: the season’s short crop canceled gains anticipated from a recovery of one cent in prices to 8.3 cents per pound, close to the average level of recent years. Midwestern and Southern farming regions seethed with discontent as growers watched staple prices fall by as much as two-thirds after 1870 and all farm prices by two-fifths; meanwhile, the general wholesale index fell by one-fourth. The situation was grave for many. Farmers’ terms of trade had worsened, and dollar debts willingly incurred in good times to permit agricultural expansion were becoming unbearable burdens. Debt payments and low prices restricted agrarian purchasing power and demand for goods and services. Significantly, both output and consumption of farm equipment began to fall as early as 1891, marking a decline in agricultural investment. Moreover, foreclosure of farm mortgages reduced the ability of mortgage companies, banks, and other lenders to convert their earning assets into cash because the willingness of investors to buy mortgage paper was reduced by the declining expectation that they would yield a positive return.

Slowing investment in railroads was an additional deflationary influence. Railroad expansion had long been a potent engine of economic growth, ranging from 15 to 20 percent of total national investment in the 1870s and 1880s. Construction was a rough index of railroad investment. The amount of new track laid yearly peaked at 12,984 miles in 1887, after which it fell off steeply. Capital outlays rose through 1891 to provide needed additions to plant and equipment, but the rate of growth could not be sustained. Unsatisfactory earnings and a low return for investors indicated the system was overbuilt and overcapitalized, and reports of mismanagement were common. In 1892, only 44 percent of rail shares outstanding returned dividends, although twice that proportion of bonds paid interest. In the meantime, the completion of trunk lines dried up local capital sources. Political antagonism toward railroads, spurred by the roads’ immense size and power and by real and imagined discrimination against small shippers, made the industry less attractive to investors. Declining growth reduced investment opportunity even as rail securities became less appealing. Capital outlays fell in 1892 despite easy credit during much of the year. The markets for ancillary industries, like iron and steel, felt the impact of falling railroad investment as well; at times in the 1880s rails had accounted for 90 percent of the country’s rolled steel output. In an industry whose expansion had long played a vital role in creating new markets for suppliers, lagging capital expenditures loomed large in the onset of depression.

European Influences

European depression was a further source of weakness as 1893 began. Recession struck France in 1889, and business slackened in Germany and England the following year. Contemporaries dated the English downturn from a financial panic in November. Monetary stringency was a base cause of economic hard times. Because specie — gold and silver — was regarded as the only real money, and paper money was available in multiples of the specie supply, when people viewed the future with doubt they stockpiled specie and rejected paper. The availability of specie was limited, so the longer hard times prevailed the more difficult it was for anyone to secure hard money. In addition to monetary stringency, the collapse of extensive speculations in Australian, South African, and Argentine properties and a sharp break in securities prices marked the advent of severe contraction. The great banking house of Baring Brothers, caught with excessive holdings of Argentine securities in a falling market, shocked the financial world by suspending business on November 20, 1890. Within a year of the crisis, commercial stagnation had settled over most of Europe. The contraction was severe and long-lived. In England many indices fell to 80 percent of capacity; wholesale prices overall declined nearly 6 percent in two years and had declined 15 percent by 1894. An index of the prices of principal industrial products declined by almost as much. In Germany, contraction lasted three times as long as the average for the period 1879-1902. Not until mid-1895 did Europe begin to revive. Full prosperity returned a year or more later.

Panic in the United Kingdom and falling trade in Europe brought serious repercussions in the United States. The immediate result was near panic in New York City, the nation’s financial center, as British investors sold their American stocks to obtain funds. Uneasiness spread through the country, fostered by falling stock prices, monetary stringency, and an increase in business failures. Liabilities of failed firms during the last quarter of 1890 were $90 million — twice those in the preceding quarter. Only the normal year’s end grain exports, destined largely for England, averted a gold outflow.

Circumstances moderated during the early months of 1891, although gold flowed to Europe, and business failures remained high. Credit eased, if slowly: in response to pleas for relief, the federal treasury began the premature redemption of government bonds to put additional money into circulation, and the end of the harvest trade reduced demand for credit. Commerce quickened in the spring. Perhaps anticipation of brisk trade during the harvest season stimulated the revival of investment and business; in any event, the harvest of 1891 buoyed the economy. A bumper American wheat crop coincided with poor yields in Europe, increasing exports and the inflow of specie: US exports in fiscal 1892 were $150 million greater than in the preceding year, a full 1 percent of gross national product. The improved market for American crops was primarily responsible for a brief cycle of prosperity in the United States that Europe did not share. Business thrived until signs of recession began to appear in late 1892 and early 1893.

The business revival of 1891-92 only delayed an inevitable reckoning. While domestic factors led in precipitating a major downturn in the United States, the European contraction operated as a powerful depressant. Commercial stagnation in Europe decisively affected the flow of foreign investment funds to the United States. Although foreign investment in this country and American investment abroad rose overall during the 1890s, changing business conditions temporarily reversed both flows: Americans sold off foreign holdings, and foreigners sold off their holdings of American assets. Initially, contraction abroad forced European investors to sell substantial holdings of American securities; then the rate of new foreign investment fell off. The repatriation of American securities prompted gold exports, deflating the money stock and depressing prices. A reduced inflow of foreign capital slowed expansion and may have exacerbated the declining growth of the railroads; undoubtedly, it dampened aggregate demand.

As foreign investors sold their holdings of American stocks for hard money, specie left the United States. Funds secured through foreign investment in domestic enterprise were important in helping the country meet its usual balance of payments deficit. The reduced inflow of invested funds during the 1890s was one of the factors that, together with a continued negative balance of payments, forced the United States to export gold almost continuously from 1892 to 1896. The impact of depression abroad on the flow of capital to this country can be inferred from the history of new capital issues in Britain, the source of perhaps 75 percent of overseas investment in the United States. British issues varied as shown in Table 3.

Table 3
British New Capital Issues, 1890-1898 (millions of pounds, sterling)

1890 142.6
1891 104.6
1892 81.1
1893 49.1
1894 91.8
1895 104.7
1896 152.8
1897 157.3
1898 150.2

Source: Hoffmann, p. 193

Simultaneously, the share of new British investment sent abroad fell from one-fourth in 1891 to one-fifth two years later. Over that same period, British net capital flows abroad declined by about 60 percent; not until 1896 and 1897 did they resume earlier levels.
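Table 3 lets these movements be quantified directly; a small sketch:

    # Change in British new capital issues (Table 3) relative to 1890.
    issues = {1890: 142.6, 1891: 104.6, 1892: 81.1, 1893: 49.1, 1894: 91.8,
              1895: 104.7, 1896: 152.8, 1897: 157.3, 1898: 150.2}
    base = issues[1890]
    for year in (1893, 1896):
        print(year, f"{(issues[year] - base) / base:+.0%}")
    # 1893: -66% (the trough); 1896: +7% (back above the 1890 level)

The trough in 1893 coincides with the American panic year, and the rebound after 1895 matches the resumption of British capital flows noted above.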

Thus, the recession that began in 1893 had deep roots. The slowdown in railroad expansion, decline in building construction, and foreign depression had reduced investment opportunities, and, following the brief upturn effected by the bumper wheat crop of 1891, agricultural prices fell as did exports and commerce in general. By the end of 1893, 15,242 business failures, averaging $22,751 in liabilities, had been reported. Plagued by successive contractions of credit, many essentially sound firms that would have survived under ordinary circumstances failed. Liabilities totaled a staggering $357 million. This was the crisis of 1893.

Response to the Depression

The financial crises of 1893 accelerated the recession that was evident early in the year into a major contraction that spread throughout the economy. Investment, commerce, prices, employment, and wages remained depressed for several years. Changing circumstances and expectations, and a persistent federal deficit, subjected the treasury gold reserve to intense pressure and generated sharp counterflows of gold. The treasury was driven four times between 1894 and 1896 to resort to bond issues totaling $260 million to obtain specie to augment the reserve. Meanwhile, restricted investment, income, and profits spelled low consumption, widespread suffering, and occasionally explosive labor and political struggles. An extensive but incomplete revival occurred in 1895. The Democratic nomination of William Jennings Bryan for the presidency on a free silver platform the following year amid an upsurge of silverite support contributed to a second downturn peculiar to the United States. Europe, just beginning to emerge from depression, was unaffected. Only in mid-1897 did recovery begin in this country; full prosperity returned gradually over the ensuing year and more.

The economy that emerged from the depression differed profoundly from that of 1893. Consolidation and the influence of investment bankers were more advanced. The nation’s international trade position was more advantageous: huge merchandise exports assured a positive net balance of payments despite large tourist expenditures abroad, foreign investments in the United States, and a continued reliance on foreign shipping to carry most of America’s overseas commerce. Moreover, new industries were rapidly moving to ascendancy, and manufactures were coming to replace farm produce as the staple products and exports of the country. The era revealed the outlines of an emerging industrial-urban economic order that portended great changes for the United States.

Hard times intensified social sensitivity to a wide range of problems accompanying industrialization by making them more severe. Those whom depression struck hardest, as well as much of the general public and the major Protestant churches, sharpened their civic consciousness about currency and banking reform, regulation of business in the public interest, and labor relations. Although nineteenth century liberalism and the tradition of administrative nihilism that it favored remained viable, public opinion began to swing slowly toward the governmental activism and interventionism associated with modern, industrial societies, erecting in the process the intellectual foundation for the reform impulse that was to be called Progressivism in twentieth century America. Most important of all, these opposed tendencies in thought set the boundaries within which Americans for the next century debated the most vital questions of their shared experience. The depression stood as a reminder of the perils of business slumps and of the claims of commonweal above avarice and principle above principal.

Government responses to depression during the 1890s exhibited elements of complexity, confusion, and contradiction. Yet they also showed a pattern that confirmed the transitional character of the era and clarified the role of the business crisis in the emergence of modern America. Hard times, intimately related to developments issuing in an industrial economy characterized by increasingly vast business units and concentrations of financial and productive power, were a major influence on society, thought, politics, and thus, unavoidably, government. Awareness of, and proposals of means for adapting to, deep-rooted changes attending industrialization, urbanization, and other dimensions of the current transformation of the United States long antedated the economic contraction of the nineties.

Selected Bibliography

*I would like to thank Douglas Steeples, retired dean of the College of Liberal Arts and professor of history, emeritus, Mercer University. Much of this article has been taken from Democracy in Desperation: The Depression of 1893 by Douglas Steeples and David O. Whitten, which was declared an Exceptional Academic Title by Choice. Democracy in Desperation includes the most recent and extensive bibliography for the depression of 1893.

Clanton, Gene. Populism: The Humane Preference in America, 1890-1900. Boston: Twayne, 1991.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodwyn, Lawrence. Democratic Promise: The Populist Movement in America. New York: Oxford University Press, 1976.

Grant, H. Roger. Self Help in the 1890s Depression. Ames: Iowa State University Press, 1983.

Higgs, Robert. The Transformation of the American Economy, 1865-1914. New York: Wiley, 1971.

Himmelberg, Robert F. The Rise of Big Business and the Beginnings of Antitrust and Railroad Regulation, 1870-1900. New York: Garland, 1994.

Hoffmann, Charles. The Depression of the Nineties: An Economic History. Westport, CT: Greenwood Publishing, 1970.

Jones, Stanley L. The Presidential Election of 1896. Madison: University of Wisconsin Press, 1964.

Kindleberger, Charles Poor. Manias, Panics, and Crashes: A History of Financial Crises. Revised Edition. New York: Basic Books, 1989.

Kolko, Gabriel. Railroads and Regulation, 1877-1916. Princeton: Princeton University Press, 1965.

Lamoreaux, Naomi R. The Great Merger Movement in American Business, 1895-1904. New York: Cambridge University Press, 1985.

Rees, Albert. Real Wages in Manufacturing, 1890-1914. Princeton, NJ: Princeton University Press, 1961.

Ritter, Gretchen. Goldbugs and Greenbacks: The Antimonopoly Tradition and the Politics of Finance in America. New York: Cambridge University Press, 1997.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94, no. 1 (1986): 1-37.

Schwantes, Carlos A. Coxey’s Army: An American Odyssey. Lincoln: University of Nebraska Press, 1985.

Steeples, Douglas, and David Whitten. Democracy in Desperation: The Depression of 1893. Westport, CT: Greenwood Press, 1998.

Timberlake, Richard. “Panic of 1893.” In Business Cycles and Depressions: An Encyclopedia, edited by David Glasner. New York: Garland, 1997.

White, Gerald Taylor. Years of Transition: The United States and the Problems of Recovery after 1893. University, AL: University of Alabama Press, 1982.

Citation: Whitten, David. “Depression of 1893”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-depression-of-1893/

The United States Public Debt, 1861 to 1975

Franklin Noll, Ph.D.

Introduction

On January 1, 1790, the United States’ public debt stood at $52,788,722.03 (Bayley 31). It consisted of the debt of the Continental Congress and $191,608.81 borrowed by Secretary of the Treasury Alexander Hamilton in the spring of 1789 from New York banks to meet the new government’s first payroll (Bayley 108). Since then the public debt has passed a number of historical milestones: the assumption of Revolutionary War debt in August 1790, the redemption of the debt in 1835, the financing innovations arising from the Civil War in 1861, the introduction of war loan drives in 1917, the rise of deficit spending after 1932, the lasting expansion of the debt from World War II, and the passage of the Budget Control Act in 1975. (The late 1990s may mark another point of significance in the history of the public debt, but it is still too soon to tell.) This short study examines the public debt between the Civil War and the Budget Control Act, the period in which the foundations of our present public debt of over $7 trillion were laid. (See Figure 1.) We start our investigation by asking, “What exactly is the public debt?”

Source: Nominal figures from “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63 and Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm. Real figures adjust for inflation. These figures and conversion factors provided by Robert Sahr.

Definitions

Throughout its history, the Treasury has recognized various categories of government debt. The oldest category and the largest in size is the public debt. The public debt, simply put, is all debt for which the government of the United States is wholly liable. In turn, the general public is ultimately responsible for such debt through taxation. Some authors use the terms federal debt and national debt interchangeably with public debt. From the view of the United States Treasury, this is incorrect.

Federal debt, as defined by the Treasury, is the public debt plus debt issued by government-sponsored agencies for their own use. The term first appears in 1973 when it is officially defined as including “the obligations issued by Federal Government agencies which are part of the unified budget totals and in which there is an element of Federal ownership, along with the marketable and nonmarketable obligations of the Department of the Treasury” (Annual Report of the Secretary of the Treasury, 1973: 13). Put more succinctly, federal debt is made up of the public debt plus contingent debt. The government is partially or, more precisely, contingently liable for the debt of government-sponsored enterprises for which it has pledged its guarantee. On the contingency that a government-sponsored enterprise such as the Government National Mortgage Association ever defaults on its debt, the United States government becomes liable for the debt.

National debt, though a popular term and used by Alexander Hamilton, has never been technically defined by the Treasury. The term suggests that one is referring to all debt for which the government could be liable–wholly or in part. During the period 1861 to 1975, the debt for which the government could be partially or contingently liable has included that of government-sponsored enterprises, railroads, insular possessions (Puerto Rico and the Philippines), and the District of Columbia. Taken together, these categories of debt could be considered the true national debt which, to my knowledge, has never been calculated.

Structure

But it is the public debt–only that debt for which the government is wholly liable–which has been totaled and mathematically examined in a myriad of ways by scholars and pundits. Yet, very few have broken down the public debt into its component parts of marketable and nonmarketable debt instruments: those securities, such as bills, bonds, and notes that make up the basis of the debt. In a simplified form, the structure of the public debt is as follows:

  • Interest-bearing debt
    • Marketable debt
      • Treasuries
    • Nonmarketable debt
      • Depositary Series
      • Foreign Government Series
      • Government Account Series
      • Investment Series
      • REA Series
      • SLG Series
      • US Savings Securities
  • Matured debt
  • Debt bearing no interest
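The hierarchy above maps naturally onto a nested data structure. The sketch below encodes only the layout, which is taken from the text; the leaf amounts vary year to year and are left as placeholders:

    # The 1861-1975 structure of the public debt as a nested dictionary.
    # Leaf values are placeholders; the Treasury's annual reports supply
    # the actual amounts for any given year.
    PUBLIC_DEBT = {
        "Interest-bearing debt": {
            "Marketable debt": {"Treasuries": None},
            "Nonmarketable debt": {
                "Depositary Series": None,
                "Foreign Government Series": None,
                "Government Account Series": None,
                "Investment Series": None,
                "REA Series": None,
                "SLG Series": None,
                "US Savings Securities": None,
            },
        },
        "Matured debt": None,
        "Debt bearing no interest": None,
    }

    def leaves(tree, path=()):
        """Yield the full path to every leaf category."""
        for name, sub in tree.items():
            if isinstance(sub, dict):
                yield from leaves(sub, path + (name,))
            else:
                yield path + (name,)

    for p in leaves(PUBLIC_DEBT):
        print(" > ".join(p))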

Though the elements of the debt varied over time, this basic structure remained constant from 1861 to 1975 and into the present. As we investigate further the elements making up the structure of the public debt, we will focus on information from 1975, the last year of our study. By doing so, we can see the debt at its largest and most complex for the period 1861 to 1975 and in a structure most like that currently held by the public debt. It was also in 1975 that the Bureau of the Public Debt’s accounting and reporting of the public debt took on its present form.

Some Financial Terms

Bearer Security
A bearer security is one in which ownership is determined solely by possession: whoever bears the security owns it.
Callable
The term callable refers to whether and under what conditions the government has the right to redeem a debt issue prior to its maturity date. The date at which a security can be called by the government for redemption is known as its call date.
Coupon
A coupon is a detachable part of a security that bears the interest payment date and the amount due. The bearer of the security detaches the appropriate coupon and presents it to the Treasury for payment. Coupon is synonymous with interest in financial parlance: the coupon rate refers to the interest rate.
Coupon Security
A coupon security is any security that has attached coupons, and usually refers to a bearer security.
Discount
The term discount refers to the sale of a debt instrument at a price below its face or par value.
Liquidity
A security is liquid if it can be easily bought and sold in the secondary market or easily converted to cash.
Maturity
The maturity of a security is the date at which it becomes payable in full.
Negotiable
A negotiable security is one that can be freely sold or transferred to another holder.
Par
Par is the nominal dollar amount assigned to a security by the government. It is the security’s face value.
Premium
The term premium refers to the sale of a debt instrument at a price above its face or par value.
Registered Security
A registered security is one in which the owner of the security is recorded by the Bureau of the Public Debt. Usually both the principal and interest are registered, making them non-negotiable or non-transferable.

Interest-Bearing Debt, Matured Debt, and Debt Bearing No Interest

This major division in the structure of the public debt is fairly self-explanatory. Interest-bearing debt contains all securities that carry an obligation on the part of the government to pay interest to the security’s owner on a regular basis. These debt instruments have not reached maturity. Almost all of the public debt falls into the interest-bearing debt category. (See Figure 2.) Securities that are past maturity (and therefore no longer paying interest), but have not yet been redeemed by their holders are located within the category of matured debt. This is an extremely small part of the total public debt. In the category of debt bearing no interest are securities that are non-negotiable and non-interest-bearing, such as Special Notes of the United States issued to the International Monetary Fund. Securities in this category are often issued for one-time or extraordinary purposes. Also in the category are obsolete forms of currency such as fractional currency, legal tender notes, and silver certificates. In total, old currency made up only 0.114% of the public debt in 1975. The Federal Reserve Notes, which have been issued since 1914 and which we deal with on a daily basis, are obligations of the Federal Reserve and thus not part of the public debt.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63.

During the period under study, the value of outstanding matured debt generally grew with the overall size of the debt, except for a spike in the amount of unredeemed securities in the mid and late 1950s. (See Figure 3.) This was caused by the maturation of United States Savings Bonds bought during World War II. Many of these war bonds lay forgotten in people’s safe-deposit boxes for years. Wartime purchases of Defense Savings Stamps and War Savings Stamps account for much of the sudden increase in debt bearing no interest from 1943 to 1947. (See Figure 4.) The year 1947 saw the United States issuing non-interest paying notes to fund the establishment of the International Monetary Fund and the International Bank for Reconstruction and Development (part of the World Bank). As interest-bearing debt makes up over 99% of the public debt, it is basically equivalent to it. (See Figure 5.) And, the history of the overall public debt will be examined later.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63.

Marketable Debt and Nonmarketable Debt

Interest-bearing debt is divided between marketable debt and nonmarketable debt. Marketable debt consists of securities that can be easily bought and sold in the secondary market. The Treasury has used the term since World War II to describe issues that are available to the general public in registered or bearer form without any condition of sale. Nonmarketable debt refers to securities that cannot be bought and sold in the secondary market, though there are rare exceptions. Generally, nonmarketable government securities may only be bought from or sold to the Treasury. They are issued in registered form only and/or can be bought only by government agencies, specific business enterprises, or individuals under strict conditions.

The growth of the marketable debt largely mirrors that of total interest-bearing debt; and until 1918, there was no such thing as nonmarketable debt. (See Figure 6.) Nonmarketable debt arose in fiscal year 1918, when securities were sold to the Federal Reserve in an emergency move to raise money as the United States entered World War I. This was the first sale of “special issue” securities as nonmarketable debt securities were classified prior to World War II. Special or nonmarketable issues continued through the interwar period and grew with the establishment of government programs. Such securities were sometimes issued by the Treasury in the name of a government fund or program and were then bought by the Treasury. In effect, the Treasury extended a loan to the government entity. More often the Treasury would sell a special security to the government fund or program for cash, creating a loan to the Treasury and an investment vehicle for the government entity. And, as the number of government programs grew and the size of government funds (like those associated with Social Security) expanded, so did the number and value of nonmarketable securities–greatly contributing to the rapid growth of nonmarketable debt. By 1975, these intragovernment securities combined with United States Savings Bonds helped make nonmarketable debt 40% of the total public debt. (See Figure 7.)

Source: The following were used to calculate outstanding marketable debt: Data for 1861 to 1880 derived from Rafael A. Bayley, The National Loans of the United States from July 4, 1776, to June 30, 1880, second edition, facs rpt (New York: Burt Franklin, 1970 [1881]), 180-84 and Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1861), 44. Post-1880 numbers derived from “Analysis of the Principal of the Interest-Bearing Public Debt of the United States from July 1, 1856 to July 1, 1912,” idem (1912), 102-03; “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71. The marketable debt figures were then subtracted from total outstanding interest bearing debt to obtain nonmarketable figures.

Source: “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.
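The two directions of intragovernmental lending described above can be illustrated with simplified ledger entries. This is a sketch only; the amounts and the fund name are hypothetical, not drawn from Treasury records:

    # Simplified ledger entries for the two intragovernmental patterns
    # described above. Amounts and the fund name are hypothetical.
    ledger = []

    def treasury_lends_to_fund(fund, amount):
        # Treasury issues a special security in the fund's name and buys
        # it itself: in effect, a loan from the Treasury to the fund.
        ledger.append((fund, "owes Treasury", amount))

    def fund_invests_in_treasury(fund, amount):
        # Treasury sells a special security to the fund for cash: a loan
        # to the Treasury and an investment vehicle for the fund. This
        # pattern drove the growth of the Government Account Series.
        ledger.append(("Treasury", f"owes {fund}", amount))

    treasury_lends_to_fund("Retirement Fund", 1_000_000)
    fund_invests_in_treasury("Retirement Fund", 5_000_000)
    print(ledger)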

Marketable Debt Securities: Treasuries

The general public is most familiar with those marketable debt instruments falling within the category of Treasury securities, more popularly known as simply Treasuries. These securities can be bought by anyone and have active secondary markets. The most commonly issued Treasuries between 1861 and 1975 are the following, listed in order of length of time to maturity, shortest to longest:

Treasury certificate of indebtedness
A couponed, short-term, interest-bearing security. It can have a maturity of as little as one day or as long as five years. Maturity is usually between 3 and 12 months. These securities were largely replaced by Treasury bills.
Treasury bill
A short-term security issued on a discount basis rather than at par; the price is determined by competitive bidding at auction. Bills have a maturity of a year or less and are usually sold on a weekly basis with maturities of 13 weeks and 26 weeks. They were first issued in December 1929. (A pricing sketch follows this list.)
Treasury note
A couponed, interest-bearing security that generally matures in 2 to 5 years. In 1968, the Treasury began to issue 7-year notes, and in 1976, the maximum maturity of Treasury notes was raised to 10 years.
Treasury bond
A couponed interest-bearing security that normally matures after 10 or more years.
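Because a bill is sold at a discount rather than at par, its price embodies the investor’s return directly, with no coupons for the Treasury to service. The sketch below uses the conventional bank-discount formula; the rate, face value, and 360-day convention are illustrative assumptions, not figures from the text:

    # Price of a discount bill under the conventional bank-discount
    # formula: price = face * (1 - d * days / 360). Figures illustrative.
    def bill_price(face, discount_rate, days):
        return face * (1 - discount_rate * days / 360)

    price = bill_price(face=10_000, discount_rate=0.05, days=91)  # 13-week bill
    print(round(price, 2))  # 9873.61; the $126.39 discount is the return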

The story of these securities between 1861 and 1975 is one of a general movement by the Treasury to issue ever more securities in the shorter maturities–certificates of indebtedness, bills, and notes. Until World War I, the security of preference was the bond with a call date before maturity. (See Figure 8.) Such an instrument provided the minimum attainable interest rate for the Treasury and was in demand as a long-term investment vehicle by investors. The pre-maturity call date allowed the Treasury the flexibility to redeem the bonds during a period of surplus revenue. Between 1861 and 1917, certificates of indebtedness were issued on occasion to manage cash flow through the Treasury and notes were issued only during the financial crisis years of the Civil War.

Source: Franklin Noll, A Guide to Government Obligations, 1861-1976, unpublished ms., 2004.

In terms of both numbers and values, the change to shorter maturity Treasury securities began with World War I. Unprepared for the financial demands of World War I, the Treasury was perennially short of cash and issued a great number of certificates of indebtedness and short-term notes. A market developed for these securities, and they were issued throughout the interwar period to meet cash demands and refund the remaining World War I debt. While the number of bonds issued rose in the World War I and World War II years, by 1975 bond issues had become rare; and by the late 1960s, the value of bonds issued was in steep decline. (See Figure 9.) In part, this was the effect of interest rates moving beyond statutory limits set on the interest rate the Treasury could pay on long-term securities. The primary reason for the decline of the bond, however, was post-World War II economic growth and inflation that drove up interest rates and established expectations of rising inflation. In such conditions, shorter term securities were more in favor with investors who sought to ride the rising tide of interest rates and keep their financial assets as liquid as possible. Correspondingly, the number and value of notes and bills rose throughout the postwar years. Certificates of indebtedness declined as they were replaced by bills. Treasury bills won out because they were easier and therefore less expensive for the Treasury to issue than certificates of indebtedness. Bills required no predetermination of interest rates or servicing of coupon payments.

Source: Data for 1861 to 1880 derived from Rafael A. Bayley, The National Loans of the United States from July 4, 1776, to June 30, 1880, second edition, facs rpt (New York: Burt Franklin, 1970 [1881]), 180-84 and Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1861), 44. Post-1880 numbers derived from “Analysis of the Principal of the Interest-Bearing Public Debt of the United States from July 1, 1856 to July 1, 1912,” idem (1912), 102-03; “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.
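The pull toward shorter maturities under rising rates follows from basic present-value arithmetic: when yields rise, long bonds lose far more market value than short instruments. A sketch with illustrative numbers (none taken from the text):

    # Present value of a couponed bond, showing why long maturities lose
    # more value when yields rise. All numbers are illustrative.
    def bond_price(face, coupon_rate, yield_rate, years):
        coupon = face * coupon_rate
        pv_coupons = sum(coupon / (1 + yield_rate) ** t
                         for t in range(1, years + 1))
        return pv_coupons + face / (1 + yield_rate) ** years

    for years in (1, 5, 20):
        at_par = bond_price(100, 0.04, 0.04, years)    # priced at par
        repriced = bond_price(100, 0.04, 0.05, years)  # yields rise 1 point
        print(years, "yr:", round(repriced - at_par, 2))
    # 1 yr: -0.95; 5 yr: -4.33; 20 yr: -12.46

An investor expecting rates to keep rising would therefore prefer the bill or short note, whose price barely moves and which matures quickly into cash.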

Nonmarketable Debt Securities

Securities sold as nonmarketable debt come in the forms above–certificate of indebtedness, bill, note, and bond. Most, but not all, nonmarketable securities fall into these series or categories:

Depositary Series
Made up of depositary bonds held by depositary banks. These are banks that provide banking facilities for the Treasury. Depositary bonds act as collateral for the Treasury funds deposited at the bank. The interest on these collateral securities provides the banks with income for the services rendered.
Foreign Government Series
The group of Treasury securities sold to foreign governments or used in foreign exchange stabilization operations.
Government Account Series
Refers to all types of securities issued to or by government accounts and trust funds.
Investment Series
Contains Treasury Bond, Investment Series securities sold to institutional investors.
REA Series
Rural Electrification Administration Series securities are sold to recipients of Rural Electrification Administration loans who have unplanned excess loan money. Holding the excess funds in the form of bonds gives the borrower the capacity to cash in the bonds and retrieve the unused loan funds without the need for negotiating a new loan.
SLG Series
State and Local Government Series securities were first issued in 1972 to help state and municipal governments meet federal arbitrage restrictions.
US Savings Securities
United States Savings Securities refers to a group of securities consisting of savings stamps and bonds (most notably United States Savings Bonds) aimed at small, non-institutional investors.

A number of nonmarketable securities fall outside these series. The special issue securities sold to the Federal Reserve in 1917 (the first securities recognized as nonmarketable) and mentioned above do not fit into any of these categories; neither do securities providing tax advantages, like Mortgage Guaranty Insurance Company Tax and Loss Bonds, or Special Notes of the United States issued on behalf of the International Monetary Fund. Treasury reports are, in fact, frustratingly full of anomalies and contradictions. One major anomaly is Postal Savings Bonds. First issued in 1911, Postal Savings Bonds were United States Savings Securities that were bought by depositors in the now defunct Postal Savings System. These bonds, unlike United States Savings Bonds, were fully marketable and could be bought and sold on the open market. As savings securities, they are included in the nonmarketable United States Savings Securities series even though they are marketable. (It is to include these anomalous securities that we begin the graphs below in 1910.)

The United States Savings Security Series and the Government Account Series were the most significant in the growth of the nonmarketable component of the public debt. (See Figure 10.) The real rise in savings securities began with the introduction of the nonmarketable United States Savings Bonds in 1935. The bond drives of World War II established these savings bonds in the American psyche and in small investors’ portfolios. The issue of securities for the benefit of government funds or programs began in 1925 and, as in the case of savings securities, really took off with the stimulus of World War II. The growth of government and government programs continued to stimulate the growth of the Government Account Series, making it the largest part of the nonmarketable debt by 1975. (See Figure 13.)

Source: Various tables and exhibits, Annual Report of the Secretary of the Treasury on the State of the Finances (Washington, DC: Government Printing Office, 1910-1932); “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

The Depositary, REA, and SLG series were of minor importance throughout the period, with depositary bonds declining because their fixed interest rate of 2% became increasingly uncompetitive as inflation rose. (See Figure 11.) As the Investment Series was tied to a single security, it declined with the gradual redemption of Treasury Bond, Investment Series securities. (See Figure 12.) The Foreign Government Series grew with escalating efforts to stabilize the value of the dollar in foreign exchange markets. (See Figure 12.)

Source: “Description of Public Debt Issues Outstanding, June 30, 1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 88-112.

History of the Public Debt

While we have examined the development of the various components of the public debt, we have yet to consider the public debt as a whole. Quite a few writers in the recent past have commented on the ever-growing size of the public debt. Many were concerned that the public debt figures were becoming astronomical and that there was no end in sight to their growth, as perennial budget deficits forced the government to keep borrowing money. Such fears are not entirely new to our country. During the Civil War, World War I, and World War II, people were astounded at the unprecedented heights reached by the public debt in wartime. What changed during World War II (and perhaps a bit before) was the assumption that the public debt would be paid down once the crisis of the moment was over. The pattern in America’s past was that after each war every effort would be made to pay off the accumulated debt as quickly as possible. Thus we find declines in the total public debt after the Civil War, World War I, and World War II. (See Figures 14 and 15.) Until the United States’ entry into World War I, the public debt never exceeded $3 billion (see Figure 14), and the debt would probably have returned to near this level after World War I had the Great Depression and World War II not intervened. Yet the last contraction of the public debt between 1861 and 1975 occurred in 1957. (See Figures 15 and 18.) From that point on, the debt grew at an ever-increasing rate. Why?

The period 1861 to 1975 roughly divides into two eras with two corresponding philosophies on the public debt. From 1861 to 1932, government officials basically followed traditional precepts of public debt management, pursuing balanced budgets and paying down any debt as quickly as possible (Withers, 35-42). We will label these officials traditionalists. To oversimplify, for traditionalists the economy was not to be meddled with by the government, as no good would come of it. The ups and downs of business cycles were natural phenomena that had to be endured and, when possible, provided for through the accumulation of budget surpluses. These views of national finance and the public debt held sway before the Great Depression and lingered on into the 1950s (Conklin, 234). But it was during the Great Depression and the first term of President Franklin Roosevelt that we see the acceptance of what was then called the “new economics” and would later be called Keynesianism. Basically, “new” economists believed that the business cycle could be counteracted through government intervention in the economy (Withers, 32). During economic downturns, the government could dampen the down cycle by stimulating the economy through lower taxes, increased government spending, and an expanded money supply. As the economy recovered, these stimulants would be reversed to dampen the up cycle. These beliefs gained ever greater currency over time, and we will designate the period 1932 to 1975 the New Era.

The Traditional Era, 1861-1932

(This discussion focuses on Figures 14 and 16. Also see Figures 18, 19, and 20.) In 1861, the public debt stood at roughly $65 million. At the end of the Civil War the debt was some 42 times greater, at $2,756 million, and the country was off the gold standard. The Civil War was paid for by a new personal income tax, massive bond issues, and the printing of currency, popularly known as Greenbacks. Once the war was over, there was a drive to return to the status quo ante bellum through a return to the gold standard, a pay-down of the public debt, and the retirement of Greenbacks. The period 1866 to 1893 saw 28 continuous years of budget surpluses, with revenues pouring in from tariffs and land sales in the West. During that time, successive Secretaries of the Treasury redeemed public debt securities to the greatest extent possible, often buying securities at a premium in the open market. The debt declined almost continuously, reaching a low of $961 million in 1893; the brief exception came in the late 1870s, as the country dealt with the recessionary aftereffects of the Panic of 1873 and the controversy over the resumption of the gold standard in 1879. The Panic of 1893 and a decline in tariff revenues brought a period of budget deficits and raised the public debt slightly from its 1893 low to a steady average of around $1,150 million in the years leading up to World War I. The first war loan drives occurred during World War I. With the aid of the recently established Federal Reserve, the Treasury held four Liberty Loan drives and one Victory Loan drive. The Treasury also introduced low-cost savings certificates and stamps to attract the smallest investors. For 25 cents, one could aid the war effort by buying a Thrift Stamp. As at the end of previous wars, once World War I ended there was a concerted drive to pay down the debt. By 1931, the debt had been reduced to $16,801 million from a wartime high of $25,485 million. The first budget deficit since the end of the war also appeared in 1931, marking the deepening of the Great Depression and a move away from the fiscal orthodoxy of the past.
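The arithmetic behind these comparisons is easy to verify. A minimal sketch in Python reproduces the two calculations; the variable names are ours, and the dollar figures are simply the nominal amounts quoted above, in millions.

# Civil War expansion: how many times larger was the debt in 1865?
debt_1861 = 65        # millions of dollars, start of the Civil War
debt_1865 = 2756      # millions of dollars, end of the Civil War
print(debt_1865 / debt_1861)   # roughly 42.4, i.e. "some 42 times greater"

# Post-World War I pay-down: share of the wartime high retired by 1931
wartime_high = 25485  # millions of dollars
debt_1931 = 16801     # millions of dollars
print((wartime_high - debt_1931) / wartime_high)  # roughly 0.34, about a third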

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63.

Source: Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

The New Era, 1932-1975

(This discussion focuses on Figures 15 and 17. Also see Figures 18, 19, and 20.) It was Roosevelt who first experimented with deficit spending to pull the economy out of depression and to stimulate jobs through the creation of public works programs and other elements of his New Deal. Though taxes were raised on the wealthy, the depressed state of the economy meant that government revenues were far too low to finance the New Deal. As a result, Roosevelt in his first year ran a budget deficit almost six times greater than that of Hoover’s last year in office. Between 1931 and 1941, the public debt tripled in size, standing at $48,961 million upon the United States’ entry into World War II. To help fund the debt and get hoarded money back into circulation, the Treasury introduced the United States Savings Bond. Nonmarketable, with a guaranteed redemption value at any point in the life of the security and a denomination as low as $25, the savings bond was aimed at small investors fearful of continued bank collapses. With the advent of war, these bonds became War Savings Bonds and were the focus of the eight war loan drives of World War II, which also included Treasury bonds and certificates of indebtedness. Because of the war, the public debt reached a height of $269,422 million.

The experience of the New Deal, combined with the low unemployment and victory of wartime, seemed to confirm Keynesian theories and reduce the fear of budget deficits. In 1946, Congress passed the Employment Act, committing the government to the pursuit of low unemployment through government intervention in the economy, which could include deficit spending. Though Truman and Eisenhower promoted some government intervention in the economy, they were still economic traditionalists at heart and sought to pay down the public debt as much as possible. And, despite massive foreign aid, a sharp recession in the late 1950s, and large-scale foreign military deployments, including the Korean War, these two presidents were able to present budget surpluses more than half of the time and to limit the growth of the public debt to an average of $1,000 million per year. From 1960 to 1975, there would be only one year of budget surplus, and the public debt would grow at an average rate of $17,040 million per year. It was with the election of 1960 and the arrival of the Kennedy administration that the “new economics,” or Keynesianism, came into full flower within the government. In the 1960s and 1970s, tax cuts and increased domestic spending were pursued not only to improve society but also to move the economy toward full employment. However, these economic stimulants were applied not just on down cycles of the economy but also on up cycles, resulting in ever-growing deficits. Added to this domestic spending were continued outlays on military deployments overseas, including Vietnam, and borrowings in foreign markets to prop up the value of the dollar. During boom years, government revenues did increase but never enough to outpace spending. The exception was 1969, when a high rate of inflation boosted nominal revenues, though the gain was offset by the increased nominal cost of servicing the debt. By 1975, the United States was suffering from the high inflation and high unemployment of stagflation, and the budget deficits seemed to take on a life of their own. Each downturn in the economy brought smaller revenues, aggravated by tax cuts, while spending soared because of increased welfare and unemployment benefits and other government spending aimed at spurring job creation. The net result was an ever-increasing charge on the public debt and the huge numbers that have concerned so many, past and present.

Source: Nominal figures from “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63; real figures adjust for inflation and are provided by Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

Source: Derived from figures provided by Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

Source: Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

We end this study in 1975 with the passage of the Budget Control Act. Formally entitled the Congressional Budget and Impoundment Control Act of 1974, it was passed on July 12, 1974 (early in fiscal year 1975). Some of its most notable provisions were the establishment of House and Senate Budget Committees, the creation of the Congressional Budget Office, and the removal of impoundment authority from the President. Impoundment was the President’s ability to refrain from spending funds authorized in the budget. For example, if a government program ended up not spending all the money allotted to it, the President (or, more specifically, the Treasury under the President’s authority) did not have to pay out the unneeded money. Or, if the President did not want to fund a project passed by Congress in the budget, he could in effect veto it by instructing the Treasury not to release the money. In sum, the Budget Control Act shifted the balance of budgetary power from the executive branch to the Congress. The effect was to weaken restraints on Congressional spending and to contribute to the increased deficits and sharp upward growth in the public debt over the next couple of decades. (See Figures 18, 19, and 20.)

But the Budget Control Act was a watershed for the public debt not only in its rate of growth but also in the way it was recorded and reported. The act changed the fiscal year (the twelve-month period used to determine income and expenses for accounting purposes) from one running July 1 through June 30 to one running October 1 through September 30. The Budget Control Act also initiated the reporting system currently used by the Bureau of the Public Debt. Fiscal year 1975 saw the first publication of the Monthly Statement of the Public Debt of the United States. For the first time, it reported the public debt in the structure we examined above, a structure still used by the Treasury today.

Conclusion

The public debt from 1861 to 1975 was the product of many factors. First, it was the result of accountancy on the part of the United States Treasury: only certain obligations of the United States fall within the definition of the public debt. Second, the debt was the effect of Treasury debt management decisions as to which debt instruments or securities were to be used to finance the debt. Third, the public debt was fundamentally a product of budget deficits. Massive government spending in itself did not create deficits and add to the debt; it was only when revenues were not sufficient to offset the spending that deficits and government borrowing became necessary. At times, as during wartime or severe recessions, deficits were largely unavoidable. The change that occurred between 1861 and 1975 was the attitude of the government and the public toward budget deficits. Until the Great Depression, deficits were seen as injurious to the public good, and the public debt was viewed with unease as something the country could really do without. After the Great Depression, deficits were still not welcomed but were now viewed as a necessary tool for aiding economic recovery and creating jobs. After World War II, rising expectations of continuous economic growth and high employment at home and the extension of United States power abroad spurred the use of deficit spending. And the belief among some influential Keynesians that more tinkering with the economy was all that was needed to fix an economy mired in stagflation created an almost self-perpetuating growth of the public debt. In the end, the history of the public debt is not so much about accountancy or Treasury securities as about national ambitions, politics, and economic theories.

Annotated Bibliography

Though much has been written about the public debt, very little of it is of much use for economic analysis or for learning the history of the public debt. Most books deal with an ever-pending public debt crisis and give policy recommendations on how to solve the problem. There are, however, a few works worth recommending:

Annual Report of the Secretary of the Treasury on the State of the Finances. Washington, DC: Government Printing Office, annual editions to 1980.

This is the basic source for all information on the public debt until 1980.

Bayley, Rafael A. The National Loans of the United States from July 4, 1776, to June 30, 1880. Second edition. Facsimile reprint. New York: Burt Franklin, 1970 [1881].

This is the standard work on early United States financing written by a Treasury bureaucrat.

Bureau of the Public Debt. “The Public Debt Online.” URL: http://www.publicdebt.treas.gov/opd/opd.htm.

Provides limited data on the public debt, but includes all past issues of the Monthly Statement of the Public Debt.

Conklin, George T., Jr. “Treasury Financial Policy from the Institutional Point of View.” Journal of Finance 8, no. 2 (May 1953): 226-34.

This is a contemporary’s disapproving view of the growing acceptance of the “new economics” that appeared in the 1930s.

Gordon, John Steele. Hamilton’s Blessing: the Extraordinary Life and Times of Our National Debt. New York: Penguin, 1998.

This is a very readable, brief overview of the history of the public debt.

Love, Robert A. Federal Financing: A Study of the Methods Employed by the Treasury in Its Borrowing Operations. Reprint of 1931 edition. New York: AMS Press, 1968.

This is the most complete and thorough account of the structure of the public debt. Unfortunately, it only goes up to 1925.

Noll, Franklin. A Guide to Government Obligations, 1861-1976. Unpublished ms. 2004.

This is a descriptive inventory and chronological listing of the roughly 12,000 securities issued by the Treasury between 1861 and 1976.

Office of Management and Budget. “Historical Tables.” Budget of the United States Government, Fiscal Year 2005. URL: http://www.whitehouse.gov/omb/budget/fy2005/pdf/hist.pdf.

Provides data on the public debt, budgets, and federal spending, though its coverage focuses on the later twentieth century.

Sahr, Robert. “National Government Budget.” URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahr.htm.

This is a valuable web site containing a useful collection of detailed graphs on government spending and the public debt.

Withers, William. The Public Debt. New York: John Day Company, 1945.

Like Conklin, this is a contemporary’s view of the change in perspectives on the public debt occurring in the 1930s. Withers tends to favor the “new economics.”

Citation: Noll, Franklin. “The United States Public Debt, 1861 to 1975”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-united-states-public-debt-1861-to-1975/

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Florida

Introduction

The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred technological innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century it had become a necessity of everyday life. In an era when smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp, in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor in American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal power, there was little need for mineral fuel in seventeenth- and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade, at that time centered in the Richmond coal basin of eastern Virginia, would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its position on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade, and the James River and Kanawha Canal failed to make the improvements necessary to accommodate coal barge traffic and to streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered the urban markets of the American seaboard. Anthracite coal has a higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular needs of anthracite. The opening of several transportation links between Pennsylvania’s anthracite fields and urban markets via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson (1829) ensured that the flow of anthracite from mine to market would be cheap and fast. “Stone coal” became less a geological curiosity by the 1830s and instead emerged as a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure 1: Percentage of Seaboard Coal Consumption by Origin, 1822-1842

Sources:
Hunt’s Merchants’ Magazine and Commercial Review 8 (June 1843): 548;
Alfred Chandler, “Anthracite Coal and the Beginnings of the Industrial Revolution,” p. 154.

The Spread of Coalmining

The antebellum period also saw the expansion of coal mining into many more states than Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850; only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets via the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries had appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years — often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad, or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Since most antebellum coal mining operations were limited to a few skilled miners aided by less skilled laborers, labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, which meant that work was less dangerous than it would become in the era of deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power, even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions insured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July 1842, when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburg Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners in a union, and they struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work, and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio follow the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adaptation of anthracite coal to iron making. Since the 1780s, bituminous coal or coke (bituminous coal with the impurities burned away) had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal made it resistant to ignition under the cold blast, and anthracite therefore appeared to be an inappropriate fuel for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no fewer than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is affected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America; as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, and New Orleans. As wood, animal power, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets, and by 1850 annual production had increased to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton, and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades, but the outbreak of the Civil War led to some major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when prices are adjusted for wartime inflation, they increased substantially over the course of the conflict. Over the years 1860 to 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and by 1864 the real price had increased to forty-five percent above its 1860 level. In response, production increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.
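The “real price” comparisons above rest on a standard deflation step: the nominal price is divided by the ratio of the current price level to the base-year price level. A minimal sketch in Python illustrates the method only; the wartime nominal price and both index values below are hypothetical placeholders, not figures from the text.

# Deflating a nominal price to a real (inflation-adjusted) price, base year 1860.
# The 1863 nominal price and both index values are HYPOTHETICAL illustrations.
nominal_1860 = 5.50    # dollars per ton (the 1860 New York price cited earlier)
nominal_1863 = 9.50    # hypothetical wartime nominal price, dollars per ton
index_1860 = 100.0     # price level in the base year
index_1863 = 130.0     # hypothetical wartime price level

real_1863 = nominal_1863 / (index_1863 / index_1860)
print(real_1863)                     # about 7.31 in 1860 dollars
print(real_1863 / nominal_1860 - 1)  # about 0.33, a rise of "over thirty percent"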

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to new bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia with Richmond, Virginia in 1873, the rich bituminous coal fields of southern West Virginia were open for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing their railroad from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and needed only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, but some used black powder to blast holes in the coal seams, then loaded the broken coal onto wagons by hand. As miners sought to remove more coal, however, shafts were dug deeper below the water line. As a result, coal mining required ever larger amounts of capital, since the new systems of pumping, ventilation, and extraction demanded steam power in the mines. By the 1890s, electric cutting machines replaced the blasting method of loosening the coal in some mines, and by 1900 a quarter of American coal was mined using these methods. As the century progressed, miners raised more and more coal with the new technology. Along with this rising productivity, however, came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens and the national production of coke in the United States stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom and bust periods, low profit margins, and cutthroat competition among firms. But however it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. In 1890, a national mining union appeared when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when widespread strikes pushed many workers into union membership. By 1903, the UMWA counted about a quarter of a million members, held a treasury worth over one million dollars, and played a major role in the industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised 57 million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coalfields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company symbolized a new coal industry in which hard-line positions developed in both labor and capital’s respective camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.

Table 1: Coal Production in the United States, 1829-1899

(Production in thousands of tons; the percent increase and per capita columns refer to anthracite and bituminous production combined.)

Year | Anthracite | Bituminous | Percent Increase over Decade | Tons per Capita
1829 | 138 | 102 | -- | 0.02
1839 | 1,008 | 552 | 550 | 0.09
1849 | 3,995 | 2,453 | 313 | 0.28
1859 | 9,620 | 6,013 | 142 | 0.50
1869 | 17,083 | 15,821 | 110 | 0.85
1879 | 30,208 | 37,898 | 107 | 1.36
1889 | 45,547 | 95,683 | 107 | 2.24
1899 | 60,418 | 193,323 | 80 | 3.34

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.
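The last two columns of Table 1 are derived from the production figures. A minimal sketch in Python, using the table’s own 1889 and 1899 rows, shows the derivation; the 1900-census population of roughly 76 million is an outside assumption added only for illustration.

anthracite_1889, bituminous_1889 = 45547, 95683   # thousands of tons, from Table 1
anthracite_1899, bituminous_1899 = 60418, 193323  # thousands of tons, from Table 1

total_1889 = anthracite_1889 + bituminous_1889    # 141,230
total_1899 = anthracite_1899 + bituminous_1899    # 253,741

# "Percent Increase over Decade" for 1899
print(round(100 * (total_1899 - total_1889) / total_1889))  # 80, as in the table

# "Tons per Capita" for 1899, assuming a population of about 76 million in 1900
population_1900 = 76000000
print(round(total_1899 * 1000 / population_1900, 2))        # 3.34, as in the table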

Table 2: Leading Coal Producing States, 1889

State | Coal Production (thousands of tons)
Pennsylvania | 81,719
Illinois | 12,104
Ohio | 9,977
West Virginia | 6,232
Iowa | 4,095
Alabama | 3,573
Indiana | 2,845
Colorado | 2,544
Kentucky | 2,400
Kansas | 2,221
Tennessee | 1,926

Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187.

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Adams, Sean Patrick. Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America. Baltimore: Johns Hopkins University Press, 2004.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M. editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves. Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, Connecticut: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge: Massachusetts Institute of Technology Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis. Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair: A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The Johns Hopkins Press, 1961.

Citation: Adams, Sean. “US Coal Industry in the Nineteenth Century”. EH.Net Encyclopedia, edited by Robert Whaples. January 23, 2003. URL http://eh.net/encyclopedia/the-us-coal-industry-in-the-nineteenth-century/

The Bus Industry in the United States

Margaret Walsh, University of Nottingham

Despite its importance to everyday life, historians have paid surprisingly little attention to modern road transportation. There have been some valuable studies of the automobile, its production and its impact on society and the economy. This article surveys the history of a branch of modern transportation that has been almost completely ignored: the motorized bus.

Missing from History

Why has there been such neglect? Part of the explanation lies in an image problem. As the slowest form of motorized transportation and the cheapest form of public transportation, buses have, since the middle of the twentieth century, been perceived as the option of those who cannot afford to travel by car, train or plane. They have thus become associated with the young, the elderly, the poor, minority groups and women. Historians have avoided contact with bus history as they have avoided contact with bus travel. They have preferred to pay attention to trains and rail companies, especially those of the nineteenth century. Particularly in the United States, where rail service has become geographically very limited, an aura of pathos and romance is still associated with the ‘Iron Horse.’ Indeed, there is an inverse relationship between the extent of academic and enthusiast knowledge of a mode of transportation and its actual use. But perhaps of equal importance in encouraging research and writing on rail and air travel is the survival of business records. These materials have been made available in public or company depositories, and they offer ample evidence for splendid volumes, whether corporate histories or general interest reading. Bus records have not been easily accessible. Neither of the two major American bus carriers, Greyhound and Trailways, has an available corporate archive. Their historical materials, deposited elsewhere, are scattered and haphazard. Other company archives are few in number and thin in volume. Bus information seems to be as scarce as bus passengers in recent times. Nevertheless, enough materials do exist to demonstrate that the long-distance bus industry has offered a useful service and deserves to have its place in the nation’s history recognized.

The statistics on intercity passenger services provide the framework for understanding the growth and position of the motor bus in the United States. In 1910 railroad statistics were the only figures worthy of note. With 240,631 miles of rail track in operation, trains provided a network capable of bringing the nation together. In the second decade of the twentieth century, however, the automobile, now being mass-produced, became more readily available, and in the 1920s it became popular, with one car for every 6.6 persons. Then two other motor vehicles, the bus and the truck, emerged in their own right, and even the plane offered some pioneering passenger trips. As Table 1 documents, by 1929, when figures for the distribution of intercity travel become available, the train had already lost out to the auto, though it retained its dominance as a public carrier. For most of the remainder of the century, except for the gasoline shortages during the Second World War, the private automobile accounted for over eighty percent of domestic intercity travel.

Table 1

Intercity Travel in the United States by Mode

(Billions of passenger miles, 1929-1999; each mode’s percentage share of total intercity travel in parentheses (1))

Year | Total | Private carrier: Total | Automobile | Air | Public carrier: Total (2) | Bus | Rail | Air
1929 | 216.0 | 175.0 (81.0) | 175.0 (81.0) | -- | 40.9 (18.9) | 7.1 (3.3) | 32.5 (15.0) | --
1934 | 219.0 | 191.0 (87.2) | 191.0 (87.2) | -- | 27.5 (12.6) | 7.4 (3.4) | 18.8 (8.6) | 0.2 (0.1)
1939 | 309.5 | 275.5 (89.0) | 275.4 (89.0) | 0.1 | 34.0 (11.0) | 9.5 (3.1) | 23.7 (7.7) | 0.8 (0.3)
1944 | 309.3 | 181.4 (58.6) | 181.4 (58.6) | -- | 127.9 (41.4) | 27.3 (8.8) | 97.7 (31.6) | 2.9 (0.9)
1949 | 478.0 | 410.2 (85.8) | 409.4 (85.6) | 0.8 (0.2) | 67.8 (14.2) | 24.0 (5.0) | 36.0 (7.5) | 7.8 (1.6)
1954 | 668.2 | 598.5 (89.6) | 597.1 (89.4) | 1.4 (0.2) | 69.7 (10.4) | 22.0 (3.3) | 29.5 (4.4) | 18.2 (2.7)
1959 | 762.8 | 689.5 (90.4) | 687.4 (90.1) | 2.1 (0.3) | 73.3 (9.6) | 20.4 (2.7) | 22.4 (2.9) | 30.5 (4.0)
1964 | 892.7 | 805.5 (90.2) | 801.8 (89.8) | 3.7 (0.4) | 87.2 (9.8) | 23.3 (2.6) | 18.4 (2.1) | 45.5 (5.1)
1969 | 1134.1 | 985.8 (86.9) | 977.0 (86.1) | 8.8 (0.8) | 148.3 (13.1) | 24.9 (2.2) | 12.3 (1.1) | 111.1 (9.8)
1974 | 1306.7 | 1133.1 (86.7) | 1121.9 (85.9) | 11.2 (0.9) | 173.6 (13.3) | 27.7 (2.1) | 10.5 (0.8) | 135.4 (10.4)
1979 | 1511.8 | 1259.8 (83.3) | 1244.3 (82.3) | 15.5 (1.0) | 252.0 (16.7) | 27.7 (1.8) | 11.6 (0.8) | 212.7 (14.1)
1984 | 1576.5 | 1290.4 (81.9) | 1277.4 (81.0) | 13.0 (0.8) | 286.1 (18.2) | 24.6 (1.6) | 10.8 (0.7) | 250.7 (15.9)
1989 | 1936.0 | 1563.9 (80.8) | 1550.8 (80.1) | 13.1 (0.7) | 372.3 (19.2) | 24.0 (1.2) | 13.1 (0.7) | 335.2 (17.3)
1994 | 2065.0 | 1634.6 (79.2) | 1624.8 (78.7) | 9.8 (0.5) | 430.4 (20.9) | 28.1 (1.4) | 13.9 (0.7) | 388.4 (18.8)
1999 | 2400.2 | 1863.4 (77.6) | 1849.9 (77.1) | 13.5 (0.6) | 536.8 (22.3) | 34.7 (1.4) | 14.2 (0.6) | 487.9 (20.3)

Sources: National Association of Motor Bus Operators. Bus Facts. 1966, pp. 6, 8; F. A. Smith, Transportation in America: Historical Compendium, 1939-1985. Washington DC: Eno Foundation for Transportation, 1986, p. 12; F. A. Smith, Transportation in America: A Statistical Analysis of Transportation in the United States. Washington DC: Eno Foundation for Transportation, 1990, p. 7; and Rosalyn A. Wilson, Transportation in America: Statistical Analysis of Transportation in the United States, eighteenth edition, with Historical Compendium, 1939-1999. Washington, DC: Eno Transportation Foundation, 2001, pp. 14-15.

(1) Percentages do not always sum to 100 because of rounding.

(2) Early public carrier figures include waterways as well as railroads, buses and airlines.
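The percentage columns in Table 1 are simple market shares: each mode’s passenger miles divided by total intercity travel. A minimal sketch in Python, using the 1929 row of the table (figures in billions of passenger miles), reproduces two of them:

total_1929 = 216.0               # total intercity travel, from Table 1
bus_1929, rail_1929 = 7.1, 32.5  # bus and rail passenger miles, from Table 1

print(round(100 * bus_1929 / total_1929, 1))   # 3.3, the bus share in the table
print(round(100 * rail_1929 / total_1929, 1))  # 15.0, the rail share in the table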

Although intercity bus travel climbed from nothing to over seven billion passenger miles in 1929, it was always the choice of a relatively small number of people. Following modest growth in the 1930s, ridership soared during World War II, peaking just above 27 billion passenger miles and attaining its highest-ever share of the market. After World War II, as intercity rail ridership plummeted, intercity bus ridership dropped by much less. Measured in billions of passenger miles, bus ridership plateaued in the last half of the twentieth century at a level close to its World War II peak. However, its share of the market continued to fall, decade by decade. From the 1960s the faster and more comfortable jet plane offered better options for the long-distance traveler, but most Americans still chose to travel by land in their own automobiles.

No particular date marks the beginning of the American intercity or long-distance bus industry, because many individuals were attracted to it at about the same time, perceiving that they could make a profit by carrying fare-paying passengers over public highways. Early records suggest that bus travel developed from an adventure into a realistic business proposition in the second decade of the twentieth century, when countless entrepreneurs scattered throughout the nation operated local services using automobile sedans, frequently known as ‘jitneys.’ Encouraged by their successes, ambitious pioneers in the 1920s developed longer networks, either by connecting their routes with those of like-minded entrepreneurs or by buying out their rivals. They then needed to acquire larger, more comfortable and more reliable vehicles and to meet the requirements of state governments, which imposed regulations covering safety, competition, the financing of road construction and accounting procedures. Competition from the railroads threatened the well-being of promising bus companies. Some railroads decided to run subsidiary bus operations in the hope of squeezing out motor carriers. Others preferred to attack bus entrepreneurs through a propaganda campaign claiming that buses competed unfairly because they did not pay sufficient taxes for road use. Bus owners fought back, both verbally and practically. Those who had gained enough experience and expertise to organize their firms systematically took advantage of the flexibility of vehicles that did not run on fixed tracks, and of the lower running costs of coaches, to provide a cheaper service. By the late 1920s regional bus lines were well established, and national lines appeared to be within reach.

The Impact of the Great Depression

The onset of the Great Depression, however, brought painful changes to this adolescent service sector. Many small carriers went out of business as passengers and ticket sales declined when unemployment grew and most Americans could not afford to travel. The larger companies, experiencing both a cash-flow and a capital shortage, had to reorganize their financial and administrative structures and to achieve system-wide economies in order to survive. The travails of the only burgeoning national enterprise, Greyhound, illustrate the difficulties. Much of the corporation’s rapid expansion in the late 1920s had been financed by short-term loans, which could not be repaid as income fell. Two re-capitalization schemes, in 1930 and in 1933, were essential to meet current obligations. These involved loans from banks, negotiations with General Motors and a re-flotation of shares. The corporation then took constructive as well as defensive action. It rationalized its divisional structure to become more competitive and continued to spend heavily on advertising and other media promotions. The strenuous efforts paid off, and Greyhound not only survived but also gained in market strength. Smaller firms with less credibility and creditworthiness struggled to remain solvent and were unable to expand while the disposable incomes of Americans remained low.

Federal Government Legislation

The federal government had expressed concern about the extent and shape of the developing long-distance bus industry before the Great Depression shattered the national economy. Starting in 1925, a series of forty bills calling for the regulation of motor passenger carriers came before Congress. Congressional hearings and two major investigations of the motor transport industry by the Interstate Commerce Commission (ICC), in 1928 and 1932, produced further suggestions for legislation, as did the Federal Coordinator of Transportation. But legislators felt under pressure from varied interest groups and were uncertain how to proceed. Emergency and short-term solutions came in the shape of the bus code of the National Industrial Recovery Act (NIRA) of 1933. But dissatisfaction with the code and the Supreme Court’s judgment on the unconstitutionality of the NIRA (1935) rallied support for specific legislation. The ensuing Motor Carrier Act (MCA) of 1935 entitled existing carriers to receive operating permits on filing applications and granted certificates to other firms only after an investigation or hearing established that their business was in the public interest. Certificates could be suspended, changed or revoked. All interstate bus operators now had to conform to regulations governing safety, finance, insurance, accounting and records, and they were required to consult the government over any rate changes.

Under the new regulations of the MCA, competition between long-distance operators was limited. Existing companies that had filed for permits protested against applications from new competitors on their routes. If it was established that services were adequate and traffic was light, new applications were often turned down. The general thrust of the new policy favored larger companies, which more easily met federal government standards. The Greyhound Corporation, with its structure reorganized and already providing a national service, held a virtual monopoly of long-distance service in parts of the country. The administrative agency, the Motor Carrier Bureau (MCB), was well aware of both the potential abuse of monopoly power and the economies of scale achievable by larger operations. It thus encouraged an amalgamation of independent carriers to form a new nationwide system, National Trailways. Ironically, this form of competition, which was officially encouraged in the bus industry, created a duopoly in many markets, because most other operators were small companies that conducted much of their business in short-haul suburban and intra-regional transport. Influenced by historic concerns about regulating the railroads, the government had created a new public policy that insisted on competition within an industry even though that competition favored a small number of large firms. Even more ironically, by the mid-1930s competition among different modes of transportation meant that little constructive thought was given to a new national transportation policy that might coordinate these modes efficiently and effectively, using their natural advantages to best public effect.

For Better or Worse in the Second World War

War brought expansion to the bus industry, but under stressful conditions and with consequences that would have long-term implications. The need to carry both civilians and troops, combined with shortages of gasoline, rubber and parts, forced Americans out of their automobiles and onto public transportation. New records were set for passenger transportation; seats were filled to capacity, with standing room only. Long-distance bus passenger miles nearly doubled, from 13.6 billion in 1941 to 26.9 billion in 1945. This business was not achieved in a free market. A wartime administrative bureau, the Office of Defense Transportation (ODT), created in December 1941, managed traffic flows throughout the war. It used relatively simple devices such as the rationing of parts, rubber allocation, speed limits, fuel control and the restriction of non-essential services to distribute scarce resources among transportation systems. Assisted by trade associations like the National Association of Motor Bus Operators (NAMBO), the ODT issued directives encouraging full capacity use and the rationalization of passenger operations.

Though bus companies abandoned competition with each other and with their long-standing rival, the railroads, they were unable to gain long-term benefits from their patriotic efforts to help win the war. Earnings rose, but it was impossible to reinvest them in the industry because the government curtailed vehicle production and building construction. Hence buses were kept in service beyond their normal life expectancy, and terminals were neither improved nor renovated. Speed limits of thirty-five miles per hour, imposed in 1942, meant longer hours for drivers and longer journeys for passengers, already frustrated and tired by waiting in crowded terminals. Despite the industry’s wartime propaganda exhorting Americans either not to travel or to travel at off-peak times and to be patient for the good of the country, unfavorable impressions of the inconvenience and discomfort of traveling by bus remained with many patrons.

Emerging from wartime conditions, bus managers considered that they could build on their increased business provided that they could invest in new vehicles and buildings and could persuade Americans that buses offered many advantages over automobiles for long-distance travel. They were essentially optimistic about the future of their business. But they had not reckoned on either post-war inflation or a lengthy federal government inquiry into the conduct of the industry. Funds accumulated during the war had been earmarked for investment in terminals and garages and for replacing and enlarging fleets. New vehicles were ordered as soon as wartime restrictions were lifted, but deliveries were delayed by shortages of materials and by strikes in production plants, and the vehicles cost more than had been anticipated. The abandonment of effective wartime controls in 1946 brought rapid increases in prices and rents as consumers with huge pent-up savings chased scarce goods and housing. Older buses, which would typically have been retired, were retained. The double burden of depreciation charges on both new and restyled buses delayed the acquisition of more modern cruiser-type vehicles until the early 1950s. Normal investment in buildings was also held in check.

Post-war financial adjustments alone were not responsible for the slow progress towards modernization. The federal government inadvertently delayed infrastructure developments. The ICC was worried about the honest, efficient and cost-effective management of the intercity bus industry, its profit margins during and after the war and the lack of uniform bus fares. In July 1946 the agency initiated a comprehensive investigation of bus fares and charges in order to establish a fair national rate structure. The hearings concluded that the industry had conducted its affairs justly and that variations in fares were a result of local and regional conditions. In the future, profit margins were to be established through a standard operating ratio, taken as the ratio of operating expenses to operating revenues. Bus operators were thus given a clean bill of health and a rate structure that suggested success in a competitive inter-modal marketplace. But the hearings were very lengthy, lasting until December 1949, and during these years bus operators hesitated to take major decisions about future expansion. State governments also contributed to this climate of uncertainty. Multiple state registration fees and fuel taxes for vehicles crossing state boundaries increased both running and administrative costs for companies. Furthermore, the lack of uniform size and weight limits on vehicles between states discouraged the selection of larger and more economical coaches and delayed the modernization of bus fleets. Entrepreneurs faced unusual problems in the post-war years, at a time when they needed to be forceful and dynamic.
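
The operating-ratio standard is simple arithmetic. The sketch below shows how a regulator or carrier might compute it; the revenue and expense figures are hypothetical, since the hearings set only the ratio as the yardstick.

```python
# Hypothetical illustration of the ICC's standard operating ratio:
# operating expenses divided by operating revenues. The lower the ratio,
# the larger the margin left to the carrier out of each revenue dollar.
operating_revenues = 1_000_000   # hypothetical annual operating revenue ($)
operating_expenses = 850_000     # hypothetical annual operating expenses ($)

operating_ratio = operating_expenses / operating_revenues
margin = 1 - operating_ratio

print(f"Operating ratio: {operating_ratio:.0%}")    # 85%
print(f"Implied margin:  {margin:.0%} of revenue")  # 15%
```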

These structural problems dominated bus company discussions at the expense of developing improved customer relations. Certainly time, effort and money were put into a vigorous advertising campaign telling the public that buses were available both for regular service and for leisure activities. The latter offered great potential, as people had money in their pockets and desired recreation and entertainment. Advertisements emphasized the reliability, safety, flexibility and comfort of bus journeys, while bus company employees were exhorted to develop a reputation for courtesy. But more proactive efforts were needed if new and old clients were to get on and stay on buses. The 25.8 million car registrations of 1945 had become 40.5 million by 1950 and then increased again to 52.1 million in 1955. The United States had achieved mass ownership and automobility. The federal government encouraged this personal mobility by promoting the construction of interstate highways in the Federal-Aid Highway Act (Interstate Highways Act) of 1956. Certainly buses also benefited from new high-speed roads, but increasingly the private automobile won the contest for short-distance travel under four hundred miles. Americans preferred to drive themselves whether or not the total cost of personal travel was higher than that of public transport. They valued the convenience of their own vehicles, and as more became suburban dwellers they were unwilling to go to bus terminals, often located downtown.

What could bus operators do either to conserve their position as passenger carriers or to advance it? Efforts to improve management and internal company restructuring offered some possibilities, while new publicity campaigns suggested other avenues for progress. The Greyhound Corporation, as the industry’s largest operator, took the lead in adopting a modern professional appearance. In the mid-1950s it sought to raise efficiency by reducing its divisional groupings from thirteen to seven, thereby making more effective use of equipment, procedures and personnel. Managers and mechanics now had to undergo systematic training, whether at business schools or in engineering technologies; theoretical learning was a necessary complement to practical experience. But these administrative changes were insufficient by themselves. Increased trade was sought in transport-related outlets, for example in carrying small freight and mail, in developing van lines and car rentals and in making connections with airlines to offer surface travel. The closure of many railroad routes offered opportunities to seize their business, while road improvements and expansion created the possibility of new business. Yet more openings were envisaged as Greyhound and its major rival, Trailways, participated in the conglomerate movement. Greyhound, for example, not only ventured into bus and auxiliary transport services, but also moved into financial, food, consumer, pharmaceutical, equipment-leasing and general activities. Trailways diversified into real estate, accident insurance, restaurants, car parking and ocean cargo shipping. The aim was to realize substantial benefits through the exchange of clients and economies of scale.

The bus industry also adopted a fresh approach to consumer relations in the late 1950s and the 1960s. Again the Greyhound Corporation led the way. Its new advertising agency, Grey Advertising, developed a novel and long-lasting campaign using a real dog, ‘Lady Greyhound,’ rather than the traditional silhouette in bus publicity. The corporation portrayed ‘Lady Greyhound’ as a caring and sharing personality as she gave press and radio ‘interviews,’ opened bus stations, civic events and charity functions and replied to the members of her fan club. The implication was that Greyhound and the bus industry were similarly caring ‘people.’ Greyhound also became the official bus line in the annual contest to find Mrs. America, a contest that emphasized homemaking skills. This promotion was clearly an effort to appeal to women, who comprised the majority of the bus industry’s passengers. More dramatic was the 1960s campaign to attract the young, foreign visitors, those who did not drive and the poorer groups in society. ‘Go Greyhound and Leave the Driving to Us’ and the offer of up to ninety-nine days of bus travel for $99 were attractive propositions. By now the bus industry was differentiating among its clients. There was a market for regular-route travel among those who did not have access to an automobile or who preferred not to drive, and this market could be increased by specific, well-publicized offers. There was also a potential market for specialized travel in the leisure sector. While middle-class Americans might not want to experience the inconvenience of scheduled journeys, they could be persuaded to charter a bus for special trips, for example outings by the church choir or the youth club or visits to sports events and art galleries. They could also be persuaded to join a tour group, as the price of the vacation would ensure like-minded and similarly well-off company. Indeed, income from charter and special services rose during the 1960s.

Not all passengers chose the national bus lines; indeed, there was considerable variety among American bus companies. In some ways smaller companies were at a disadvantage, but in other ways they clearly won out. Regional operators, like Jefferson Lines in the Midwest or Peter Pan in New England and New York State, remained primarily in transportation services. They operated regular routes on an interstate basis, with charter and special services providing important financial returns. Their durability in business rested on their local reputation and standing, which they were able to exploit. Local companies like Badger Coaches in Madison and Milwaukee, Wisconsin, or Wolf’s Bus Line of York Springs, Pennsylvania, frequently relied on charter and special work, often within a two-hundred-mile radius. When they ran regular services, these were on intrastate routes. They frequently filled the gaps left by their larger counterparts. The bus industry was diversified.

The bus industry in the United States had always served a minority of the traveling public, but by the 1960s it had settled into catering to a smaller proportion of the nation’s travelers, and for the rest of the century it would struggle to retain these customers. More people took the bus than took the train because the bus, as a flexible and relatively low-cost vehicle, was able to serve more urban and rural communities and to serve them economically. But in an era punctuated by economic crises and rising energy prices, the federal government first intervened to protect a special interest group and then stepped back from managing transportation policy in the public interest, with its concerns for communal values and social infrastructure. Though never acting consistently, it became more responsive to the economic case for free-market competition and to the personal concerns of Americans as individuals. The bus industry thus faced serious problems in its efforts to provide a well-run and effective service in a nation dominated by automobile owners and air travelers.

By the 1970s the economic difficulties faced by buses, and more urgently by trains, resulted in public investigations. The crisis in public ground transportation emerged first on the railroads, because freight had been cross-subsidizing passengers for years and the companies had withdrawn from unprofitable passenger services whenever possible. Pressured by an active rail lobby and concerned to ensure a minimum route network, Congress intervened with a subsidy in 1970 and created the National Rail Passenger Corporation, better known as Amtrak, to run passenger operations. Though train services improved, continuing federal subsidies were required. Intercity bus operators were outraged both by the creation of Amtrak and by the ensuing cheaper rail fares, and they complained about unfair competition throughout the decade. Their efforts to remain competitive with their long-standing rival, especially in the busy northeastern corridor of the United States, proved very tough, and revenue from the large bus operators dropped. Losses, however, were not solely due to railroad activities. Airlines continued to enlarge their share of long-distance travel, stimulated by greater use of wide-bodied jet aircraft that increased speed and narrowed the relative gap between plane and bus fares. At the same time automobile ownership and use continued to grow, with over a third of American households possessing two or more vehicles. Competition from both public and private modes of transport became very intense.

This competition, however, could not fully explain the plight of the American bus industry. The troubled economic conditions of the 1970s required organizational readjustments. In a period marked by high unemployment and high inflation, the bus industry found that its receipts did not match its rising production costs. Higher labor costs, significant increases in fuel costs and mounting charges for new vehicles meant that bus companies were unable to finance their operations from their profits. Outside investment funds were needed, but these were slow to materialize because the bus industry was perceived to be in difficulties. Both the trade association, the American Bus Association (ABA), and the major carriers discussed possible solutions, including cutting labor costs, finding methods of increasing productivity, promoting marketing drives for both regular-route and special services, and taking on more small-freight business. But these efforts were of little avail while the industry as a whole lacked federal government backing. Any improvements made by carriers needed to fit into a national transportation infrastructure that recognized the value of bus services as the only source of public transport in some communities. Individual travel and transportation decisions might be considered private decisions, but they had public value and consequences. Two main policies were possible in the 1970s: supporting the bus industry financially within the existing transportation structure, or altering the framework to stimulate more competition in the hope of creating greater efficiency.

The bus industry initially favored government financial assistance as the way forward. In congressional hearings in 1977, bus delegates proposed a revitalization strategy that included capital grants, operating subsidies, tax concessions and regulatory reform aimed in particular at rate flexibility. The Surface Transportation Assistance Act (1978) authorized limited funds in the hope of some industry recovery. But this assistance had only a temporary impact, because by the late 1970s many government representatives, their advisors, economists and business managers were more interested in moving public policy away from government intervention, whether in management, grants or planning. In an era of conservative politics the mood of the country moved in favor of free-market enterprise. Within a few years much of the nation’s transport was partially deregulated. In 1978 the Airline Deregulation Act gave airlines considerable freedom in pricing and in entry to and exit from routes. In 1980 both trucking and the railroads were substantially deregulated. In 1982 it was the turn of the buses. The Bus Regulatory Reform Act of that year did not completely deregulate the industry, but it did noticeably lessen governmental authority. Entry into the business was liberalized, state regulations governing exit from unprofitable routes were eased and flexibility was granted on fares.

The long-distance bus industry now faced a highly competitive transportation environment. Not only did companies engage in price warfare over potentially profitable bus routes while abandoning marginal ones, but they also had to contest for passengers with the new low-cost deregulated airlines and for package freight with trucks. Companies made considerable efforts to adjust to the new conditions by lowering prices, improving facilities (especially terminals), investing in new coaches, making rural connections with independent feeder lines and establishing computer systems to assist with ticketing and routing. Their most contentious adjustment came in industrial relations, where the larger operations ran into difficulties. Facing competition from smaller companies that had hired cheaper labor, they needed to negotiate wage reductions and new conditions with their unionized work force. In 1982 Trailways Lines agreed to a settlement with the Amalgamated Transit Union (ATU) that froze wages at a level already considerably lower than Greyhound’s; Greyhound then sought similar wage reductions. Resistance led to a seven-week strike in 1983, but the resulting settlement was relatively short-lived. Negotiations for a new drivers’ contract broke down and ended in more strike action in 1990. Violence followed as the company hired replacement drivers and continued to operate its buses. The costs of countering the violence, together with reduced income from services, precipitated a financial crisis, and Greyhound filed for bankruptcy under Chapter 11 in June 1990 to reorder its affairs. The restructured corporation emerged as a smaller operation able to compete in the deregulated world of transportation.

In the 1990s the long-distance bus industry reshaped itself to cater to a variety of markets. Composed of hundreds of operators, ranging from large to small but primarily small, it remained an essential, albeit minor, part of the United States’ transportation network. Motor coaches provided regular-route services to some 4,000 communities and had the capacity to serve all groups of people with their leisure, charter, small-package, airport and commuter services. They were a vital ingredient of rural life and offered important intermodal links. Indeed, for the country as a whole, buses carried more commercial passengers than any of their transportation rivals. As a flexible and reasonably priced means of travel they found one niche catering to specific groups in society on scheduled routes and another in leisure activities. Though perceived as a secondary form of transportation, the bus industry has in fact provided, and continues to provide, crucial services for many Americans.

Crandall, Burton B. The Growth of the Intercity Bus Industry. Syracuse: Syracuse University, 1954.

Jackson, Carlton. Hounds of the Road: A History of the Greyhound Bus Company. Bowling Green, OH: Bowling Green University Popular Press, 1984.

Meier, Albert E. and John P. Hoschek. Over the Road: A History of Intercity Bus Transportation in the United States. Upper Montclair, NJ: Motor Bus Society, 1975.

Schisgall, Oscar. The Greyhound Story: From Hibbing to Everywhere. Chicago: J.G. Ferguson, 1985.

Taff, Charles A. Commercial Motor Transportation. Homewood, IL: Richard D. Irwin, 1951; 7th edition, Centreville, MD: Cornell Maritime Press, 1986.

Thompson, Gregory L. The Passenger Train in the Motor Age: California’s Rail and Bus Industries, 1910-1941. Columbus: Ohio State University Press, 1993.

Walsh, Margaret. Making Connections: The Long-Distance Bus Industry in the USA. Aldershot, UK: Ashgate Publishing, 2000.

Citation: Walsh, Margaret. “The Bus Industry in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. January 27, 2003. URL http://eh.net/encyclopedia/the-bus-industry-in-the-united-states/

Bankruptcy Law in the United States

Bradley Hansen, Mary Washington College

Since 1996 over a million people a year have filed for bankruptcy in the United States. Most seek a discharge of debts in exchange for having their assets liquidated for the benefit of their creditors. The rest seek the assistance of bankruptcy courts in working out arrangements with their creditors. The law has not always been so kind to insolvent debtors. Throughout most of the nineteenth century there was no bankruptcy law in the United States, and most debtors found it impossible to receive a discharge from their debts. Early in the century debtors could have expected even harsher treatment, such as imprisonment for debt.

Table 1. Chronology of Bankruptcy Law in The United States, 1789-1978

Date Event
1789 The Constitution empowers Congress to enact uniform laws on the subject of bankruptcy.
1800 First bankruptcy law is enacted. The law allows only for involuntary bankruptcy of traders.
1803 First bankruptcy law is repealed amid complaints of excessive expenses and corruption.
1841 Second bankruptcy law is enacted in the wake of the Panics of 1837 and 1839. The law allows both voluntary and involuntary bankruptcy.
1843 The 1841 Bankruptcy Act is repealed amid complaints about expenses and corruption.
1867 Prompted by demands arising from financial failures during the Panic of 1857 and the Civil War, Congress enacts the third bankruptcy law.
1874 The 1867 Bankruptcy Act is amended to allow for compositions.
1878 The 1867 Bankruptcy Act is repealed.
1881 The National Convention of Boards of Trade is formed to lobby for bankruptcy legislation.
1889 The National Convention of Representatives of Commercial Bodies is formed to lobby for bankruptcy legislation. The president of the Convention, Jay L. Torrey, drafts a bankruptcy bill.
1898 Congress passes a bankruptcy bill based on the Torrey bill.
1933-34 The 1898 Bankruptcy Act is amended to include railroad reorganization, corporate reorganization, and individual debtor arrangements.
1938 The Chandler Act amends the 1898 Bankruptcy Act, creating a menu of options for both business and non-business debtors.
1978 The 1898 Bankruptcy Act is replaced by The Bankruptcy Reform Act.

To say that there was no bankruptcy law in the United States for most of the nineteenth century is not to say that there were no laws governing insolvency or the collection of debts. Americans have always relied on credit and have always had laws governing the collection of debts. Debtor-creditor laws and their enforcement are important because they influence the supply and demand for credit. Laws that do not encourage the repayment of debts increase risk for creditors and reduce the supply of credit. On the other hand, laws that are too strict also have costs. Strict laws such as imprisonment for debt can discourage entrepreneurs from experimenting. Many of America’s most famous entrepreneurs, such as Henry Ford, failed at least once before making their fortunes.

Over the last two hundred years the United States has shifted from a legal regime that was primarily directed at the strict enforcement of debt contracts to one that provides numerous means to alter the terms of debt contracts. As the economy developed groups of people became convinced that strict enforcement of credit contracts was unfair, inefficient, contrary to the public interest, or simply not in their own self interest. Periodic financial crises in the nineteenth century generated demands for bankruptcy laws to discharge debts. They also led to the introduction of voluntary bankruptcy and the extension of the right to file for bankruptcy to all individuals. The expansion of interstate commerce in the late nineteenth century led to demands for a uniform and efficient bankruptcy law throughout the United States. The rise of railroads gave rise to a demand for corporate reorganization. The expansion of consumer credit in the twentieth century and the rise in consumer bankruptcy cases led to the introduction of arrangements into bankruptcy law, and continue to fuel demands for revision of bankruptcy law today.

Origins of American Bankruptcy Law

Like much of American law, the origins of both state laws for the collection of debt and federal bankruptcy law can be found in England. State laws are, in general, derived from common law procedures for the collection of debt. Under the common law a variety of procedures evolved to aid a creditor in collecting a debt. Generally, the creditor can obtain a judgment from a court for the amount that he is owed and then have a legal official seize some of the debtor’s property or wages to satisfy this judgment. In the past a defaulting debtor could also be placed in prison to coerce repayment. Bankruptcy law does not replace other collection laws but does supersede them. Creditors still use procedures such as garnishing a debtor’s wages, but if the debtor or another creditor files for bankruptcy such collection efforts are stopped.

Under the U.S. Constitution, adopted in 1789, bankruptcy law became a federal matter in the United States. Two clauses of the Constitution influenced the evolution of bankruptcy law. First, in Article One, Section Eight, Congress was empowered to enact uniform laws on the subject of bankruptcy. Second, the Contract Clause prohibited states from passing laws that impair the obligation of contracts. Courts have generally interpreted these clauses so as to give wide latitude to the federal government to alter the obligations of debt contracts while restricting state governments. States, however, are not completely barred from altering the terms of contracts. In its 1827 decision in Ogden v. Saunders the Supreme Court declared that states could pass laws granting a discharge for debts incurred after the law was passed; however, a state discharge cannot be binding on creditors who are citizens of other states.

The evolution of bankruptcy law in the United States can be divided into two periods. In the first period, which encompasses most of the nineteenth century, Congress enacted three laws in the wake of financial crises. In each case the law was repealed within a few years amid complaints of high costs and corruption. The second period begins in 1881 when associations of merchants and manufacturers banded together to form a national association to lobby for a federal bankruptcy law. In contrast to previous demands for bankruptcy law, which were prompted largely by crises, late nineteenth century demands for bankruptcy law were for a permanent law suited to the needs of a commercial nation. In 1898 the Act to Establish a Uniform System of Bankruptcy was enacted and the United States has had a bankruptcy law ever since.

The Temporary Bankruptcy Acts of 1800, 1841 and 1867

Congress first exercised its power to enact uniform laws on bankruptcy in 1800. The debates in the Annals of Congress are brief but suggest that the demand for the law arose from individuals who were in financial distress. The law was modeled on the English bankruptcy law of the time and applied only to traders. Creditors could file a bankruptcy petition against a debtor, the debtor’s assets would be divided on a pro rata basis among his creditors, and the debtor would receive a discharge. Although debtors could not file a voluntary bankruptcy petition, it was generally believed that many debtors asked a friendly creditor to petition them into the bankruptcy court so that they could obtain a discharge. The law was intended to remain in effect for five years, but complaints that it was expensive to administer, that it was difficult and costly to travel to federal courts, and that it provided opportunities for fraud led to its repeal in 1803. Similar complaints were to follow the passage of subsequent bankruptcy laws.

Bankruptcy law largely disappeared from national politics until the Panic of 1839. A few petitions and memorials were sent to Congress in the wake of the Panic of 1819, but no law was passed. The Panic of 1839 and the recession that followed it brought forward a flood of petitions and memorials for bankruptcy legislation. Memorials typically declared that many business people had been brought to ruin by economic conditions beyond their control, not through any fault of their own. In the wake of the Panic, Whigs made the attack on Democratic economic policies and the passage of bankruptcy relief central parts of their platform. After gaining control of Congress and the Presidency, the Whigs pushed through the 1841 Bankruptcy Act. The law went into effect February 2, 1842.

Like its predecessor, the Bankruptcy Act of 1841 was short-lived; it was repealed March 3, 1843. The rapid about-face on bankruptcy was the result of the collapse of a bargain between Northern and Southern Whigs. Democrats overwhelmingly opposed the passage of the Act and supported its repeal. Southern Whigs also generally opposed a federal bankruptcy law. Northern Whigs appear to have obtained the Southern Whigs’ votes for passage by agreeing to distribute the proceeds from the sales of federal lands to the states. A majority of Southern Whigs voted for passage but then reversed their votes the next year. Despite its short life, over 41,000 petitions for bankruptcy, most of them voluntary, were filed under the 1841 law.

The primary innovations of the Bankruptcy Act of 1841 were the introduction of voluntary bankruptcy and the widening of the scope of occupations that could use the law. With the introduction of voluntary bankruptcy, debtors no longer had to resort to the assistance of a friendly creditor. Unlike the previous law, in which only traders could become bankrupts, under the 1841 Act traders, bankers, brokers, factors, underwriters and marine insurers could be made involuntary bankrupts, and any person could apply for voluntary bankruptcy.

After repeal of the Bankruptcy Act of 1841, the subject of bankruptcy again disappeared from congressional consideration until the Panic of 1857, when appeals for a bankruptcy law resurfaced. The financial distress caused to Northern merchants by the Civil War further fueled demands for bankruptcy legislation. Though demands for a bankruptcy law persisted throughout the War, considerable opposition also existed to passing a law before the War was over. In the first Congress after the end of the War, the Bankruptcy Act of 1867 was enacted. The 1867 Act was amended several times and lasted longer than its predecessors. An 1874 amendment added compositions to bankruptcy law for the first time. Under the composition provision a debtor could offer a plan to distribute his assets among his creditors to settle the case. Again, complaints of excessive fees and expenses led to the repeal of the Bankruptcy Act in 1878. Table 2 shows the number of petitions filed under the 1867 law between 1867 and 1872.

Table 2. Bankruptcy Petitions, 1867-1872

Year Petitions
1867 7,345
1868 29,539
1869 5,921
1870 4,301
1871 5,438
1872 6,074

Source: Expenses of Proceedings in Bankruptcy In United States Courts. Senate Executive Document 19 (43-1) 1580.

During the first three quarters of the nineteenth century the demand for bankruptcy legislation rose with financial panics and fell as they passed. Many people came to believe that the forces that brought people to insolvency were often beyond their control and that to give them a fresh start was not only fair but in the best interest of society. Burdened with debts they had no hope of paying, debtors had no incentive to be productive, since creditors would take anything they earned; freed from these debts they could once again become productive members of society. The spread of the belief that debtors should not be subjected to the harshest elements of debt collection law can also be seen in numerous state laws enacted during the nineteenth century. Homestead and exemption laws designated property that creditors could not take. Stay and moratoria laws were passed during recessions to stall collection efforts. Over the course of the nineteenth century, states also abolished imprisonment for debt.

Demand For A Permanent Bankruptcy Law

The repeal of the 1867 Bankruptcy Act was followed almost immediately by a well-organized movement to obtain a new bankruptcy law. A national campaign by merchants and manufacturers to obtain bankruptcy legislation began in 1881, when the New York Board of Trade and Transportation organized a National Convention of Boards of Trade. The participants at the Convention endorsed a bankruptcy bill prepared by John Lowell, a judge from Massachusetts, and continued to lobby for the bill throughout the 1880s.

After failing to obtain passage of the Lowell bill, associations of merchants and manufacturers met again in 1889. Under the name of The National Convention of Representatives of Commercial Bodies they held meetings in St. Louis and in Minneapolis. The president of the Convention, a lawyer and businessman named Jay Torrey, drafted a bill that the Convention lobbied for throughout the 1890s. The bill allowed both voluntary and involuntary petitions, though wage earners and farmers could not be made involuntary bankrupts. The bill was primarily directed at liquidation but did include a provision for composition. A composition had to be approved by a majority of creditors in both number and value. In a compromise with states’ rights advocates, the bill declared that exemptions would be determined by the states.

The merchants and manufacturers who organized the conventions provided credit to their customers whenever they delivered goods in advance of payment. They were troubled by three features of state debtor-creditor laws. First, the details of collection laws varied from state to state, forcing them to learn the laws of every state in which they wished to sell goods. Second, many state laws discriminated against foreign creditors, that is, creditors who were not citizens of the state. Third, many of the state laws provided for a first-come, first-served distribution of assets rather than a pro rata division. Under the first-come, first-served rule, the first creditor to go to court could claim all the assets necessary to pay his debts, leaving the last to receive nothing. The rule thus created incentives for creditors to race to be the first to file a claim. Its effect was described by Jay Torrey: “If a creditor suspects his debtor is in financial trouble, he usually commences an attachment suit, and as a result the debtor is thrown into liquidation irrespective of whether he is solvent or insolvent. This course is ordinarily imperative because if he does not pursue that course some other creditor will.” Thus the law could actually precipitate business failures. As interstate commerce expanded in the late nineteenth century, more merchants and manufacturers experienced these three problems.
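
The difference between the two distribution rules is easy to see in a small numerical sketch. The creditors, claim amounts and asset figure below are hypothetical, chosen only to illustrate the mechanics:

```python
# Illustrative sketch (hypothetical figures): how a first-come, first-served
# rule and a pro rata rule divide an insolvent debtor's limited assets.

def first_come_first_served(assets, claims):
    """Pay each creditor in full, in filing order, until assets run out."""
    payouts, remaining = {}, assets
    for creditor, claim in claims:  # claims listed in the order of filing
        paid = min(claim, remaining)
        payouts[creditor] = paid
        remaining -= paid
    return payouts

def pro_rata(assets, claims):
    """Divide assets in proportion to the size of each claim."""
    total = sum(claim for _, claim in claims)
    return {creditor: assets * claim / total for creditor, claim in claims}

claims = [("A", 600), ("B", 300), ("C", 300)]   # $1,200 owed in total
assets = 600                                    # only $600 to distribute

print(first_come_first_served(assets, claims))  # {'A': 600, 'B': 0, 'C': 0}
print(pro_rata(assets, claims))                 # {'A': 300.0, 'B': 150.0, 'C': 150.0}
```

Under the first rule creditor A, simply by filing first, is paid in full while B and C receive nothing, which is exactly the race Torrey described; the pro rata rule removes the advantage of racing to the courthouse.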

Merchants and manufacturers also found it easier to form a national organization in the late nineteenth century because of the growth of trade associations, boards of trade, chambers of commerce and other commercial organizations. By forming a national organization composed of businessmen’s associations from all over the country, merchants and manufacturers were able to act in unison in drafting and lobbying for a bankruptcy bill. The bill they drafted not only provided uniformity and a pro rata distribution, but was also designed to prevent the excessive fees and expenses that had been a major complaint against previous bankruptcy laws.

As early as 1884, the Republican Party supported the bankruptcy bills put forward by the merchants and manufacturers. A majority in both the Republican and Democratic parties supported bankruptcy legislation during the late nineteenth century, yet it took nearly twenty years to enact because the two parties supported different versions of bankruptcy law. The Democratic Party supported bills that were purely voluntary (creditors could not initiate proceedings) and temporary (the law would remain in effect only for a few years). The requirement that the law be temporary was crucial to Democrats because a vote for a permanent bankruptcy law would have been a vote for the expansion of federal power and against states’ rights, a central component of Democratic policy. Throughout the 1880s and 1890s, votes on bankruptcy split strictly along party lines. The majority of Republicans preferred the status quo to the Democrats’ bills, and the majority of Democrats preferred the status quo to the Republicans’ bills. Because control of Congress was split between the two parties for most of the last quarter of the nineteenth century, neither side could force through its version of bankruptcy law. This period of divided government ended with the 55th Congress, in which the Bankruptcy Act of 1898 was passed.

Railroad Receivership and the Origins of Corporate Reorganization

The 1898 Bankruptcy Act was designed to aid creditors in the liquidation of an insolvent debtor’s assets, but one of the important features of current bankruptcy law is the provision for reorganization of insolvent corporations. To find the origins of corporate reorganization one has to look not at the early evolution of bankruptcy law but at the evolution of receiverships for insolvent railroads. A receiver is an individual appointed by a court to take control of some property, and courts in the nineteenth century developed this tool as a means to reorganize troubled railroads. The first reorganization through receivership occurred in 1846, when a Georgia court appointed a receiver over the insolvent Munroe Railway Co. and successfully reorganized it as the Macon and Western Railway. In the last two decades of the nineteenth century the number of receiverships increased dramatically; see Table 3.

In theory, courts were supposed to appoint an indifferent party as receiver, and the receiver was merely to conserve the railroad while the best means to liquidate it was ascertained. In fact, judges routinely appointed the president, vice-president or other officers of the insolvent railway and assigned them the task of getting the railroad back on its feet. The object of the receivership was typically a sale of the railroad as a whole, but the sale was at least partly a fiction. The sole bidder was usually a committee of the bondholders using their bonds as payment. Thus the receivership involved a financial reorganization of the firm in which the bondholders and stockholders of the railroad traded in their old securities for new ones, and the task of the reorganizers was to find a plan acceptable to the bondholders. In the Wabash receivership of 1886, for example, first-mortgage bondholders ultimately agreed to exchange their 7 percent bonds for new ones paying 5 percent. The sale resulted in the creation of a new railroad with the assets of the old; often the transformation was simply a matter of changing “Railway” to “Railroad” in the name of the corporation. Throughout the late nineteenth and early twentieth centuries judges denied other corporations the right to reorganize through receivership, emphasizing that railroads were special because of their importance to the public.
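
The arithmetic behind such an exchange shows what it meant for a railroad’s fixed charges. The principal figure below is hypothetical (the article reports only the coupon rates in the Wabash case), but it illustrates how swapping 7 percent bonds for 5 percent bonds lightened the interest burden:

```python
# Hypothetical sketch of a Wabash-style bond exchange: replacing 7% bonds
# with 5% bonds cuts the annual fixed interest charge by two-sevenths.
face_value = 10_000_000           # hypothetical principal outstanding ($)
old_interest = face_value * 0.07  # annual charge on the old 7% bonds
new_interest = face_value * 0.05  # annual charge on the new 5% bonds
relief = old_interest - new_interest

print(f"Interest falls from ${old_interest:,.0f} to ${new_interest:,.0f} a year")
print(f"Relief: ${relief:,.0f} ({relief / old_interest:.0%} of the old charge)")
```

For bondholders, a reduced but payable coupon on a going concern could be worth more than a full claim on a railroad liquidated piecemeal.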

Unlike the credit supplied by merchants and manufacturers, much of the debt of railroads was secured. For example, bondholders might hold a mortgage entitling them to claim a specific line of track if the railroad failed to make its bond payments. If a railroad became insolvent, different groups of bondholders might claim different parts of the railroad. Such piecemeal liquidation of a business presented two problems in the case of railroads. First, many people believed that piecemeal liquidation would destroy much of the value of the assets. In his 1859 Treatise on the Law of Railways, Isaac Redfield explained that, “The railway, like a complicated machine, consists of a great number of parts, the combined action of which is necessary to produce revenue.” Second, railroads were regarded as quasi-public corporations. They were given subsidies and special privileges, and their charters often stated that their corporate status had been granted in exchange for service to the public. Courts were therefore reluctant to treat railroads like other enterprises when they became insolvent and instead used receivership proceedings to make sure that the railroad continued to operate while its finances were reorganized.

Table 3. Railroad Receiverships, 1870-1897

Year   Receiverships Established   Mileage in Receivership   Percentage of Mileage Put in Receivership
1870 3 531 1
1871 4 644 1.07
1872 4 535 0.81
1873 10 1,357 1.93
1874 33 4,414 6.1
1875 43 7,340 9.91
1876 25 4,714 6.14
1877 33 3,090 3.91
1878 27 2,371 2.9
1879 12 1,102 1.27
1880 13 940 1.01
1881 5 110 0.11
1882 13 912 0.79
1883 12 2,041 1.68
1884 40 8,731 6.96
1885 44 7,523 5.86
1886 12 1,602 1.17
1887 10 1,114 0.74
1888 22 3,205 2.05
1889 24 3,784 2.35
1890 20 2,460 1.48
1891 29 2,017 1.18
1892 40 4,313 2.46
1893 132 27,570 15.51
1894 50 4,139 2.31
1895 32 3,227 1.78
1896 39 3,715 2.03
1897 21 1,536 0.83

Source: Swain, H. H. “Economic Aspects of Railroad Receivership.” Economic Studies 3, (1898): 53-161.

Depression Era Bankruptcy Reforms

Reorganization and bankruptcy were brought together by the amendments to the 1898 Bankruptcy Act during the Great Depression. By the late 1920s, a number of problems had become apparent with both the bankruptcy law and receivership. Table 4 shows the number of bankruptcy petitions filed each year since the law was enacted. The use of consumer credit expanded rapidly in the 1920s, and so did wage earner bankruptcy cases. As Table 5 shows, voluntary bankruptcy by wage earners became an increasingly large proportion of bankruptcy petitions. Unlike mercantile bankruptcy cases, many wage earner cases involved no assets. Expecting no return, many creditors paid little attention to bankruptcy cases, and corruption spread in the bankruptcy courts. An investigation into bankruptcy in the southern district of New York recorded numerous abuses and led to the disbarment of more than a dozen lawyers. In the wake of the investigation President Hoover appointed Thomas Thacher to investigate bankruptcy procedure in the United States. The Thacher Report recommended that an administrative staff be created to oversee bankruptcies; the bankruptcy administrators would be empowered to investigate bankrupts and reject requests for discharge. The report also suggested that many debtors could pay their debts if given an opportunity to work out an arrangement with their creditors, and that procedures for the adjustment or extension of debts therefore be added to the law.

Corporate lawyers also identified three problems with corporate receiverships. First, it was necessary to obtain an ancillary receivership in each federal district in which the corporation had assets. Second, some creditors might withhold their approval of a reorganization plan in exchange for a better deal for themselves. Third, judges were unwilling to apply reorganization through receivership to corporations other than railroads. Consequently, the Thacher Report suggested that procedures for corporate reorganization also be incorporated into bankruptcy law.

Table 4. Bankruptcy Petitions Filed, 1899-1997

Year   Voluntary   Involuntary   Total   Petitions per 10,000 Population   Percentage Involuntary
1899 20,994 1,452 22,446 3.00 6.47
1900 20,128 1,810 21,938 2.88 8.25
1901 17,015 1,992 19,007 2.45 10.48
1902 16,374 2,108 18,482 2.33 11.41
1903 14,308 2,567 16,875 2.09 15.21
1904 13,784 3,298 17,082 2.08 19.31
1905 13,852 3,094 16,946 2.02 18.26
1906 10,526 2,446 12,972 1.52 18.86
1907 11,127 3,033 14,160 1.63 21.42
1908 13,109 4,709 17,818 2.01 26.43
1909 13,638 4,380 18,018 1.99 24.31
1910 14,059 3,994 18,053 1.95 22.12
1911 14,907 4,431 19,338 2.06 22.91
1912 15,313 4,432 19,745 2.07 22.45
1913 16,361 4,569 20,930 2.15 21.83
1914 17,924 5,035 22,959 2.32 21.93
1915 21,979 5,653 27,632 2.75 20.46
1916 23,027 4,341 27,368 2.68 15.86
1917 21,161 3,677 24,838 2.41 14.80
1918 17,261 3,124 20,385 1.98 15.32
1919 12,035 2,013 14,048 1.34 14.33
1920 11,333 2,225 13,558 1.27 16.41
1921 16,645 6,167 22,812 2.10 27.03
1922 28,879 9,286 38,165 3.47 24.33
1923 33,922 7,832 41,754 3.73 18.76
1924 36,977 6,542 43,519 3.81 15.03
1925 39,328 6,313 45,641 3.94 13.83
1926 40,962 5,412 46,374 3.95 11.67
1927 43,070 5,688 48,758 4.10 11.67
1928 47,136 5,928 53,064 4.40 11.17
1929 51,930 5,350 57,280 4.70 9.34
1930 57,299 5,546 62,845 5.11 8.82
1931 58,780 6,555 65,335 5.27 10.03
1932 62,475 7,574 70,049 5.61 10.81
1933 56,049 6,207 62,256 4.96 9.97
1934 58,888 4.66
1935 69,153 5.43
1936 60,624 4.73
1937 55,842 1,643 57,485 4.46 2.86
1938 55,137 2,169 57,306 4.41 3.78
1939 48,865 2,132 50,997 3.90 4.18
1940 43,902 1,752 45,654 3.46 3.84
1941 47,581 1,491 49,072 3.69 3.04
1942 44,366 1,295 45,661 3.41 2.84
1943 30,913 649 31,562 2.35 2.06
1944 17,629 277 17,906 1.35 1.55
1945 11,101 264 11,365 0.86 2.38
1946 8,293 268 8,561 0.61 3.13
1947 9,657 697 10,354 0.72 6.73
1948 13,546 1,029 14,575 1.00 7.06
1949 18,882 1,240 20,122 1.35 6.16
1950 25,263 1,369 26,632 1.76 5.14
1951 26,594 1,099 27,693 1.81 3.97
1952 25,890 1,059 26,949 1.73 3.93
1953 29,815 1,064 30,879 1.95 3.45
1954 41,335 1,398 42,733 2.65 3.27
1955 47,650 1,249 48,899 2.98 2.55
1956 50,655 1,240 51,895 3.10 2.39
1957 60,335 1,189 61,524 3.61 1.93
1958 76,048 1,413 77,461 4.47 1.82
1959 85,502 1,288 86,790 4.90 1.48
1960 94,414 1,296 95,710 5.43 1.35
1961 124,386 1,444 125,830 6.99 1.15
1962 122,499 1,382 123,881 6.77 1.12
1963 128,405 1,409 129,814 6.99 1.09
1964 141,828 1,339 143,167 7.60 0.94
1965 149,820 1,317 151,137 7.91 0.87
1966 161,840 1,165 163,005 8.42 0.72
1967 173,884 1,241 175,125 8.95 0.71
1968 164,592 1,001 165,593 8.39 0.60
1969 154,054 946 155,000 7.77 0.61
1970 161,366 1,085 162,451 8.07 0.67
1971 167,149 1,215 168,364 8.26 0.72
1972 152,840 1,094 153,934 7.33 0.71
1973 144,929 985 145,914 6.89 0.68
1974 156,958 1,009 157,967 7.39 0.64
1975 208,064 1,266 209,330 9.69 0.60
1976 207,926 1,141 209,067 9.59 0.55
1977 180,062 1,132 181,194 8.23 0.62
1978 167,776 995 168,771 7.58 0.59
1979 182,344 915 183,259 8.14 0.50
1980 359,768 1,184 360,952 15.85 0.33
1981 358,997 1,332 360,329 15.67 0.37
1982 366,331 1,535 367,866 15.84 0.42
1983 373,064 1,670 374,734 15.99 0.45
1984 342,848 1,447 344,295 14.57 0.42
1985 362,939 1,597 364,536 15.29 0.44
1986 476,214 1,642 477,856 19.86 0.34
1987 559,658 1,620 561,278 23.12 0.29
1988 593,158 1,409 594,567 24.27 0.24
1989 641,528 1,465 642,993 25.71 0.23
1990 723,886 1,598 725,484 29.03 0.22
1991 878,626 1,773 880,399 34.85 0.20
1992 971,047 1,443 972,490 38.08 0.15
1993 917,350 1,384 918,734 35.60 0.15
1994 844,087 1,170 845,257 32.43 0.14
1995 856,991 1,113 858,104 32.62 0.13
1996 1,040,915 1,195 1,042,110 39.26 0.11
1997 1,315,782 1,217 1,316,999 49.16 0.09

Sources: 1899-1938, Annual Report of the Attorney General of the United States; 1939-1997, Statistical Abstract of the United States, various years. The Report of the Attorney General did not separate voluntary and involuntary petitions for 1934-36.

Table 5. Wage Earner Bankruptcy and No Asset Cases, 1899-1933

Year   Wage Earner Cases   Percentage of Cases with No Assets
1899 5,288 51.12
1900 7,516 40.52
1901 7,068 48.99
1902 6,859 47.25
1903 4,852 41.36
1904 5,291 40.55
1905 5,426 40.75
1906 2,748 42.29
1907 3,257 42.11
1908 3,492 40.29
1909 3,528 38.46
1910 4,366 36.49
1911 4,139 48.14
1912 4,161 50.70
1913 4,863 49.63
1914 5,773 49.96
1915 6,632 49.88
1916 6,418 53.29
1917 7,787 57.12
1918 8,230 57.05
1919 6,743 64.53
1920 5,601 67.41
1921 5,897 65.66
1922 7,550 52.70
1923 10,173 61.10
1924 13,126 62.17
1925 14,444 61.23
1926 16,770 64.02
1927 18,494 64.86
1928 21,510 63.19
1929 25,478 67.34
1930 28,979 68.44
1931 29,698 69.15
1932 29,742 66.25
1933 27,385 62.76

Source: Annual Report of the Attorney General of the United States, various years.

In 1933, Congress enacted amendments that allowed farmers and wage earners to seek arrangements. Arrangements offered more flexibility than compositions: debtors could offer to pay all or part of their debts over a longer period of time. Congress also added section 77, which provided for railroad reorganization. Section 77 solved two of the problems that had plagued corporate reorganization. Bankruptcy courts had jurisdiction over the assets throughout the country, so ancillary receiverships were no longer needed. The amendment also alleviated the holdout problem by making a two-thirds vote of a class of creditors binding on all members of the class. In 1934, Congress extended reorganization to non-railroad corporations as well. The Thacher Report’s recommendation of a bankruptcy administrator was not enacted, largely because of opposition from bankruptcy lawyers. The 1898 Bankruptcy Act had created a well-organized group with a vested interest in the evolution of the law: bankruptcy lawyers.
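
A small sketch makes the holdout fix concrete. The creditor claims below are hypothetical, and the two-thirds vote is measured here by claim amount, one plausible reading of the rule as the article states it:

```python
# Hypothetical sketch: under the 1933 amendment described above, a plan
# accepted by two-thirds of a creditor class binds the dissenting minority,
# so a lone holdout can no longer block the reorganization.

def class_accepts(votes, threshold=2 / 3):
    """votes is a list of (claim_amount, accepted) pairs for one class."""
    total = sum(claim for claim, _ in votes)
    accepting = sum(claim for claim, accepted in votes if accepted)
    return accepting / total >= threshold

# A $9 million class: holders of $7 million accept, a $2 million holdout refuses.
votes = [(7_000_000, True), (2_000_000, False)]
print(class_accepts(votes))  # True: 7/9 of the class accepts, binding the holdout
```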

Although the 1933-34 reforms were ones that bankruptcy lawyers and judges had wanted, many of them believed that the law could be further improved. In 1932, the Commercial Law League, the American Bar Association, the National Association of Credit Management and the National Association of Referees in Bankruptcy joined together to form the National Bankruptcy Conference. The culmination of their efforts was the Chandler Act of 1938. The Chandler Act created a menu of options for both individual and corporate debtors. Debtors could choose traditional liquidation; they could seek an arrangement with their creditors through Chapter XI of the Act; or they could attempt to obtain an extension through Chapter XIII. A corporation could seek an arrangement through Chapter XI or reorganization through Chapter X. Chapter XI only allowed corporations to alter their unsecured debt, whereas Chapter X allowed reorganization of both secured and unsecured debt. Corporations nevertheless tended to prefer Chapter XI because Chapter X required Securities and Exchange Commission review for all publicly traded firms with more than $250,000 in liabilities.

By 1938 modern American bankruptcy law had acquired its central features. The law dealt with all types of individuals and businesses. It allowed both voluntary and involuntary petitions. It enabled debtors to choose liquidation and a discharge, or to choose some type of readjustment of their debts. By 1939, the vast majority of bankruptcy cases were, as they are now, voluntary consumer bankruptcy cases. After 1939 involuntary bankruptcy cases never again rose above 2,000 a year (see Table 4). The decline of involuntary bankruptcy cases appears to have been associated with the decline in business failures. According to Dun and Bradstreet, the number of failures per 10,000 listed concerns averaged 100 per year from 1870 to 1933; from 1934 to 1988 the failure rate averaged 50 per 10,000 concerns, and it did not rise above 70 per 10,000 listed concerns again until the 1980s. Likewise the number of failures, which had averaged over 20,000 a year in the 1920s and peaked above 31,000 in 1932, did not reach 20,000 a year again until the 1980s. The mercantile failures which had so troubled late nineteenth-century merchants and manufacturers were much less of a problem after the Great Depression.

Table 6. Business Failures, 1870-1997

Year   Failures   Failures per 10,000 Firms
1870 3,546 83
1871 2,915 64
1872 4,069 81
1873 5,183 105
1874 5,830 104
1875 7,740 128
1876 9,092 142
1877 8,872 139
1878 10,478 158
1879 6,658 95
1880 4,735 63
1881 5,582 71
1882 6,738 82
1883 9,184 106
1884 10,968 121
1885 10,637 116
1886 9,834 101
1887 9,634 97
1888 10,679 103
1889 10,882 103
1890 10,907 99
1891 12,273 107
1892 10,344 89
1893 15,242 130
1894 13,885 123
1895 13,197 112
1896 15,088 133
1897 13,351 125
1898 12,186 111
1899 9,337 82
1900 10,774 92
1901 11,002 90
1902 11,615 93
1903 12,069 94
1904 12,199 92
1905 11,520 85
1906 10,682 77
1907 11,725 83
1908 15,690 108
1909 12,924 87
1910 12,652 84
1911 13,441 88
1912 15,452 100
1913 16,037 98
1914 18,280 118
1915 22,156 133
1916 16,993 100
1917 13,855 80
1918 9,982 59
1919 6,451 37
1920 8,881 48
1921 19,652 102
1922 23,676 120
1923 18,718 93
1924 20,615 100
1925 21,214 100
1926 21,773 101
1927 23,146 106
1928 23,842 109
1929 22,909 104
1930 26,355 122
1931 28,285 133
1932 31,822 154
1933 19,859 100
1934 12,091 61
1935 12,244 62
1936 9,607 48
1937 9,490 46
1938 12,836 61
1939 14,768 70
1940 13,619 63
1941 11,848 55
1942 9,405 45
1943 3,221 16
1944 1,222 7
1945 809 4
1946 1,129 5
1947 3,474 14
1948 5,250 20
1949 9,246 34
1950 9,162 34
1951 8,058 31
1952 7,611 29
1953 8,862 33
1954 11,086 42
1955 10,969 42
1956 12,686 48
1957 13,739 52
1958 14,964 56
1959 14,053 52
1960 15,445 57
1961 17,075 64
1962 15,782 61
1963 14,374 56
1964 13,501 53
1965 13,514 53
1966 13,061 52
1967 12,364 49
1968 9,636 39
1969 9,154 37
1970 10,748 44
1971 10,326 42
1972 9,566 38
1973 9,345 36
1974 9,915 38
1975 11,432 43
1976 9,628 35
1977 7,919 28
1978 6,619 24
1979 7,564 28
1980 11,742 42
1981 16,794 61
1982 24,908 88
1983 31,334 110
1984 52,078 107
1985 57,078 115
1986 61,616 120
1987 61,111 102
1988 57,098 98
1989 50,631 65
1990 60,747 74
1991 88,140 107
1992 97,069 110
1993 86,133 96
1994 71,558 86
1995 71,128 82
1996 71,931 86
1997 84,342 89

Source: United States. Historical Statistics of the United States: Bicentennial Edition. 1975; and United States. Statistical Abstract of the United States. Washington D.C.: GPO. Various years.

The Bankruptcy Reform Act of 1978

In contrast to the decline in business failures, personal bankruptcy climbed steadily. Prompted by a rise in personal bankruptcy in the 1960s, Congress initiated an investigation of bankruptcy law that culminated in the Bankruptcy Reform Act of 1978, which replaced the much-amended 1898 Bankruptcy Act. The Bankruptcy Reform Act, also known as the Bankruptcy Code or simply “the Code,” maintains the menu of options for debtors embodied in the Chandler Act. It provides Chapter 7 liquidation for businesses and individuals, Chapter 11 reorganization, Chapter 13 adjustment of debts for individuals with regular income, and (since 1986) Chapter 12 readjustment of debts for family farmers. In 1991, seventy-one percent of all cases were Chapter 7 and twenty-seven percent were Chapter 13. Many of the changes introduced by the Code made bankruptcy, especially Chapter 13, more attractive to debtors, and the number of bankruptcy petitions climbed rapidly after the law was enacted. Lobbying by creditor groups and a Supreme Court decision that ruled certain administrative parts of the Act unconstitutional (Northern Pipeline v. Marathon, 1982) led to the Bankruptcy Amendments and Federal Judgeship Act of 1984. The 1984 amendments attempted to roll back some of the pro-debtor provisions of the Code. Because bankruptcy filings continued their rapid ascent after 1984, recent studies have tended to look toward changes in other factors, such as consumer finance, to explain the explosion in bankruptcy cases.

Bankruptcy law continues to evolve. To understand the evolution of bankruptcy law is to understand why groups of people came to believe that existing debt collection law was inadequate, and to see how those people were able to use courts and legislatures to change the law. In the early nineteenth century, demands for reform were driven largely by the victims of financial crises. In the late nineteenth century, merchants and manufacturers demanded a law that would facilitate interstate commerce. Unlike its predecessors, the 1898 Bankruptcy Act was not repealed after a few years, and over time it gave rise to a group with a vested interest in bankruptcy law: bankruptcy lawyers, who have played a prominent role in drafting and lobbying for bankruptcy reform since the 1930s. Credit card companies and their customers may be expected to play a significant role in changing bankruptcy law in the future.

References

Balleisen, Edward. Navigating Failure: Bankruptcy and Commercial Society in Antebellum America. Chapel Hill: University of North Carolina Press. 2001.

Balleisen, Edward. “Vulture Capitalism in Antebellum America: The 1841 Federal Bankruptcy Act and the Exploitation of Financial Distress.” Business History Review 70, Spring (1996): 473-516.

Berglof, Erik and Howard Rosenthal. “The Political Economy of American Bankruptcy: The Evidence from Roll Call Voting, 1800-1978.” Working paper, Princeton University, 1999.

Coleman, Peter J. Debtors and Creditors in America: Insolvency, Imprisonment for Debt, and Bankruptcy, 1607-1900. Madison: The State Historical Society of Wisconsin. 1974.

Hansen, Bradley. “The Political Economy of Bankruptcy: The 1898 Act to Establish a Uniform System of Bankruptcy.” Essays in Economic and Business History 15 (1997): 155-71.

Hansen, Bradley. “Commercial Associations and the Creation of a National Economy: The Demand for Federal Bankruptcy Law.” Business History Review 72, Spring (1998): 86-113.

Hansen, Bradley. “The People’s Welfare and the Origins of Corporate Reorganization: The Wabash Receivership Reconsidered.” Business History Review 74, Autumn (2000): 377-405.

Martin, Albro. “Railroads and the Equity Receivership: An Essay on Institutional Change.” Journal of Economic History 34, (1974): 685-709.

Matthews, Barbara. Forgive Us Our Debts: Bankruptcy and Insolvency in America, 1763-1841. Ph.D. diss., Brown University, 1994.

Moss, David and Gibbs A. Johnson. “The Rise of Consumer Bankruptcy: Evolution, Revolution or Both?” American Bankruptcy Law Journal 73, Spring (1999): 311-51.

Sandage, Scott. Deadbeats, Drunkards and Dreamers: A Cultural History of Failure in America, 1819-1893. Ph.D. diss., Rutgers University, 1995.

Skeel, David A. “An Evolutionary Theory of Corporate Law and Corporate Bankruptcy.” Vanderbilt Law Review 51 (1998): 1325-1398.

Skeel, David A. “The Genius of the 1898 Bankruptcy Act.” Bankruptcy Developments Journal 15, (1999): 321-341.

Skeel, David A. Debt’s Dominion: A History of Bankruptcy Law in America. Princeton: Princeton University Press. 2001.

Sullivan, Theresa, Elizabeth Warren and Jay Westbrook. As We Forgive Our Debtors: Bankruptcy and Consumer Credit in America. Oxford: Oxford University Press. 1989.

Swain, H.H. “Economic Aspects of Railroad Receivership.” Economic Studies 3, (1898): 53-161.

Tufano, Peter. “Business Failure, Judicial Intervention, and Financial Innovation: Restructuring U.S. Railroads in the Nineteenth Century.” Business History Review 71, Spring (1997): 1-40.

United States. Report of the Attorney-General. Washington D.C.: GPO. Various years.

United States. Statistical Abstract of the United States. Washington D.C.: GPO. Various years.

United States. Historical Statistics of the United States: Bicentennial Edition. 1975.

Warren, Charles. Bankruptcy In United States History. Cambridge: Harvard University Press. 1935.

Citation: Hansen, Bradley. “Bankruptcy Law in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/bankruptcy-law-in-the-united-states/

The World of Private Banking

Author(s):Cassis, Youssef
Cottrell, Philip
Reviewer(s):Austin, Peter

Published by EH.NET (June 2010)

Youssef Cassis and Philip Cottrell, editors, The World of Private Banking. Aldershot, UK: Ashgate Publishing, 2009. xxv + 302 pp. $115 (hardcover), ISBN: 978-1-85928-432-2.

Reviewed for EH.NET by Peter Austin, Department of Interdisciplinary Studies, St. Edward’s University.


Occasionally, one has the chance to look simultaneously at something historical and something very much still with us. This applies to the business of money, which, if all goes well, is almost invisible to everyday life. Today issues of finance are more visible than usual, and a realm that prides itself on discretion is under scrutiny. The World of Private Banking represents a time when discretion and reputation were all. This edited volume contains fifteen chapters that group and connect in a sensible manner, so that reading the whole creates an impression of something greater than the sum of its parts. It is hugely descriptive, though for the most part it is not new scholarship. It covers various aspects of private banking from the late eighteenth century to the First World War, and a bit beyond. It has an expansiveness that belies the simplicity of its title.

If there is one name associated with private banking, it is Rothschild, and it is with this five-tentacled bank that the collection begins. In his “Rise of the Rothschilds,” Niall Ferguson portrays the bank as it was — a sort of multinational. He is most concerned with origins, the rise of the organization between 1810 and 1836 — that is, before the great banking changes of the mid-nineteenth century. Derived from his two-volume history, this essay is the collection’s one case study and concerns itself with Rothschild’s size, its bond-dominated business, the partnership structure, the family itself, and reasons for the bank’s success, including its well-known communications network.

In its role as “The World Pump,” the Rothschild House might today be called a “non-state actor,” and, as with such organizations today, myths grow up around their inclinations and capabilities. As Ferguson describes, Rothschild was indeed powerful, even in its early decades, based on a number of factors — not least of which were its great geographical reach and its freedom from reliance on any single market. Ferguson’s account is florid with personalities and comments about the ingredients of Rothschild’s operations. One of the most interesting aspects here is Ferguson’s revelation that the House often improvised in its operations, had no systematic accounting, and lost track of considerable amounts of money. If there is a weakness to this excellent essay, it is that Ferguson does not choose or prioritize the most important elements of Rothschild’s success. Was it superior communication, ruthlessness toward rivals, Jewish solidarity? In the end, for Ferguson, it appears that Rothschild’s performance came from a combination of things, but the bank remained at heart a family concern, from which emanated its intensity and its methods.

In the three decades after 1815, Rothschild’s closest rival was Barings, and John Orbell’s comment on the British house complements Ferguson well. Rothschild was much larger than Barings, but in his “Private Banks and International Finance in the Light of the Archives of Baring Brothers,” Orbell highlights the unparalleled range of Barings’ financial activities that assured it greater profitability. Ferguson’s lens focuses squarely and powerfully on one large piece of the private banking puzzle: Rothschild. By contrast, I believe Orbell does in exemplary fashion what this collection as a whole does so well: reviews and explains the vocabulary, mechanics, and roles associated with international finance generally and with the merchant/private banking enterprise specifically.

Orbell’s primary assignment here is to articulate the private banks’ source of greatest strength and longevity: their international scope. From Calcutta, Canton and Madrid to Rio de Janeiro, New Orleans and Moscow, Barings was active, and this remained an advantage that private banks maintained over their joint-stock rivals even after the latter began to eclipse private bankers in home markets after about 1850. Not only does this chapter locate Barings’ activities geographically, it places them in relation to Baring rivals. Perhaps of all the chapters, Orbell’s is unique for its weaving together of merchant banking themes with archival resources. The gaps are most interesting. In my own work on Barings in Canada, Massachusetts and England, I can indeed confirm the perplexing absence of material in the Barings record to do with trade finance before 1900. But in all, the Barings archive is complete and quite well-managed. It is an orderly record of one of the most important merchant banks to fuel modernization and growth around the world, particularly in the nineteenth century. Orbell illustrates this process magisterially.

Barings was one of the nineteenth century’s great international financial enterprises. In the United States, however, it had no peer before 1840, and Edwin Perkins takes a look at Barings’ operations there, along with snapshots of five other major banks in the United States, in his essay “The Anglo-American Houses in the Nineteenth Century.” A scholar whose early work focused on the House of Brown, Perkins describes the activities of banks in the antebellum United States, when it was an emerging market in the way we think of China, Brazil, or India today. Perkins reminds us that the American market exploded with activity after the Civil War, and that it was European capital, flowing increasingly to maturing American financial institutions, that helped to settle and develop the enormous home market of the United States — a home market so large and rich that the United States has historically deemphasized exports since independence. The largest theme in Perkins’ work is this transition to American financial control on its own soil.

In the 150 years or so before 1914, private banks were exclusive entities. Today, they are most often found within larger public companies, as in the so-called “private banking” division of a Wells Fargo Bank or even a Charles Schwab. The salience of Perkins’ essay is that we know that the development of a mature American banking system gained traction with the withdrawal of Barings after the financial difficulties of the Andrew Jackson years; the reasons for Barings’ withdrawal were varied, but in part had to do with the disagreeable style, pace, and practices of American finance. In the portraits of a Brown, a Seligman, a Kuhn Loeb, or a Morgan, this essay previews the rise of raw American financial power released in the 1840s and subsequently developed. The fortunes of the discreet, patrician private banking of the kind described here, particularly British, correspond inversely with the development of the American market and the spread of American democracy and values. Perkins’ essay describes the transition from a time when Anglo-American houses prioritized Britain and British finance to a time when they prioritized business on the western side of the Atlantic.

International themes continue with Alain Plessis’ interesting article on the “eccentric, quasi-magical world” of the so-called “Haute Banque” — a very “small group of powerful houses” in Paris, usually partnerships, international in orientation, whose membership was unofficial, changeable, and difficult to define. Their mystery was increased because (with the exception of Rothschild) Plessis finds these French banks left few records compared to their British and American counterparts. Unraveling events in business history is notoriously difficult — in financial history, particularly so. In contrast to other fields, personalities attracted to commerce and money tend not to be expressive, impressionistic, or prone to lengthy description, since they tend to see more value in action than in thinking and writing. There are exceptions here, of course, but the haute banque’s secrecy is in line with Pierpont Morgan’s French aphorism “pense moult, parle peu, écris rien” (“think a lot, say little, write nothing”).

International operations were the lifeblood of many private banks. But in the phrase of Alain Plessis, the Parisian haute banque was “a world open to foreigners” in a manner unlike others in private banking. Plessis describes cosmopolitan organizations “incompletely assimilated” into French elite society, since they had roots of foreign origin and desired to keep connections with family members outside France. To be sure, origins and loyalties ran by country. They also ran by faith. He describes wedlock alliances among Christians and among Jews in order to build banking organizations; major Jewish and Protestant bankers and their children were married off to foreign wives and husbands, to people established in France but with foreign origins, often of the same religion as themselves. Here was international banking with a vengeance. Here was the source of the Rothschild mystique, the combination of myth and reality mentioned by Ferguson, made more mercurial and (for those so inclined) more mysterious by family members moving from country to country to gather intelligence, find new markets, and keep family ties current.

Plessis on the haute banque introduces the reader to the general phenomenon of religious and ethnic minorities in trade and finance. Armenians in Turkey, Chinese in Malaya, Greeks in Cairo, and Lebanese in Buenos Aires come to mind. Here, authors Ginette Kurgan-van Hentenryk and Martin Körner concentrate on the idea of financial solidarity along religious lines with their chapters on Jewish and Protestant banking.

Kurgan-van Hentenryk broadens aspects of Plessis’ essay as she covers the origins of the haute banque at the time of the Bourbon Restoration, a closed circle of twenty banks of Protestant and Jewish financiers that placed loans for Europe’s conservative monarchies after 1815. But she does so much more. Here is the story of Jewish private banking and its spread across Europe in the nineteenth century, with eminent names like Stern, Bischoffsheim, Bleichröder, Fould, Oppenheim, Goldschmidt, Cassel, Lazard, Mendelssohn, Seligman, and Rothschild; and later in the United States with Warburg, Schiff, Goldman, and Soros.

Kurgan-van Hentenryk divides Jewish banking activity into four phases: the Hofjuden period, the nineteenth century through the First World War, the interwar/Nazi period, and the post-1945 years. At all times, she says, Jewish private banking based itself on trade — whether in commodities, capital, or most recently in ideas and services. It is a fascinating journey in many respects. The author emphasizes that, particularly before the 1850s, much of the Jewish private banking story took place in Austria and the German states (Vienna, Frankfurt-am-Main, Cologne, Hamburg, Berlin), from which it ramified to other parts of Europe, the United States, and Europe’s colonial possessions.

It is the story of financial diaspora. It is also the story of risk-taking in the face of adversity. Much of Kurgan-van Hentenryk’s essay discusses Jewish participation in projects many non-Jewish private bankers spurned: railroads and early industrial finance. In this regard, Jewish private bankers, as described, were integral to the early development and promotion of joint-stock banks, a process that culminated in the creation of the Crédit Mobilier in France and the so-called D-banks in Germany. Kurgan-van Hentenryk illustrates the quick changes to finance during the middle decades of the nineteenth century with the new “mixed” banking or joint-stock instruments. Joint-stock banks were, after all, as key to the finance of the 1871 Franco-Prussian War indemnity as private banks (Barings, Rothschild) had been to the Napoleonic indemnity of 1815. A shift of instruments so profound over just a few decades seems worthy of the phrase “Big Bang.”

What did not change was a certain anti-Semitism that persisted on the Continent, of course, well into the twentieth century. It was a prejudice, according to Kurgan-van Hentenryk, not easily mitigated by wealth, accomplishment, or education. In this regard, she describes a defensive and fascinating kind of clannish behavior, the important role of women in sustaining family ties, and a historical pattern of strict endogamy with the goal of deepening networks and of conserving and increasing wealth among families. Weaving through her account is the presence of the Rothschilds, and it is unclear whether the general fortunes of Jewish bankers were hurt or helped by the blossoming of the Rothschild house after 1815. In this excellent account, the differences, if any, among Sephardic, Ashkenazi, and even Hasidic Jews in their associations, networks, or business successes are also unclear. After musing on the influence of Jewish financiers in politics, Kurgan-van Hentenryk ends with a question: what path is next for Jewish private bankers, integration or some sort of innovation? Whatever the path, she implies adaptability and survival for Jewish bankers, private or not.

Following this account, Martin Körner turns our attention to Protestant financiers, who he says operated “from Lisbon to St. Petersburg” by the eighteenth century. Though likewise a minority, Protestant bankers occupy a historically much less clear place for Körner than Jewish bankers do for Kurgan-van Hentenryk, even in the wake of the Reformation.

Körner describes solidarity among Protestant bankers in the sixteenth century and the financial networks that started to form — first in several parts of Switzerland, later between various European Protestant groups in the German states and between Huguenot factions in France. This said, Körner devotes most space to the growth of Swiss (Calvinist) financial power, particularly in relation to France. He recounts in highly technical terms the money transfer routes of Protestant bankers who used Geneva as a financial hub, and, like several essays in this collection, Körner’s account is useful for explaining the mechanics of government loan finance. But the chapter remains in large measure a description of Swiss Protestant bankers’ influence on the French crown. Starting with the reign of Louis XIII, Körner depicts the start of a sort of Huguenot haute banque, which only grew in influence with the French court as demand for capital increased under the ambitious Louis XIV. What is fascinating to see here is Catholic monarchs elevating Protestant bankers to positions of social and political power in Catholic countries in periods of inter-denominational pressure. This is particularly arresting given that the pattern survived in France even after the 1685 revocation of the Edict of Nantes.

It is indeed interesting to see Körner explain how Huguenots fled France during her wars of religion and set up shop as merchants and bankers in all the economic centers of Europe. The difficulty here is that, except for Paris, these other “economic centers of Europe” are, in the main, given short attention. And while this essay has clear strengths, it leaves significant areas tantalizingly unaddressed. Lutherans, Anglicans, Anabaptists, and Methodists go unmentioned, as do the regions in which they operated. Did they form networks? Even if this essay’s focus were only Swiss/Calvinist-French relations, one large weakness would remain. Körner does not provide a reason why Catholic monarchs and princes did not employ Catholic bankers. It is true that Catholics at times accepted Protestants to avoid the services of Jews, as Körner mentions, but were Catholic bankers inadequate to solve the financial exigencies that befell France, for example, after her Religious Wars? Were the financial troubles of the pre-Revolutionary decades so unusual that His Most Catholic Majesty Louis XVI could only summon the services of the talented and Protestant Jacques Necker? Körner is frustratingly mute.

If Ferguson (Rothschild), Orbell (Barings), and Perkins (Brown et al.) treat the overarching development of the private bank, the volume’s editors, Youssef Cassis and Philip Cottrell, treat its crisis in two substantial contributions.

In his masterful “Private Banks and the Onset of the Corporate Economy,” Cassis describes the emergence of a “new bank” between 1835 and 1865, which he says represented a seismic change in savings and financial participation by the populations of Europe. This joint-stock, deposit, and investment banking vehicle presaged the onset of the unprecedentedly large capital accumulations demanded by a rapidly industrializing European society in the half-century before the First World War.

Cassis’ essay is a description of slow change across time, not of decline and quick fall. It first reviews what a private bank was — its character, purpose, legal form, and pedigree. Cassis then describes the great advantage of the private bank in the long term: not the servicing of small and medium-sized businesses in its various domestic locales, but the financing of international trade and the issuance of foreign loans — that is, the exclusive world of the haute banque. Though the term is French, Cassis follows the idea of the haute banque from Paris and Brussels to Berlin and Vienna, and the discussion is a good complement to Plessis’ chapter. However, if there is an emphasis here, it is Britain, where one can see the effect of joint-stock banking on private bankers most clearly. Nowhere was the decline of the private banker steeper than in Britain, says Cassis, “yet nowhere did private bankers flourish more than in the City of London.” Here he presents the central paradox of the nineteenth century related to joint-stock ascendancy: while private bankers lost ground as domestic deposit institutions throughout Britain as a whole, they redoubled their commitment to international activities, which strengthened financiers in the City, particularly in short-term acceptances.

Philip Cottrell drives home Cassis’ case for Britain with his study of the actual mechanisms that changed finance in the City of London: the legislation of 1826, the arrival of limited liability laws, and the explosion of domestic limited joint-stock banking in the early 1860s, measures he calls collectively “London’s First ‘Big Bang.’” In addition, Cottrell surveys the competition to private banks, particularly in the international sphere, after the growth of joint-stock banks. Written about so well by Geoffrey Jones, these limited-liability laws, followed by the 1862 Companies Act, greatly expanded overseas corporate banks and colonial banking, and even spurred the formation of myriad varieties of finance companies. “The ‘Big Bang’ largely sounded the death knell of personal private enterprise within most of London’s financial markets,” writes Cottrell. “Private banking persisted in the City, but its days were numbered.” As Cottrell and Cassis comment, the decline would take time, and David Kynaston also contributes to this discussion of decline (see below). Cassis and Cottrell (among others in this collection) voice the central irony that private bankers sowed the seeds of their own destruction by sometimes creating joint-stock banks as vehicles to finance industrial projects; these, in the end, despite the private bankers’ best efforts at control, ultimately replaced them, certainly domestically.

Dieter Ziegler gives us a look at Germany. Specifically, he asserts that Alexander Gerschenkron’s explanation for the first capital driver of nineteenth-century German industrialization should point to private banks, not universal banks. Here we have a specific substantiation of Kurgan-van Hentenryk’s account (“Jewish Private Banks”) of the origins of the D-banks. We also have a substantiation of both Jewish and private inputs to railroad and industrial finance before the full onset of joint-stock banking, which was resisted with few exceptions (e.g., Bavaria) throughout the German states, including Prussia. Nevertheless, inspired by the Crédit Mobilier after 1852, the innovative consortiums assembled by private bankers in the German states and Hapsburg empire, Ziegler finds, “proved to be the decisive factor for the nascent universal banks” that financed the earliest railroad projects (e.g., the 1836 line from Vienna to Bochnia in Galicia).

Of course, one of the facts of banking is that joint-stock banks began to trump private bank capital in Europe and the United States after 1850. Nevertheless, Ziegler is concerned with timing. Gerschenkron neglected to show that the first successful joint-stock banks were founded by experienced private bankers; thus the start of Gerschenkron’s leading-sector take-off had a private bank “spark-plug.” By the mid-1850s, when the first stock credit banks were founded, the basic railway net connecting almost all important Zollverein states was already built.

Ziegler says that historians should tweak Gerschenkron to include the input of private bankers in the German industrial story. What of Italy? Do we need to adjust Gerschenkron there as well? Luciano Segreto thinks so. In his “Private Bankers and Italian Industrialization,” Segreto describes a pre-unification Italy with few consequential financial institutions, a peninsular quilt of regions and cities through which a few private bankers, often Protestants or Jews, threaded their way, and whose strongest financial contacts were with interests outside Italy itself. He finds no competence or inclination to cooperate on anything like an Italian Zollverein.

At times, Segreto gives the impression of impatience with the historical circumstances he describes before the birth of the Kingdom of Italy. In the pre-unification period, for example, Segreto describes attempts to form Italian financial organs based on sericulture or shipbuilding in the manner of Belgium’s Société Générale, or the later Crédit Mobilier and Credit Anstalt. He laments, however, that these enterprises were “too advanced for the times and above all for the socio-economic context in which [they] operated, [which was, before unification,] still loath to make a coherent commitment to industrial development.”

Many things changed in the 1870s. Suddenly there were national projects, and private bankers who had once individually identified only with particular states or with foreign interests were called on to underwrite large projects with a nationwide scope, such as railroads — so that bankers from Genoa, Turin, Livorno, and Florence were brought together for a common purpose. Cooperation also occurred on a regional basis, with no banking center more active than Milan, now free from Austrian surveillance. Segreto points out that, by the 1880s, Milanese commercial banks had joined forces with banks in Turin and Genoa. The assembly of an Italian credit system led to a national banking system, and Segreto parallels the fever of bank establishment with that of antebellum America or Meiji Japan. In this expansive environment, Segreto implies, private bankers with political ties were active in such sectors as foodstuffs, petroleum, textiles, mining, transport, and real estate, but they were, in Segreto’s words, “flanked by the large commercial banks.”

Unfortunately, parts of this essay are quite difficult and vague, making it unclear until the last section what exactly private bankers’ roles were in post-unification Italy. Moreover, Segreto presents mixed banks as a feature of Italy by 1914, but it is far from clear how we got there. Whatever the path, however, the destination emerges from Segreto’s essay. He asserts that private bankers played a particular role after 1890 — something Segreto calls “functional ‘re-specialization.’” After several decades in which “all operators in the sector” (I assume the financial sector) were industrial-financial generalists of a sort, Segreto finds that private bankers switched to the role of facilitator and smooth point of contact between industry and the mixed bank. He sees the private banker as the subtle deal-closer in a mixed-bank venue, and substantiates his assertion with a persuasive chart that lists private-banker involvement in 31 major industrial enterprises in Italy from 1884 to 1913. Segreto also reports the decline of private-banker ranks in the years after the First World War. He implies that the less-than-subtle events of the 1920s and 1930s had something to do with this.

J.P. Morgan’s motto may have been to “write nothing” (écris rien). When carried out, this makes business history research difficult. However, written archives do exist, and readers will find four sections (five authors) on the archives of various family businesses and banks in this collection — two British, one Continental, and the Rothschilds, which straddled both. These essays break up The World of Private Banking nicely and provide updates, insights, and personnel connected to research collections. They also tease researchers with leads to plug holes in the financial history literature.

Except perhaps for John Orbell’s chapter on the well-established Barings archive, the archive chapters remind the reader that the nature of archives is fluid. Even with the oft-studied House of Rothschild, Melanie Aspey points out that a large portion of the records of the Vienna branch were retrieved from Moscow less than a decade ago. Aspey’s partner on the Rothschild archive chapter, Victor Gray, corroborates Niall Ferguson’s comment that the papers remain split among the French, Austrian, English, German, and Italian (Naples) branches. Of these, the London archive is the most complete. But according to Gray, we may never know what we are truly missing, since all the Houses of Rothschild were subject to what all private banks are subject to: periodic purging by family members.

Still, millions of letters need cataloguing, owing to their volume, the difficulty of categorization, and language — six languages are used in the Rothschild papers. Language is a barrier also to what Victor Gray sees as the treasure trove of the House: the Judeo-German (Judendeutsch) correspondence, written in German using Hebrew letters. These are Rothschild family and business letters, used to skirt competitors and to survive as Jews in the police state of Metternich. As of 1998, only one in seven of these letters had been translated. Additionally, there are hundreds of thousands of international letters from Rothschild correspondents and agents which are only now starting to get scholarly attention, but which remain largely unexplored. John Orbell mentions something similar about Barings’ London Wall accounting records, which (I can attest) are vast, complete, yet seldom used, and await the eyes of a scholar of a certain temperament.

As Gray and Aspey’s archive discussion complements Ferguson’s Rothschild chapter, so Gabriele Teichmann’s discussion of the papers of Salomon Oppenheim Jr. & Company complements Ziegler’s chapter on private bankers and German industrialization. For that matter, one could sensibly pair it with Kurgan-van Hentenryk’s “Jewish Private Banks.”

Teichmann’s chapter is useful as an advertisement for an archive of intrinsic importance. Oppenheim was an institution active in the many industrial sectors of a country which, upon unification, proved the most potent in twentieth-century Europe: Germany. In her discussion of archive resources and the Oppenheim family, Teichmann highlights Cologne, a pivotal city for the history of the industrial Rhineland, and hence for the history of twentieth-century Europe. And it is not without irony that this contributor to German vitality was a Jew.

The last part of Teichmann’s account, called “Social Studies,” explores family-related topics of the Oppenheims. This is the exclusive focus of Fiona Maccoll’s “Banking and Family Archives,” the fourth of the four archives chapters. Here, Maccoll reinforces the idea of family as a cardinal difference between private and other bank types. Initially, I found Maccoll too prolix with step-by-step family data — that is to say, who said what, to whom, and when. That task is for the researcher to discover and present. However, the archivist can be the handmaiden in this endeavor, and Maccoll is. Her chapter steers the reader to archival materials that involve people, family, and relationships. Seemingly banal, the idea of family was one of the distinguishing entrance criteria for private bankers until its twilight in the late twentieth century. And it is the potential for personal information relevant to operations that is so seductive about the Rothschild Judendeutsch letters, according to Gray and Aspey. For Maccoll, though, family papers provide data on private banking operations — sometimes indirect, sometimes oblique — that simply does not exist in other banking venues.

Some material in these chapters will not be as useful to those familiar with archives as to those newer to the field. Still, the range presented here, from French (Gray and Aspey) to German (Teichmann) to British (Maccoll and Orbell), has something for everyone, regardless of experience. Finally, the internet has transformed so many things, and private bank archives are no exception. Gray addresses these issues at some length in regard to the Rothschild archives.

I suppose it could be said that a banker spends half his life making money, the other half giving it away. Pat Thane touches on the issue of “giving it away” in her chapter “Private Banking and Philanthropy: the City of London, 1880s-1920s.” It is one of the half dozen essays one should read here if pressed for time, not for its superiority per se, but because it bears on a dimension of money-making not touched elsewhere in the collection. Thane’s chronological focus is tight, her themes limited for the most part to the British Royals and Jewish philanthropy, and her essay is effective as it stands. Readers may grow impatient with Thane’s dependence on Frank Prochaska’s work for her Edwardian discussion. And though there is rich coverage of Baron and Baroness de Hirsch, the Bischoffsheims, and Ernest Cassel, some will likely find the account less than satisfying, with Schroeder’s the sole House outside the Jewish sphere. What of Barings, Hambro, and Coutts, or the Quaker legacy? To say nothing of moving the chronology to the earlier decades of the nineteenth century? These queries aside, I suspect that the ambition of the essay was deliberately and ruthlessly limited, and, for what it does, it does quite well. My complaints are meant to inspire others to complete the task that Thane has begun. She has whetted appetites terrifically.

David Kynaston closes this collection with thoughts on the years in the City after private banking’s “moment” had passed: its denouement from 1914 to 1986. He depicts a vocation aware of its decline — a “closed world, in which family, wealth and social connections counted for more than industry or ability.” He describes a world anchored to a past ideal, a pre-1914 order of Old Etonians, ill-suited to compete in a time that was starting to see nothing irregular or wrong in the rise of a clerk to bank president. One example of Kynaston’s idea of nebulous decline is Edmund de Rothschild’s 1998 memoir, A Gilt-Edged Life. Here, Kynaston describes a comfortable, floating life; a scion of a rather laconic, somewhat frivolous dying breed — reminiscent of the exhaustion of Thomas Mann’s Buddenbrooks — without the animal spirits needed to survive in the rough-and-tumble world of the later twentieth century. Kynaston illustrates this sense of floating among private banking families with other convincing anecdotes of the 1950s, 1960s, and 1970s. The second “Big Bang” (see Cottrell for the first) made this intangible sense of drift and decline abruptly concrete for the private City banker in 1986, as the Houses of Lazard, Warburg, Hill Samuel, and others — once financial whales — became minnows, and new whales arose with names like Citibank, Chase Manhattan, and Bankers Trust. My own work on Barings illustrates this well. Its conservative principles allowed the partnership to weather the Panic of 1837 brilliantly. Unfortunately, Barings’ culture learned the wrong lessons from these successes, and it failed to adapt and innovate in later years. Indeed, by the first time Sir Peter Baring heard of the “clerk” Nicholas Leeson, it was too late. Kynaston artfully declares the demise of private banking in the City, certainly in its classic form, for only after death can one call for obituaries, which he does. In the main, the private banks are gone. Long live the private banks, Kynaston says — in house histories!

One need not read this book chapter by chapter in order. I recommend the reader start anywhere in the book and fan out. I have followed this free course in my remarks above.

In closing, one of the virtues of this collection is the overlapping explanation by several authors of the same terms of trade and finance. Multiple mentions of acceptances, bills of exchange, country banks, and merchant banking in different contexts, as well as of key dates in the financial history of the period this volume covers, provide a review for the expert and a primer for the novice.

Technically, I appreciated the publisher’s decision to use footnotes rather than less convenient endnotes. Wherever located, though, the citations and bibliography present a fantastic tour of current and classical literature on finance and banking, with lacunae only of Peter Temin, W.W. Rostow, John McCusker, and Peter Rousseau.

This is not easy material. However, the level of writing in this volume is high, no doubt made higher by skilled editing. The uses of this collection are many, not least as a tonic for the current American fashion of presenting globalization as something new. On most every page, one finds accounts of men and organizations working in the business of international, indeed global, affairs since the start of the nineteenth century. Part research guide, part family history, and part financial/trade primer, this collection is, finally, part museum piece — for the world of the private banker is largely gone. Nonetheless, like good museums, this book repays a visit, has much to teach about the present, and presents important things knowledgeably and with style.


Peter E. Austin is a historian at St. Edward’s University in Austin. He is the author of Baring Brothers and the Birth of Modern Finance (Pickering & Chatto, 2007). He is currently at work on a book on the 1960s.

Subject(s):Business History
Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s):Europe
Time Period(s):18th Century
19th Century
20th Century: Pre WWII