EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

Turnpikes and Toll Roads in Nineteenth-Century America

Daniel B. Klein, Santa Clara University and John Majewski, University of California – Santa Barbara 1

Private turnpikes were business corporations that built and maintained a road for the right to collect fees from travelers.2 Accounts of the nineteenth-century transportation revolution often treat turnpikes as merely a prelude to more important improvements such as canals and railroads. Turnpikes, however, left important social and political imprints on the communities that debated and supported them. Although turnpikes rarely paid dividends or other forms of direct profit, they nevertheless attracted enough capital to expand both the coverage and quality of the U. S. road system. Turnpikes demonstrated how nineteenth-century Americans integrated elements of the modern corporation – with its emphasis on profit-taking residual claimants – with non-pecuniary motivations such as use and esteem.

Private road building came and went in waves throughout the nineteenth century and across the country, with between 2,500 and 3,200 companies successfully financing, building, and operating their toll roads. There were three especially important episodes of toll road construction: the turnpike era of the eastern states, 1792 to 1845; the plank road boom, 1847 to 1853; and the toll roads of the far West, 1850 to 1902.

The Turnpike Era, 1792–1845

Prior to the 1790s Americans had no direct experience with private turnpikes; roads were built, financed and managed mainly by town governments. Typically, townships levied a road labor tax. The State of New York, for example, assessed eligible males a minimum of three days of roadwork under penalty of a fine of one dollar. The labor requirement could be avoided if the worker paid a fee of 62.5 cents a day. As with public works of any kind, incentives were weak because the chain of activity could not be traced to a residual claimant – that is, private owners who claim the “residuals,” profit or loss. The laborers were brought together in a transitory, disconnected manner. Since overseers and laborers were commonly farmers, too often the crop schedule, rather than road deterioration, dictated the repair schedule. Except in cases of special appropriations, financing came in dribbles deriving mostly from the fines and commutations of the assessed inhabitants. Commissioners could hardly lay plans for decisive improvements. When a needed connection passed through unsettled lands, it was especially difficult to mobilize labor because assessments could be worked out only in the district in which the laborer resided. Because work areas were divided into districts, as well as into towns, problems arose coordinating the various jurisdictions. Road conditions thus remained inadequate, as New York’s governors often acknowledged publicly (Klein and Majewski 1992, 472-75).

For Americans looking for better connections to markets, the poor state of the road system was a major problem. In 1790, a viable steamboat had not yet been built, canal construction was hard to finance and limited in scope, and the first American railroad would not be completed for another forty years. Better transportation meant, above all, better highways. State and local governments, however, had small bureaucracies and limited budgets which prevented a substantial public sector response. Turnpikes, in essence, were organizational innovations born of necessity – “the states admitted that they were unequal to the task and enlisted the aid of private enterprise” (Durrenberger 1931, 37).

America’s very limited and lackluster experience with the publicly operated toll roads of the 1780s hardly portended a future boom in private toll roads, but the success of private toll bridges may have inspired some future turnpike companies. From 1786 to 1798, fifty-nine private toll bridge companies were chartered in the northeast, beginning with Boston’s Charles River Bridge, which brought investors an average annual return of 10.5 percent in its first six years (Davis 1917, II, 188). Private toll bridges operated without many of the regulations that would hamper the private toll roads that soon followed, such as mandatory toll exemptions and conflicts over the location of toll gates. Also, toll bridges, by their very nature, faced little toll evasion, which was a serious problem for toll roads.

The more significant predecessor to America’s private toll road movement was Britain’s success with private toll roads. Beginning in 1663 and peaking from 1750 to 1772, Britain experienced a private turnpike movement large enough to acquire the nickname “turnpike mania” (Pawson 1977, 151). Although the British movement inspired the future American turnpike movement, the institutional differences between the two were substantial. Most important, perhaps, was the difference in their organizational forms. British turnpikes were incorporated as trusts – non-profit organizations financed by bonds – while American turnpikes were stock-financed corporations seemingly organized to pay dividends, though acting within narrow limits determined by the charter. Contrary to modern sensibilities, this difference made the British trusts, which operated under the firm expectation of fulfilling bond obligations, more intent and more successful in garnering residuals. In contrast, for the American turnpikes the hope of dividends was merely a faint hope, and never a legal obligation. Odd as it sounds, the stock-financed “business” corporation was better suited to operating the project as a civic enterprise, paying out returns in use and esteem rather than cash.

The first private turnpike in the United States was chartered by Pennsylvania in 1792 and opened two years later. Spanning 62 miles between Philadelphia and Lancaster, it quickly attracted the attention of merchants in other states, who recognized its potential to direct commerce away from their regions. Soon lawmakers from those states began chartering turnpikes. By 1800, 69 turnpike companies had been chartered throughout the country, especially in Connecticut (23) and New York (13). Over the next decade nearly six times as many turnpikes were incorporated (398). Table 1 shows that in the mid-Atlantic and New England states between 1800 and 1830, turnpike companies accounted for 27 percent of all business incorporations.

Table 1: Turnpikes as a Percentage of All Business Incorporations,
by Special and General Acts, 1800-1830

As shown in Table 2, a wider set of states had incorporated 1562 turnpikes by the end of 1845. Somewhere between 50 and 70 percent of these succeeded in building and operating toll roads. A variety of regulatory and economic conditions – outlined below – account for why a relatively low percentage of chartered turnpikes became going concerns. In New York, for example, tolls could be collected only after turnpikes passed inspections, which were typically conducted after ten miles of roadway had been built. Only 35 to 40 percent of New York turnpike projects – or about 165 companies – reached operational status. In Connecticut, by contrast, where settlement covered the state and turnpikes more often took over existing roadbeds, construction costs were much lower and about 87 percent of the companies reached operation (Taylor 1934, 210).

Table 2: Turnpike Incorporation, 1792-1845

State 1792-1800 1801-10 1811-20 1821-30 1831-40 1841-45 Total
NH 4 45 5 1 4 0 59
VT 9 19 15 7 4 3 57
MA 9 80 8 16 1 1 115
RI 3 13 8 13 3 1 41
CT 23 37 16 24 13 0 113
NY 13 126 133 75 83 27 457
PA 5 39 101 59 101 37 342
NJ 0 22 22 3 3 0 50
VA 0 6 7 8 25 0 46
MD 3 9 33 12 14 7 78
OH 0 2 14 12 114 62 204
Total 69 398 362 230 365 138 1562

Source: Klein and Fielding 1992: 325.

Although the states of Pennsylvania, Virginia and Ohio subsidized privately-operated turnpike companies, most turnpikes were financed solely by private stock subscription and structured to pay dividends. This was a significant achievement, considering the large construction costs (averaging around $1,500 to $2,000 per mile) and the typical length (15 to 40 miles). But the achievement was most striking because, as New England historian Edward Kirkland (1948, 45) put it, “the turnpikes did not make money. As a whole this was true; as a rule it was clear from the beginning.” Organizers and “investors” generally regarded the initial proceeds from sale of stock as a fund from which to build the facility, which would then earn enough in toll receipts to cover operating expenses. One might hope for dividend payments as well, but “it seems to have been generally known long before the rush of construction subsided that turnpike stock was worthless” (Wood 1919, 63).3

Turnpikes promised little in the way of direct dividends and profits, but they offered potentially large indirect benefits. Because turnpikes facilitated movement and trade, nearby merchants, farmers, land owners, and ordinary residents would benefit from a turnpike. Gazetteer Thomas F. Gordon aptly summarized the relationship between these “indirect benefits” and investment in turnpikes: “None have yielded profitable returns to the stockholders, but everyone feels that he has been repaid for his expenditures in the improved value of his lands, and the economy of business” (quoted in Majewski 2000, 49). Gordon’s statement raises an important question. If one could not be excluded from benefiting from a turnpike, and if dividends were not in the offing, what incentive would anyone have to help finance turnpike construction? The turnpike communities faced a serious free-rider problem.

Nevertheless, hundreds of communities overcame the free-rider problem, mostly through a civic-minded culture that encouraged investment for long-term community gain. Alexis de Tocqueville observed that, excepting those of the South, Americans were infused with a spirit of public-mindedness. Their strong sense of community spirit resulted in the funding of schools, libraries, hospitals, churches, canals, dredging companies, wharves, and water companies, as well as turnpikes (Goodrich 1948). Vibrant community and cooperation sprang, according to Tocqueville, from the fertile ground of liberty:

If it is a question of taking a road past his property, [a man] sees at once that this small public matter has a bearing on his greatest private interests, and there is no need to point out to him the close connection between his private profit and the general interest. … Local liberties, then, which induce a great number of citizens to value the affection of their kindred and neighbors, bring men constantly into contact, despite the instincts which separate them, and force them to help one another. … The free institutions of the United States and the political rights enjoyed there provide a thousand continual reminders to every citizen that he lives in society. … Having no particular reason to hate others, since he is neither their slave nor their master, the American’s heart easily inclines toward benevolence. At first it is of necessity that men attend to the public interest, afterward by choice. What had been calculation becomes instinct. By dint of working for the good of his fellow citizens, he in the end acquires a habit and taste for serving them. … I maintain that there is only one effective remedy against the evils which equality may cause, and that is political liberty (Alexis de Tocqueville, 511-13, Lawrence/Mayer edition).

Tocqueville’s testimonial is broad and general, but its accuracy is seen in the archival records and local histories of the turnpike communities. Stockholders’ lists reveal a web of neighbors, kin, and locally prominent figures voluntarily contributing to what they saw as an important community improvement. Appeals made in newspapers, local speeches, town meetings, door-to-door solicitations, correspondence, and negotiations in assembling the route stressed the importance of community improvement rather than dividends.4 Furthermore, many toll road projects involved the effort to build a monument and symbol of the community. Participating in a company by donating cash or giving moral support was a relatively rewarding way of establishing public services; it was pursued at least in part for the sake of community romance and adventure as ends in themselves (Brown 1973, 68). It should be noted that turnpikes were not entirely exceptional enterprises in the early nineteenth century. In many fields, the corporate form had a public-service ethos, aimed not primarily at paying dividends, but at serving the community (Handlin and Handlin 1945, 22, Goodrich 1948, 306, Hurst 1970, 15).

Given the importance of community activism and long-term gains, most “investors” tended to be not outside speculators, but locals positioned to enjoy the turnpikes’ indirect benefits. “But with a few exceptions, the vast majority of the stockholders in turnpike were farmers, land speculators, merchants or individuals and firms interested in commerce” (Durrenberger 1931, 104). A large number of ordinary households held turnpike stock. Pennsylvania compiled the most complete set of investment records, which show that more than 24,000 individuals purchased turnpike or toll bridge stock between 1800 and 1821. The average holding was $250 worth of stock, and the median was less than $150 (Majewski 2001). Such sums indicate that most turnpike investors were wealthier than the average citizen, but hardly part of the urban elite that dominated larger corporations such as the Bank of the United States. County-level studies indicate that most turnpike investment came from farmers and artisans, as opposed to the merchants and professionals more usually associated with early corporations (Majewski 2000, 49-53).

Turnpikes became symbols of civic pride only after enduring a period of substantial controversy. In the 1790s and early 1800s, some Americans feared that turnpikes would become “engrossing monopolists” who would charge travelers exorbitant tolls or abuse eminent domain privileges. Others simply did not want to pay for travel that had formerly been free. To conciliate these different groups, legislators wrote numerous restrictions into turnpike charters. Toll gates, for example, often could be spaced no closer than every five or even ten miles. This regulation enabled some users to travel without encountering a toll gate, and eased the practice of steering horses and the high-mounted vehicles of the day off the main road so as to evade the toll gate, a practice known as “shunpiking.” The charters or general laws also granted numerous exemptions from toll payment. In New York, the exempt included people traveling on family business, those attending or returning from church services and funerals, town meetings, blacksmiths’ shops, those on military duty, and those who lived within one mile of a toll gate. In Massachusetts some of the same trips were exempt and also anyone residing in the town where the gate was placed and anyone “on the common and ordinary business of family concerns” (Laws of Massachusetts 1805, chapter 79, 649). In the face of exemptions and shunpiking, turnpike operators sometimes petitioned authorities for a toll hike, stiffer penalties against shunpikers, or the relocating of the toll gate. The record indicates that petitioning the legislature for such relief was a costly and uncertain affair (Klein and Majewski 1992, 496-98).

In view of the difficult regulatory environment and apparent free-rider problem, the success of early turnpikes in raising money and improving roads was striking. The movement built new roads at rates previously unheard of in America. Table 3 gives ballpark estimates of the cumulative investment in constructing turnpikes up to 1830 in New England and the Middle Atlantic. Repair and maintenance costs are excluded. These construction investment figures are probably too low – they generally exclude, for example, toll revenue that might have been used to finish construction – but they nevertheless indicate the ability of private initiatives to raise money in an economy in which capital was in short supply. Turnpike companies in these states raised more than $24 million by 1830, an amount equaling 6.15 percent of those states’ 1830 GDP. To put this into comparative perspective, between 1956 and 1995 all levels of government spent $330 billion (in 1996 dollars) in building the interstate highway system, a cumulative total equaling only 4.30 percent of 1996 GDP.

Table 3
Cumulative Turnpike Investment (1800-1830) as percentage of 1830 GDP

State Cumulative Turnpike Investment, 1800-1830 ($) Cumulative Turnpike Investment as Percent of 1830 GDP Cumulative Turnpike Investment per Capita, 1830 ($)
Maine 35,000 0.16 0.09
New Hampshire 575,100 2.11 2.14
Vermont 484,000 3.37 1.72
Massachusetts 4,200,000 7.41 6.88
Rhode Island 140,000 1.54 1.44
Connecticut 1,036,160 4.68 3.48
New Jersey 1,100,000 4.79 3.43
New York 9,000,000 7.06 4.69
Pennsylvania 6,400,000 6.67 4.75
Maryland 1,500,000 3.85 3.36
TOTAL 24,470,260 6.15 4.49
Interstate Highway System, 1956-1996 330 Billion 4.15 (1996 GNP)

Sources: Pennsylvania turnpike investment: Durrenberger 1931: 61; New England turnpike investment: Taylor 1934: 210-11; New York, New Jersey, and Maryland turnpike investment: Fishlow 2000, 549. Only private investment is included. State GDP data come from Bodenhorn 2000: 237. Figures for the cost of the Interstate Highway System can be found at http://www.publicpurpose.com/hwy-is$.htm. Please note that our investment figures generally do not include investment to finish roads by loans or the use of toll revenue. The table therefore underestimates investment in turnpikes.
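The per-state figures in Table 3 can be checked against the reported total with simple arithmetic. The sketch below uses only numbers taken from the table itself:

```python
# Sum the per-state cumulative turnpike investment figures from Table 3
# and compare against the reported total of $24,470,260 (all figures
# come from the article; nothing here is new data).
investment = {
    "Maine": 35_000,
    "New Hampshire": 575_100,
    "Vermont": 484_000,
    "Massachusetts": 4_200_000,
    "Rhode Island": 140_000,
    "Connecticut": 1_036_160,
    "New Jersey": 1_100_000,
    "New York": 9_000_000,
    "Pennsylvania": 6_400_000,
    "Maryland": 1_500_000,
}

total = sum(investment.values())
print(f"{total:,}")  # 24,470,260
```

The state figures do in fact sum exactly to the table's total.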

The organizational advantages of turnpike companies relative to government roads not only generated more road mileage, but also higher quality roads (Taylor 1934, 334, Parks 1967, 23, 27). New York state gazetteer Horatio Spafford (1824, 125) wrote that turnpikes have been “an excellent school, in every road district, and people now work the highways to much better advantage than formerly.” Companies worked to intelligently develop roadway to achieve connective communication. The corporate form traversed town and county boundaries, so a single company could bring what would otherwise be separate segments together into a single organization. “Merchants and traders in New York sponsored pikes leading across northern New Jersey in order to tap the Delaware Valley trade which would otherwise have gone to Philadelphia” (Lane 1939, 156).

Turnpike networks became highly organized systems that sought to find the most efficient way of connecting eastern cities with western markets. Decades before the Erie Canal, private individuals realized the natural opening through the Appalachians and planned a system of turnpikes connecting Albany to Syracuse and beyond. Figure 1 shows the principal routes westward from Albany. The upper route begins with the Albany & Schenectady Turnpike, connects to the Mohawk Turnpike, and then the Seneca Turnpike. The lower route begins with the First Great Western Turnpike and then branches at Cherry Valley into the Second and Third Great Western Turnpikes. Corporate papers of these companies reveal that organizers of different companies talked to each other; they were quite capable of coordinating their intentions and planning mutually beneficial activities by voluntary means. When the Erie Canal was completed in 1825 it roughly followed the alignment of the upper route and greatly reduced travel on the competing turnpikes (Baer, Klein, and Majewski 1992).

Figure 1: Turnpike Network in Central New York, 1845

Another excellent example of turnpike integration was the Pittsburgh Pike. The Pennsylvania route consisted of a combination of five turnpike companies, each of which built a road segment connecting Pittsburgh and Harrisburg, where travelers could take another series of turnpikes to Philadelphia. Completed in 1820, the Pittsburgh Pike greatly improved freighting over the rugged Allegheny Mountains. Freight rates between Philadelphia and Pittsburgh were cut in half because wagons increased their capacity, speed, and certainty (Reiser 1951, 76-77). Although the state government invested in the companies that formed the Pittsburgh Pike, records of the two companies for which we have complete investment information show that private interests contributed 62 percent of the capital (calculated from Majewski 2000: 47-51; Reiser 1951, 76). Residents in numerous communities contributed to individual projects out of their own self interest. Their provincialism nevertheless helped create a coherent and integrated system.

A comparison of the Pittsburgh Pike and the National Road demonstrates the advantages of turnpike corporations over roads financed directly from government sources. Financed by the federal government, the National Road was built between Cumberland, Maryland, and Wheeling, West Virginia, and was then extended through the Midwest with the hope of reaching the Mississippi River. Although it never reached the Mississippi, the Federal Government nevertheless spent $6.8 million on the project (Goodrich 1960, 54, 65). The trans-Appalachian section of the National Road competed directly against the Pittsburgh Pike. From the records of two of the five companies that formed the Pittsburgh Pike, we estimate it cost $4,805 per mile to build (Majewski 2000, 47-51, Reiser 1951, 76). The Federal government, on the other hand, spent $13,455 per mile to complete the first 200 miles of the National Road (Fishlow 2000, 549). Besides costing much less, the Pittsburgh Pike was far better in quality. The toll gates along the Pittsburgh Pike provided a steady stream of revenue for repairs. The National Road, on the other hand, depended upon intermittent government outlays for basic maintenance, and the road quickly deteriorated. One army engineer in 1832 found “the road in a shocking condition, and every rod of it will require great repair; some of it now is almost impassable” (quoted in Searight, 60). Historians have found that travelers generally preferred to take the Pittsburgh Pike rather than the National Road.
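The per-mile cost comparison above is worth making explicit. A back-of-the-envelope sketch, using only the article's own per-mile estimates:

```python
# Cost comparison: National Road (federal) vs. Pittsburgh Pike (private).
# Per-mile figures are the article's estimates, not new data.
national_road_per_mile = 13_455   # dollars, first 200 miles
pittsburgh_pike_per_mile = 4_805  # dollars, estimated from company records

# Federal spending on the first 200 miles of the National Road
national_road_200_miles = 200 * national_road_per_mile
print(f"First 200 miles of the National Road: ${national_road_200_miles:,}")

# How many times more per mile the government road cost
ratio = national_road_per_mile / pittsburgh_pike_per_mile
print(f"National Road cost about {ratio:.1f}x the Pittsburgh Pike per mile")
```

By these figures, the federally financed road cost roughly 2.8 times as much per mile as the privately organized turnpike.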

The Plank Road Boom, 1847–1853

By the 1840s the major turnpikes were increasingly eclipsed by the (often state-subsidized) canals and railroads. Many toll roads reverted to free public use and quickly degenerated into miles of dust, mud and wheel-carved ruts. To link to the new and more powerful modes of communication, well-maintained, short-distance highways were still needed, but because governments became overextended in poor investments in canals, taxpayers were increasingly reluctant to fund internal improvements. Private entrepreneurs found the cost of the technologically most attractive road surfacing material (macadam, a compacted covering of crushed stones) prohibitively expensive at $3,500 per mile. Thus the ongoing need for new feeder roads spurred the search for innovation, and plank roads – toll roads surfaced with wooden planks – seemed to fit the need.

The plank road technique appears to have been introduced into Canada from Russia in 1840. It reached New York a few years later, after the village of Salina, near Syracuse, sent civil engineer George Geddes to Toronto to investigate. After two trips Geddes (whose father, James, was an engineer for the Erie and Champlain Canals, and an enthusiastic canal advocate) was convinced of the plank roads’ feasibility and became their great booster. Plank roads, he wrote in Scientific American (Geddes 1850a), could be built at an average cost of $1,500 – although $1,900 would have been more accurate (Majewski, Baer and Klein 1994, 109, fn15). Geddes also published a pamphlet containing an influential, if overly optimistic, estimate that Toronto’s road planks had lasted eight years (Geddes 1850b). Simplicity of design made plank roads even more attractive. Road builders put down two parallel lines of timbers four or five feet apart, which formed the “foundation” of the road. They then laid, at right angles, planks that were about eight feet long and three or four inches thick. Builders used no nails or glue to secure the planks – they were secured only by their own weight – but they did build ditches on each side of the road to ensure proper drainage (Klein and Majewski 1994, 42-43).

No less important than plank road economics and technology were the public policy changes that accompanied plank roads. Policymakers, perhaps aware that overly restrictive charters had hamstrung the first turnpike movement, were more permissive in the plank road era. Adjusting for deflation, toll rates were higher, toll gates were separated by shorter distances, and fewer local travelers were exempted from payment of tolls.

Although few today have heard of them, for a short time it seemed that plank roads might be one of the great innovations of the day. In just a few years, more than 1,000 companies built more than 10,000 miles of plank roads nationwide, including more than 3,500 miles in New York (Klein and Majewski 1994, Majewski, Baer, Klein 1993). According to one observer, plank roads, along with canals and railroads, were “the three great inscriptions graven on the earth by the hand of modern science, never to be obliterated, but to grow deeper and deeper” (Bogart 1851).

Except for most of New England, plank roads were chartered throughout the United States, especially in the top lumber-producing states of the Midwest and Mid-Atlantic, as shown in Table 4.

Table 4: Plank Road Incorporation by State

State Number
New York 335
Pennsylvania 315
Ohio 205
Wisconsin 130
Michigan 122
Illinois 88
North Carolina 54
Missouri 49
New Jersey 25
Georgia 16
Iowa 14
Vermont 14
Maryland 13
Connecticut 7
Massachusetts 1
Rhode Island, Maine 0
Total 1388

Notes: The figure for Ohio is through 1851; Pennsylvania, New Jersey, and Maryland are through 1857. Few plank roads were incorporated after 1857. In western states, some roads chartered as ordinary toll roads were built as plank roads, so the 1388 total is not to be taken as a total for the nation. For a complete description of the sources for this table, see Majewski, Baer, & Klein 1993: 110.

New York, the leading lumber state, had both the greatest number of plank road charters (350) and the largest value of lumber production ($13,126,000 in 1849 dollars). Plank roads were especially popular in rural dairy counties, where farmers needed quick and dependable transportation to urban markets (Majewski, Baer and Klein 1993).

The plank road and eastern turnpike episodes shared several features. Like the earlier turnpikes, investment in plank road companies came from local landowners, farmers, merchants, and professionals. Stock purchases were motivated less by the prospect of earning dividends than by the convenience and increased trade and development that the roads would bring. To many communities, plank roads held the hope of revitalization and the reversal (or slowing) of relative decline. But those hoping to attain these benefits once again were faced with a free-rider problem. Investors in plank roads, like the investors of the earlier turnpikes, were often motivated by esteem mechanisms – community allegiance and appreciation, reputational incentives, and their own conscience.

Although plank roads were smooth and sturdy, faring better in rain and snow than did dirt and gravel roads, they lasted only four or five years – not the eight to twelve years that promoters had claimed. Thus, the rush of construction ended suddenly by 1853, and by 1865 most companies had either switched to dirt and gravel surfaces or abandoned their road altogether.

Toll Roads in the Far West, 1850 to 1902

Unlike the areas served by the earlier turnpikes and plank roads, Colorado, Nevada, and California in the 1850s and 1860s lacked the settled communities and social networks that induced participation in community enterprise and improvement. Miners and the merchants who served them knew that the mining boom would not continue indefinitely and therefore seldom planted deep roots. Nor were the large farms that later populated California as ripe for civic engagement as the small farms of the East. Society in the early years of the West was not one where town meetings, door-to-door solicitations, and newspaper campaigns were likely to rally broad support for a road project. The lack of strong communities also meant that there would be few opponents to pressure the government for toll exemptions and otherwise hamper toll road operations. These conditions ensured that toll roads would tend to be more profit-oriented than the eastern turnpikes and plank road companies. Still, it is not clear whether on the whole the toll roads of the Far West were profitable.

The California toll road era began in 1850 after passage of general laws of incorporation. In 1853 new laws were passed reducing stock subscription requirements from $2,000 per mile to $300 per mile. The 1853 laws also delegated regulatory authority to the county governments. Counties were allowed “to set tolls at rates not to prevent a return of 20 percent,” but they did not interfere with the location of toll roads and usually looked favorably on the toll road companies. After passage of the 1853 laws, the number of toll road incorporations increased dramatically, peaking at nearly 40 new incorporations in 1866 alone. Companies were also created by special acts of the legislature. And some seem to have operated without formal incorporation at all. David and Linda Beito (1998, 75, 84) show that in Nevada many entrepreneurs had built and operated toll roads – or other basic infrastructure – before there was a State of Nevada, and some operated for years without any government authority at all.

All told, in the Golden State, approximately 414 toll road companies were initiated,5 resulting in at least 159 companies that successfully built and operated toll roads. Table 5 provides some rough numbers for toll roads in western states. The numbers presented there are minimums. For California and Nevada, the numbers probably only slightly underestimate the true totals; for the other states the figures are quite sketchy and might significantly underestimate true totals. Again, an abundance of testimony indicates that the private road companies were the serious road builders, in terms of quantity and quality (see the ten quotations at Klein and Yin 1996, 689-90).

Table 5: Rough Minimums on Toll Roads in the West

State Toll Road Incorporations Toll Roads Actually Built
California 414 159
Colorado 350 n.a.
Nevada n.a. 117
Texas 50 n.a.
Wyoming 11 n.a.
Oregon 10 n.a.

Sources: For California, Klein and Yin 1996: 681-82; for Nevada, Beito and Beito 1998: 74; for the other states, notes and correspondence in D. Klein’s files.

Table 6 attempts to justify guesses about the total number of toll road companies and the total toll road miles. The first three numbers in the “Incorporations” column come from Tables 2, 4, and 5. The estimates of success rates and average road length (in the third and fourth columns) are extrapolations from components that have been studied with more care. We have made these estimates conservative, in the sense of avoiding any overstatement of the extent of private road building. The ~ symbol has been used to keep the reader mindful of the fact that many of these numbers are estimates. The numbers in the right hand column have been rounded to the nearest 1000, so as to avoid any impression of accuracy. The “Other” row suggests a minimum to cover all the regions, periods, and road types not covered in Tables 2, 4, and 5. For example, the “Other” row would cover turnpikes in the East, South and Midwest after 1845 (Virginia’s turnpike boom came in the late 1840s and 1850s), and all turnpikes and plank roads in Indiana, whose county-based incorporation, it seems, has never been systematically researched. Ideally, not only would the numbers be more definite and complete, but there would be a weighting by years of operation. The “30,000 – 52,000 miles” should be read as a range for the sum of all the miles operated by any company at any time during the 100+ year period.

Table 6: A Rough Tally of the Private Toll Roads

Toll Road Movement                                   Incorporations   % Successful       Roads Built    Average Road    Toll Road
                                                                      in Building Road   and Operated   Length (miles)  Miles Operated

Turnpikes incorporated 1792 to 1845                  1,562            ~55%               ~859           ~18             ~15,000
Plank roads incorporated 1845 to roughly 1860        1,388            ~65%               ~902           ~10             ~9,000
Toll roads in the West, 1850 to roughly 1902         ~1,127           ~40%               ~450           ~15             ~7,000
Other (a rough guess)                                ~1,000           ~50%               ~500           ~16             ~8,000

Ranges for totals                                    5,000 – 5,600    48 – 60 percent    2,500 – 3,200  12 – 16         30,000 – 52,000

Sources: Those of Tables 2, 4, and 5, plus the research files of the authors.
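The arithmetic behind Table 6 can be sketched in a few lines: incorporations times estimated success rate approximates roads built, and roads built times average length approximates miles operated. The figures below are the table's own estimates; the grouping into a small script is ours, purely for illustration.

```python
# The arithmetic behind Table 6 (figures are the article's estimates):
# incorporations x success rate ~ roads built;
# roads built x average length ~ miles operated.
movements = {
    # movement: (incorporations, success rate, roads built, avg miles)
    "Turnpikes, 1792-1845":          (1562, 0.55, 859, 18),
    "Plank roads, 1845-c.1860":      (1388, 0.65, 902, 10),
    "Western toll roads, 1850-1902": (1127, 0.40, 450, 15),
    "Other (a rough guess)":         (1000, 0.50, 500, 16),
}

total_roads = 0
total_miles = 0
for name, (inc, rate, roads, length) in movements.items():
    assert abs(inc * rate - roads) < 2  # roads built ~ inc x rate
    miles = roads * length
    total_roads += roads
    total_miles += miles
    print(f"{name}: ~{roads} roads, ~{miles:,} miles")

# Both totals fall inside the table's ranges
# (2,500-3,200 roads; 30,000-52,000 miles).
print(f"Totals: ~{total_roads} roads, ~{total_miles:,} miles")
```

Summing the point estimates gives roughly 2,700 roads and 39,000 miles, near the middle of the ranges reported in the table.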

The End of Toll Roads in the Progressive Period

In 1880 many toll road companies nationwide continued to operate – probably in the range of 400 to 600 companies.6 But by 1920 the private toll road was almost entirely stamped out. From Maine to California, the laws and political attitudes from around 1880 onward moved against the handling of social affairs in ways that seemed informal, inexpert and unsystematic. Progressivism represented a burgeoning of more collectivist ideologies and policy reforms. Many progressive intellectuals took inspiration from European socialist doctrines. Although the politics of restraining corporate evils had a democratic and populist aspect, the bureaucratic spirit was highly managerial and hierarchical, intending to replicate the efficiency of large corporations in the new professional and scientific administration of government (Higgs 1987, 113-116, Ekirch 1967, 171-94).

One might point to the rise of the bicycle and later the automobile, which needed a harder and smoother surface, to explain the growth of America’s road network in the Progressive period. But such demand-side changes do not speak to the issues of road ownership and tolling. Automobiles achieved higher speeds, which made stopping to pay a toll more inconvenient, and that may have reinforced the anti-toll-road movement already underway before the automobile. Such developments figured into the history of road policy, but they did not provide a good reason for the policy movement against the toll roads. The following words of a county board of supervisors in New York in 1906 indicate a more general ideological bent against toll road companies:

[T]he ownership and operation of this road by a private corporation is contrary to public sentiment in this county, and [the] cause of good roads, which has received so much attention in this state in recent years, requires that this antiquated system should be abolished. … That public opinion throughout the state is strongly in favor of the abolition of toll roads is indicated by the fact that since the passage of the act of 1899, which permits counties to acquire these roads, the boards of supervisors of most of the counties where such roads have existed have availed themselves of its provisions and practically abolished the toll road.

Given such attitudes, it was no wonder that within the U. S. Department of Agriculture, the new Office of Road Inquiry began in 1893 to gather information, conduct research, and “educate” for better roads. The new bureaucracy opposed toll roads, and the Federal Highway Act of 1916 barred the use of tolls on highways receiving federal money (Seely 1987, 15, 79). Anti-toll-road sentiment became state and national policy.

Conclusions and Implications

Throughout the nineteenth century, the United States was notoriously “land rich” and “capital poor.” The viability of turnpikes shows how Americans devised institutions – in this case, toll-collecting corporations – that allowed them to invest precious capital in important public projects. What’s more, turnpikes paid little in direct dividends and stock appreciation, yet still attracted investment. Investors, of course, cared about long-term economic development, but that does not account for how turnpike organizers overcame the important public goods problem of getting individuals to buy turnpike stock. Esteem, social pressure, and other non-economic motivations influenced local residents to make investments that they knew would be unprofitable (at least in a direct sense) but would nevertheless help the entire community. On the other hand, the turnpike companies enjoyed the organizational clarity of stock ownership and residual returns. All companies faced the possibility of pressure from investors, who might have wanted to salvage something of their investment. Residual claimancy may have enhanced the viability of many projects, including communitarian projects undertaken primarily for use and esteem.

The combining of these two ingredients – the appeal of use and esteem, and the incentives and proprietary clarity of residual returns – is today severely undermined by the modern legal bifurcation of private initiative into “not-for-profit” and “for-profit” concerns. Not-for-profit corporations can appeal to use and esteem but cannot organize themselves to earn residual returns. For-profit corporations organize themselves for residual returns but cannot very well appeal to use and esteem. As already noted, prior to modern tax law and regulation, the old American toll roads were, relative to the British turnpike trusts, more, not less, use-and-esteem oriented by virtue of being structured to pay dividends rather than interest. Like the eighteenth-century British turnpike trusts, the twentieth-century American governmental toll projects financed (in part) by privately purchased bonds generally failed, relative to the nineteenth-century American company model, to draw on use and esteem motivations.

The turnpike experience of nineteenth-century America suggests that the stock/dividend company can also be a fruitful, efficient, and socially beneficial way to make losses and go on making losses. The success of turnpikes suggests that our modern sensibility of dividing enterprises between profit and non-profit – a distinction embedded in modern tax laws and regulations – unnecessarily impoverishes the imagination of economists and other policy makers. Without such strict legal and institutional bifurcation, our own modern society might better recognize the esteem in trade and the trade in esteem.

References

Baer, Christopher T., Daniel B. Klein, and John Majewski. “From Trunk to Branch: Toll Roads in New York, 1800-1860.” Essays in Economic and Business History XI (1993): 191-209.

Beito, David T., and Linda Royster Beito. “Rival Road Builders: Private Toll Roads in Nevada, 1852-1880.” Nevada Historical Society Quarterly 41 (1998): 71- 91.

Benson, Bruce. “Are Public Goods Really Common Pools? Consideration of the Evolution of Policing and Highways in England.” Economic Inquiry 32 no. 2 (1994).

Bogart, W. H. “First Plank Road.” Hunt’s Merchant Magazine (1851).

Brown, Richard D. “The Emergence of Voluntary Associations in Massachusetts, 1760-1830.” Journal of Voluntary Action Research (1973): 64-73.

Bodenhorn, Howard. A History of Banking in Antebellum America. New York: Cambridge University Press, 2000.

Cage, R. A. “The Lowden Empire: A Case Study of Wagon Roads in Northern California.” The Pacific Historian 28 (1984): 33-48.

Davis, Joseph S. Essays in the Earlier History of American Corporations. Cambridge: Harvard University Press, 1917.

DuBasky, Mayo. The Gist of Mencken: Quotations from America’s Critic. Metuchen, NJ: Scarecrow Press, 1990.

Durrenberger, J.A. Turnpikes: A Study of the Toll Road Movement in the Middle Atlantic States and Maryland. Valdosta, GA: Southern Stationery and Printing, 1931.

Ekirch, Arthur A., Jr. The Decline of American Liberalism. New York: Atheneum, 1967.

Fishlow, Albert. “Internal Transportation in the Nineteenth and Early Twentieth Centuries.” In The Cambridge Economic History of the United States, Vol. II: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman. New York: Cambridge University Press, 2000.

Geddes, George. Scientific American 5 (April 27, 1850).

Geddes, George. Observations upon Plank Roads. Syracuse: L.W. Hall, 1850.

Goodrich, Carter. “Public Spirit and American Improvements.” Proceedings of the American Philosophical Society, 92 (1948): 305-09.

Goodrich, Carter. Government Promotion of American Canals and Railroads, 1800-1890. New York: Columbia University Press, 1960.

Gunderson, Gerald. “Privatization and the Nineteenth-Century Turnpike.” Cato Journal 9 no. 1 (1989): 191-200.

Higgs, Robert. Crises and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Higgs, Robert. “Regime Uncertainty: Why the Great Depression Lasted So Long and Why Prosperity Resumed after the War.” Independent Review 1 no. 4 (1997): 561-600.

Kaplan, Michael D. “The Toll Road Building Career of Otto Mears, 1881-1887.” Colorado Magazine 52 (1975): 153-70.

Kirkland, Edward C. Men, Cities and Transportation: A Study in New England History, 1820-1900. Cambridge, MA.: Harvard University Press, 1948.

Klein, Daniel. “The Voluntary Provision of Public Goods? The Turnpike Companies of Early America.” Economic Inquiry (1990): 788-812. (Reprinted in The Voluntary City, edited by David Beito, Peter Gordon and Alexander Tabarrok. Ann Arbor: University of Michigan Press, 2002.)

Klein, Daniel B. and Gordon J. Fielding. “Private Toll Roads: Learning from the Nineteenth Century.” Transportation Quarterly 46, no. 3 (1992): 321-41.

Klein, Daniel B. and John Majewski. “Economy, Community and Law: The Turnpike Movement in New York, 1797-1845.” Law & Society Review 26, no. 3 (1992): 469-512.

Klein, Daniel B. and John Majewski. “Plank Road Fever in Antebellum America: New York State Origins.” New York History (1994): 39-65.

Klein, Daniel B. and Chi Yin. “Use, Esteem, and Profit in Voluntary Provision: Toll Roads in California, 1850-1902.” Economic Inquiry (1996): 678-92.

Kresge, David T. and Paul O. Roberts. Techniques of Transport Planning, Volume Two: Systems Analysis and Simulation Models. Washington DC: Brookings Institution, 1971.

Lane, Wheaton J. From Indian Trail to Iron Horse: Travel and Transportation in New Jersey, 1620-1860. Princeton: Princeton University Press, 1939.

Majewski, John. A House Dividing: Economic Development in Pennsylvania and Virginia before the Civil War. New York: Cambridge University Press, 2000.

Majewski, John. “The Booster Spirit and ‘Mid-Atlantic’ Distinctiveness: Shareholding in Pennsylvania Banking and Transportation Corporations, 1800 to 1840.” Manuscript, Department of History, UC Santa Barbara, 2001.

Majewski, John, Christopher Baer and Daniel B. Klein. “Responding to Relative Decline: The Plank Road Boom of Antebellum New York.” Journal of Economic History 53, no. 1 (1993): 106-122.

Nash, Christopher A. “Integration of Public Transport: An Economic Assessment.” In Bus Deregulation and Privatisation: An International Perspective, edited by J.S. Dodgson and N. Topham. Brookfield, VT: Avebury, 1988.

Nash, Gerald D. State Government and Economic Development: A History of Administrative Policies in California, 1849-1933. Berkeley: University of California Press (Institute of Governmental Studies), 1964.

Pawson, Eric. Transport and Economy: The Turnpike Roads of Eighteenth Century Britain. London: Academic Press, 1977.

Peyton, Billy Joe. “Survey and Building the [National] Road.” In The National Road, edited by Karl Raitz. Baltimore: Johns Hopkins University Press, 1996.

Poole, Robert W. “Private Toll Roads.” In Privatizing Transportation Systems, edited by Simon Hakim, Paul Seidenstat, and Gary W. Bowman. Westport, CT: Praeger, 1996.

Reiser, Catherine Elizabeth. Pittsburgh’s Commercial Development, 1800-1850. Harrisburg: Pennsylvania Historical and Museum Commission, 1951.

Ridgway, Arthur. “The Mission of Colorado Toll Roads.” Colorado Magazine 9 (1932): 161-169.

Roth, Gabriel. Roads in a Market Economy. Aldershot, England: Avebury Technical, 1996.

Searight, Thomas B. The Old Pike: A History of the National Road. Uniontown, PA: Thomas Searight, 1894.

Seely, Bruce E. Building the American Highway System: Engineers as Policy Makers. Philadelphia: Temple University Press, 1987.

Taylor, George R. The Transportation Revolution, 1815-1860. New York: Rinehart, 1951.

Thwaites, Reuben Gold. Early Western Travels, 1746-1846. Cleveland: A. H. Clark, 1907.

U. S. Agency for International Development. “A History of Foreign Assistance.” On the U.S. A.I.D. Website. Posted April 3, 2002. Accessed January 20, 2003.

Wood, Frederick J. The Turnpikes of New England and Evolution of the Same through England, Virginia, and Maryland. Boston: Marshall Jones, 1919.

1 Daniel Klein, Department of Economics, Santa Clara University, Santa Clara, CA, 95053, and Ratio Institute, Stockholm, Sweden; Email: Dklein@scu.edu.

John Majewski, Department of History, University of California, Santa Barbara, 93106; Email: Majewski@history.ucsb.edu.

2 The term “turnpike” comes from Britain, referring to a long staff (or pike) that acted as a swinging barrier or tollgate. In nineteenth-century America, “turnpike” specifically meant a toll road with a surface of gravel and earth, as opposed to “plank roads,” which were toll roads surfaced with wooden planks. Later in the century, all such roads were typically called simply “toll roads.”

3 For a discussion of returns and expectations, see Klein 1990: 791-95.

4 See Klein 1990: 803-808, Klein and Majewski 1994: 56-61.

5 The 414 figure consists of 222 companies organized under the general law, 102 chartered by the legislature, and 90 companies that we learned of from county records, local histories, and various other sources.

6 Durrenberger (1931: 164) notes that in 1911 there were 108 turnpikes operating in Pennsylvania alone.

Citation: Klein, Daniel and John Majewski. “Turnpikes and Toll Roads in Nineteenth-Century America”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/turnpikes-and-toll-roads-in-nineteenth-century-america/

History of the U.S. Telegraph Industry

Tomas Nonnenmacher, Allegheny College

Introduction

The electric telegraph was one of the first telecommunications technologies of the industrial age. Its immediate predecessors were homing pigeons, visual networks, the Pony Express, and railroads. By transmitting information quickly over long distances, the telegraph facilitated the growth of the railroads, consolidated financial and commodity markets, and reduced information costs within and between firms. This entry focuses on the industrial organization of the telegraph industry from its inception through its demise, and on the industry’s impact on the American economy.

The Development of the Telegraph

The telegraph was similar to many other inventions of the nineteenth century. It replaced an existing technology, dramatically reduced costs, was monopolized by a single firm, and ultimately was displaced by a newer technology. Like most radical new technologies, the telecommunications revolution of the mid-1800s was not a revolution at all, but rather consisted of many inventions and innovations in both technology and industrial organization. This section is broken into four parts, each reviewing an era of telegraphy: precursors to the electric telegraph, early industrial organization of the industry, Western Union’s dominance, and the decline of the industry.

Precursors to the Electric Telegraph

Webster’s definition of a telegraph is “an apparatus for communicating at a distance by coded signals.” The earliest telegraph systems consisted of smoke signals, drums, and mirrors used to reflect sunlight. In order for these systems to work, both parties (the sender and the receiver) needed a method of interpreting the signals. Henry Wadsworth Longfellow’s poem recounting Paul Revere’s ride (“One if by land, two if by sea, and I on the opposite shore will be”) gives an example of a simple system. The first extensive telegraph network was the visual telegraph. In 1791 the Frenchman Claude Chappe used a visual network (which consisted of a telescope, a clock, a codebook, and black and white panels) to send a message ten miles. He called his invention the télégraphe, or far writer. Chappe refined and expanded his network, and by 1799 his telegraph consisted of a network of towers with mechanical arms spread across France. The position of the arms was interpreted using a codebook with over 8,000 entries.

Technological Advances

Due to technological difficulties, the electric telegraph could not at first compete with the visual telegraph. The basic science of the electric telegraph is to send an electric current through a wire. Breaking the current in a particular pattern denotes letters or phrases. The Morse code, named after Samuel Morse, is still used today. For instance, the code for SOS (... --- ...) is a well-known call for help. Two elements had to be perfected before an electric telegraph could work: a means of sending the signal (generating and storing electricity) and a means of receiving it (recording the breaks in the current).
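As a toy illustration of coding by current breaks (a sketch, not a model of the period apparatus), a letter-to-pattern lookup suffices. The two-letter table below is an illustrative fragment of the full Morse alphabet:

```python
# A toy sketch of Morse-style coding (illustrative, not the period
# apparatus): each letter maps to a pattern of short and long breaks
# in the current. Only two letters of the alphabet are included here.
MORSE = {
    "S": "...",  # three short signals
    "O": "---",  # three long signals
}

def encode(message: str) -> str:
    """Encode an uppercase message, separating letters with spaces."""
    return " ".join(MORSE[letter] for letter in message)

print(encode("SOS"))  # ... --- ...
```

Both sender and receiver must share the same table, just as Chappe's operators shared a codebook.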

The science behind the telegraph dates back at least as far as Roger Bacon’s (1220-1292) experiments in magnetism. Numerous small steps in the science of electricity and magnetism followed. Important contributions include those of Giambattista della Porta (1558), William Gilbert (1603), Stephen Gray (1729), William Watson (1747), Pieter van Musschenbroek (1754), Luigi Galvani (1786), Alessandro Volta (1800), André-Marie Ampère (1820), William Sturgeon (1825), and Joseph Henry (1829). A much longer list could be made, but the point is that no single person can be credited with developing the necessary technology of the telegraph.

1830-1866: Development and Consolidation of the Electric Telegraph Industry

In 1832, Samuel Morse returned to the United States from his artistic studies in Europe. While discussing electricity with fellow passengers, Morse conceived of the idea of a single-wire electric telegraph. No one until this time had Morse’s zeal for the applicability of electromagnetism to telecommunications or his conviction of its eventual profitability. Morse obtained a patent in the United States in 1838 but split his patent right to gain the support of influential partners. He obtained a $30,000 grant from Congress in 1843 to build an experimental line between Baltimore and Washington. The first public message over Morse’s line (“What hath God wrought?”) echoed the first message over Chappe’s system (“If you succeed, you will bask in glory”). Both indicated the inventors’ convictions about the importance of their systems.

Morse and His Partners

Morse realized early on that he was incapable of handling the business end of the telegraph and hired Amos Kendall, a former Postmaster General and a member of Andrew Jackson’s “Kitchen Cabinet,” to manage his business affairs. By 1848 Morse had consolidated the partnership to four members. Kendall managed the three-quarters of the patent belonging to Morse, Leonard Gale, and Alfred Vail. Gale and Vail had helped Morse develop the telegraph’s technology. F.O.J. Smith, a former Maine Representative whose help was instrumental in obtaining the government grant, decided to retain direct control of his portion of the patent right. The partnership agreement was vague, and led to discord between Kendall and Smith. Eventually the partners split the patent right geographically. Smith controlled New England, New York, and the upper-Midwest, and Morse controlled the rest of the country.

The availability of financing influenced the early industrial organization of the telegraph. Initially, Morse tried to sell his patent to the government, Kendall, Smith, and several groups of businessmen, but all attempts were unsuccessful. Kendall then attempted to generate interest in building a unified system across the country. This too failed, leaving Kendall to sell the patent right piecemeal to regional interests. These lines covered the most potentially profitable routes, emanating from New York and reaching Washington, Buffalo, Boston and New Orleans. Morse also licensed feeder lines to supply main lines with business.

Rival Patents

Royal House and Alexander Bain introduced rival patents in 1846 and 1849. Entrepreneurs constructed competing lines on the major eastern routes using the new patents. The House device needed a higher quality wire and more insulation as it was a more precise instrument. It had a keyboard at one end and printed out letters at the other. At its peak, it could send messages considerably faster than Morse’s technique. The Bain device was similar to Morse’s, except that instead of creating dots and dashes, it discolored a piece of chemically treated paper by sending an electric current through it. Neither competitor had success initially, leading Kendall to underestimate their eventual impact on the market.

By 1851, ten separate firms ran lines into New York City. There were three competing lines between New York and Philadelphia, three between New York and Boston, and four between New York and Buffalo. In addition, two lines operated between Philadelphia and Pittsburgh, two between Buffalo and Chicago, and three between points in the Midwest and New Orleans, and entrepreneurs erected lines between many Midwestern cities. In all, in 1851 the Bureau of the Census reported 75 companies with 21,147 miles of wire.

Multilateral Oligopolies

The telegraph markets in 1850 were multilateral oligopolies. The term “multilateral” means that the production process extended in several directions. Oligopolies are markets in which a small number of firms strategically interact. Telegraph firms competed against rivals on the same route, but sought alliances with firms with which they connected. For example, four firms (New York, Albany & Buffalo; New York State Printing; Merchants’ State; and New York and Erie) competed on the route between New York City and Buffalo. Rates fell dramatically (by more than 50%) as new firms entered, so this market was quite competitive for a while. But each of these firms sought to create an alliance with connecting firms, such as those with lines from New York City to Boston or Washington. Increased business from exchanging messages meant increased profitability.

Mistransmission Problems

Quality competition was also fierce, with the line that erected the best infrastructure and supplied the fastest service usually dominating other, less capable firms. Messages could easily be garbled, and given the predominately business-related use of the telegraph, a garbled message was often worse than no message at all. A message sent from Boston to St. Louis could have traveled over the lines of five firms. Due to the complexity of the production process, messages were also often lost, with no firm taking responsibility for the mistransmission. This lack of responsibility gave firms an incentive to provide a lower quality service compared to an integrated network. These issues ultimately contributed to the consolidation of the industry.

Horizontal and System Integration

Horizontal integration (integration between two competing firms) and system integration (integration between two connecting firms) occurred in the telegraph industry during different periods. System integration occurred between 1846 and 1852, as main lines acquired most of the feeder lines in the country. In 1852 the Supreme Court declared the Bain telegraph an infringement on Morse’s patent, and Bain lines merged with Morse lines across the country. Between 1853 and 1857 regional monopolies formed and signed the “Treaty of Six Nations,” a pooling agreement between the six largest regional firms. During this phase the industry experienced both horizontal and system integration. By the end of the period, most remaining firms were regional monopolists that controlled several large cities and owned both the House and the Morse patents. Figure 1 shows the locations of these firms.

Figure 1: Treaty of Six Nations

Source: Thompson, p. 315

The final phase of integration occurred between 1857 and 1866. In this period the pool members consolidated into a national monopoly. By 1864 only Western Union and the American Telegraph Company remained of the “Six Nations.” The United States Telegraph Company entered the field by consolidating smaller, independent firms in the early 1860s, and operated in the territory of both the American Telegraph Company and Western Union. By 1866 Western Union absorbed its last two competitors and reached its position of market dominance.

Efficiency versus Market Power

Horizontal and system integration had two causes: efficiency and market power. Horizontal integration created economies of scale that could be realized from placing all of the wires between two cities on the same route or all the offices in a city in the same location. This consolidation reduced the cost of maintaining multiple lines. The reduction in competition due to horizontal integration also allowed firms to charge a higher price and earn monopoly profits. The efficiency gain from system integration was better control of messages travelling long distances. With responsibility for the message placed clearly in the hands of one firm, messages were transmitted with more care. System integration also created monopoly power, since to compete with a large incumbent system, a new entrant would have to also create a large infrastructure.

1866-1900: Western Union’s Dominance

The period from 1866 through the turn of the century was the apex of Western Union’s power. Yearly messages sent over its lines increased from 5.8 million in 1867 to 63.2 million in 1900. Over the same period, transmission rates fell from an average of $1.09 to 30 cents per message. Even with these lower prices, roughly 30 to 40 cents of every dollar of revenue were net profit for the company. Western Union faced three threats during this period: increased government regulation, new entrants into the field of telegraphy, and new competition from the telephone. The last two were the most important to the company’s future profitability.

Western Union Fends off Regulation

Western Union was the first nationwide industrial monopoly, with over 90% of the market share and dominance in every state. The states and the federal government responded to this market power. State regulation was largely futile given the interstate character of the industry. On the federal level, bills were introduced in almost every session of Congress calling for either regulation of or government entry into the industry. Western Union’s lobby was able to block almost any legislation. The few regulations that were passed either helped Western Union maintain its control over the market or were never enforced.

Western Union’s Smaller Rivals

Western Union’s first rival was the Atlantic and Pacific Telegraph Company, a conglomeration of new and merged lines created by Jay Gould in 1874. Gould sought to wrest control of Western Union from the Vanderbilts, and he succeeded in 1881 when the two firms merged. A more permanent rival appeared in the 1880s in the form of the Postal Telegraph Company. John Mackay, who had already made a fortune at the Comstock Lode, headed this firm. Mackay did what many of his telegraph predecessors did in the 1850s: create a network by buying out existing bankrupt firms and merging them into a system with large enough economies of scale to compete with Western Union. Postal never challenged Western Union’s market dominance, but it did control between 10 and 20 percent of the market at various times.

The Threat from the Telephone

Western Union’s greatest threat came from a new technology, the telephone. Alexander Graham Bell patented the telephone in 1876, initially referring to it as a “talking telegraph.” Bell offered Western Union the patent for the telephone for $100,000, but the company declined to purchase it. Western Union could have easily gained control of AT&T in the 1890s, but management decided that higher dividends were more important than expansion. The telephone was used in the 1880s only for local calling, but with the development in the 1890s of “long lines,” the telephone offered increased competition to the telegraph. In 1900, local calls accounted for 97% of the telephone’s business, and it was not until the twentieth century that the telephone fully displaced the telegraph.

1900-1988: Increased Competition and Decline

The twentieth century saw the continued rise of the telephone and the decline of the telegraph. Telegraphy continued to have a niche in inexpensive long-distance and international communication, including teletypewriters, Telex, and stock tickers. As shown in Table 1, after 1900 the rise in telegraph traffic slowed, and after 1930 the number of messages sent began to decline.

Table 1: Messages Handled by the Telegraph Network: 1870-1970

Date Messages Handled Date Messages Handled
1870 9,158,000 1930 211,971,000
1880 29,216,000 1940 191,645,000
1890 55,879,000 1945 236,169,000
1900 63,168,000 1950 178,904,000
1910 75,135,000 1960 124,319,000
1920 155,884,000 1970 69,679,000

Source: Historical Statistics.
Notes: Western Union messages 1870-1910; all telegraph companies, 1920-1970.

AT&T Obtains Western Union, Then Gives It Up

In 1909, AT&T gained control of Western Union by purchasing 30% of its stock. In many ways, the companies were heading in opposite directions. AT&T was expanding rapidly, while Western Union was content to reap handsome profits and issue large dividends but not reinvest in itself. Under AT&T’s ownership, Western Union was revitalized, but the two companies separated in 1913, succumbing to pressure from the Department of Justice. In 1911, the Department of Justice successfully used the Sherman Antitrust Act to force a breakup of Standard Oil. This success made the threat of antitrust action against AT&T very credible. Both Postal Telegraph and the independent telephone companies wishing to interconnect with AT&T lobbied for government regulation. In order to forestall any such government action, AT&T issued the “Kingsbury Commitment,” a unilateral commitment to divest itself of Western Union and allow independent telephone firms to interconnect.

Decline of the Telegraph

The telegraph flourished in the 1920s, but the Great Depression hit the industry hard, and it never recovered to its previous position. AT&T introduced the teletypewriter exchange service in 1931. The teletypewriter and the Telex allowed customers to install a machine on their premises that would send and receive messages directly. In 1938, AT&T had 18%, Postal 15% and Western Union 64% of telegraph traffic. In 1945, 236 million domestic messages were sent, generating $182 million in revenues. This was the most messages sent in a year over the telegraph network in the United States. By that time, Western Union had incorporated over 540 telegraph and cable companies into its system. The last important merger was between Western Union and Postal, which occurred in 1945. This final merger was not enough to stop the continuing rise of the telephone or the telegraph’s decline. Already in 1945, AT&T’s revenues and transmission dwarfed those of Western Union. AT&T made $1.9 billion in yearly revenues by transmitting 89.4 million local phone calls and 4.9 million toll calls daily. Table 2 shows the increasing competitiveness of telephone rates with telegraph rates.

Table 2: Telegraph and Telephone Rates from New York City to Chicago: 1850-1970

Date   Telegraph*   Telephone**
1850   $1.55        n.a.
1870   1.00         n.a.
1890   .40          n.a.
1902   n.a.         5.45
1919   .60          4.65
1950   .75          1.50
1960   1.45         1.45
1970   2.25         1.05

Source: Historical Statistics.
Notes: * Beginning 1960, for 15-word message; prior to 1960, for 10-word message. ** Rates for station-to-station, daytime, 3-minute call.

The Effects of the Telegraph

The travel time from New York City to Cleveland in 1800 was two weeks, with another four weeks necessary to reach Chicago. By 1830, those travel times had been cut in half, and by 1860 it took only two days to reach Chicago from New York City. By telegraph, however, news could travel between those two cities almost instantaneously. This section examines three areas in which the telegraph affected economic growth: railroads, high-throughput firms, and financial markets.

Telegraphs and Railroads

The telegraph and the railroad were natural partners in commerce. The telegraph needed the right of way that the railroads provided and the railroads needed the telegraph to coordinate the arrival and departure of trains. These synergies were not immediately recognized. Only in 1851 did railways start to use telegraphy. Prior to that, telegraph wires strung along the tracks were seen as a nuisance, occasionally sagging and causing accidents and even fatalities.

The greatest savings from the telegraph came from the continued use of single-tracked railroad lines. Prior to 1851, the U.S. system was single-tracked, and trains ran on a time-interval system. Two types of accidents could occur: trains running in opposite directions could collide head-on, and trains running in the same direction could run into one another. The potential for accidents required that railroad managers be very careful in dispatching trains. One way to reduce the number of accidents would have been to double-track the system. A second, better way was to use the telegraph.

Double-tracking was a good alternative, but not a perfect one. Double-tracked lines would eliminate head-on collisions but not same-direction ones, which still had to be prevented with a timing system requiring an interval between departing trains, so accidents remained possible. With the telegraph, station managers knew exactly which trains were on the tracks under their supervision. Double-tracking the U.S. rail system in 1893 has been estimated to cost $957 million, whereas Western Union’s book capitalization in 1893 was only $123 million, making the telegraph by far the cheaper solution. Of course, the railroads could have used a system like Chappe’s visual telegraph to coordinate traffic, but such a system would have been less reliable and could not have handled the same volume of traffic.
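The cost comparison above amounts to a simple calculation; here is a minimal sketch using the two figures quoted in the text (both in 1893 dollars):

```python
# Comparing the two coordination options using the figures cited above.
double_track_cost = 957_000_000  # estimated cost of double-tracking the U.S. rail system, 1893
western_union_cap = 123_000_000  # Western Union's book capitalization, 1893

# Even valuing the entire telegraph network at Western Union's full
# capitalization, it cost a small fraction of the double-tracking alternative.
ratio = double_track_cost / western_union_cap
print(f"Double-tracking would have cost about {ratio:.1f} times "
      f"Western Union's capitalization")
```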

Telegraph and Perishable Products Industries

Other industries with high inventory turnover also benefited from the telegraph, particularly those whose product was perishable, such as meatpacking and the distribution of fruits and vegetables. The growth of both of these industries was facilitated by the introduction of the refrigerated car in 1874, while the telegraph provided the exact control of shipments that perishables required. For instance, refrigeration and the telegraph allowed for the slaughter and disassembly of livestock in the giant stockyards of Chicago, Kansas City, St. Louis and Omaha; beef could then be shipped east at half the cost of shipping live cattle. The centralization of the stockyards also created tremendous amounts of by-products that could be processed into glue, tallow, dye, fertilizer, feed, brushes, false teeth, gelatin, oleomargarine, and many other useful products.

Telegraph and Financial Markets

The telegraph undoubtedly had a major impact on the structure of financial markets in the United States. New York became the financial center of the country, setting prices for a variety of commodities and financial instruments. Among these were beef, corn, wheat, stocks and bonds. As the telegraph spread, so too did the centralization of prices. For instance, in 1846, wheat and corn prices in Buffalo lagged four days behind those in New York City. In 1848, the two markets were linked telegraphically and prices were set simultaneously.

The centralization of stock prices helped make New York the financial capital of the United States. Over the course of the nineteenth century, hundreds of exchanges appeared and then disappeared across the country. Few of them remained, with only those in New York, Philadelphia, Boston, Chicago and San Francisco achieving any permanence. By 1910, 90 percent of all bond and two-thirds of all stock trades occurred on the New York Stock Exchange.

Centralization of the market created much more liquidity for stockholders. As the number of potential traders increased, so too did the ability to find a buyer or seller of a financial instrument. This increase in liquidity may have led to an increase in the total amount invested in the market, therefore leading to higher levels of investment and economic growth. Centralization may also have led to the development of certain financial institutions that could not have been developed otherwise. Although difficult to quantify, these aspects of centralization certainly had a positive effect on economic growth.

In some respects, we may tend to overestimate the telegraph’s influence on the economy. The rapid distribution of information may have had a collective action problem associated with it. If no one else in Buffalo has a piece of information, such as the change in the price of wheat in New York City, then there is a large private incentive to discover that piece of information quickly. But once everyone has the information, no one is made better off. A great deal of effort may have been spent on an endeavor that, from society’s perspective, did not increase overall efficiency. The centralization in New York also increased the gains from other wealth-neutral or wealth-reducing activities, such as speculation and market manipulation. Higher volumes of trading increased the payoff from the successful manipulation of a market, yet did not increase society’s wealth.

Conclusion

The telegraph accelerated the speed of business transactions during the late nineteenth century and contributed to the industrialization of the United States. Like most industries, it eventually faced new competition that proved its downfall: the telephone was easier and faster to use, and the telegraph ultimately lost its cost advantages. In 1988, Western Union divested itself of its telegraph infrastructure and focused on financial services, such as money orders. A Western Union telegram is still available, currently costing $9.95 for 250 words.

Telegraph Timeline

1837 Cooke and Wheatstone patent telegraph in England.
1838 Morse’s Electro-Magnetic Telegraph patent approved.
1844 First message sent between Washington and Baltimore.
1846 First commercial telegraph line completed. The Magnetic Telegraph Company’s lines ran from New York to Washington.
1846 House’s Printing Telegraph patent approved.
1848 Associated Press formed to pool telegraph traffic.
1849 Bain’s Electro-Chemical patent approved.
1851 Hiram Sibley and associates incorporate New York and Mississippi Valley Printing Telegraph Company. Later became Western Union.
1851 Telegraph first used to coordinate train departures.
1857 Treaty of Six Nations is signed, creating a national cartel.
1859 First transatlantic cable is laid from Newfoundland to Valentia, Ireland. Fails after 23 days, having been used to send a total of 4,359 words. Total cost of laying the line was $1.2 million.
1861 First Transcontinental telegraph completed.
1866 First successful transatlantic telegraph cable laid.
1866 Western Union merges with major remaining rivals.
1867 Stock ticker service inaugurated.
1870 Western Union introduces the money order service.
1876 Alexander Graham Bell patents the telephone.
1909 AT&T gains control of Western Union. Divests itself of Western Union in 1913.
1924 AT&T offers Teletype system.
1926 Inauguration of the direct stock ticker circuit from New York to San Francisco.
1930 High-speed tickers can print 500 words per minute.
1945 Western Union and Postal Telegraph Company merge.
1962 Western Union offers Telex for international teleprinting.
1974 Western Union places Westar satellite in operation.
1988 Western Union Telegraph Company reorganized as Western Union Corporation. The telecommunications assets were divested and Western Union focuses on money transfers and loan services.

References

Blondheim, Menahem. News over the Wires. Cambridge: Harvard University Press, 1994.

Brock, Gerald. The Telecommunications Industry. Cambridge: Harvard University Press, 1981.

DuBoff, Richard. “Business Demand and the Development of the Telegraph in the United States, 1844-1860.” Business History Review 54 (1980): 461-477.

Field, Alexander. “The Telegraphic Transmission of Financial Asset Prices and Orders to Trade: Implications for Economic Growth, Trading Volume, and Securities Market Regulation.” Research in Economic History 18 (1998).

Field, Alexander. “French Optical Telegraphy, 1793-1855: Hardware, Software, Administration.” Technology and Culture 35 (1994): 315-47.

Field, Alexander. “The Magnetic Telegraph, Price and Quantity Data, and the New Management of Capital.” Journal of Economic History 52 (1992): 401-13.

Gabler, Edwin. The American Telegrapher: A Social History 1860-1900. New Brunswick: Rutgers University Press, 1988.

Goldin, H. H. “Governmental Policy and the Domestic Telegraph Industry.” Journal of Economic History 7 (1947): 53-68.

Israel, Paul. From Machine Shop to Industrial Laboratory. Baltimore: Johns Hopkins, 1992.

Lefferts, Marshall. “The Electric Telegraph: its Influence and Geographical Distribution.” American Geographical and Statistical Society Bulletin, II (1857).

Nonnenmacher, Tomas. “State Promotion and Regulation of the Telegraph Industry, 1845-1860.” Journal of Economic History 61 (2001).

Oslin, George. The Story of Telecommunications. Macon: Mercer University Press, 1992.

Reid, James. The Telegraph in America. New York: Polhemus, 1886.

Thompson, Robert. Wiring a Continent. Princeton: Princeton University Press, 1947.

U.S. Bureau of the Census. Report of the Superintendent of the Census for December 1, 1852, Washington: Robert Armstrong, 1853.

U.S. Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970: Bicentennial Edition, Washington: GPO, 1976.

Yates, JoAnne. “The Telegraph’s Effect on Nineteenth Century Markets and Firms.” Business and Economic History 15 (1986):149-63.

Citation: Nonnenmacher, Tomas. “History of the U.S. Telegraph Industry”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/history-of-the-u-s-telegraph-industry/

The History of the International Tea Market, 1850-1945

Bishnupriya Gupta, University of Warwick

Demand for Tea

“Tea is better than wine for it leadeth not to intoxication, neither does it cause a man to say foolish things and repent there of in his sober moments. It is better than water for it does not carry disease; neither does it act like poison as water does when it contains foul and rotten matter.”

This ancient saying from China gained widespread acceptance in Europe during the course of the eighteenth century. Tea displaced beer in Britain and in the Netherlands. In the fight against alcoholism, the temperance movement of the nineteenth century recommended tea as an alternative. Evidence from contemporary accounts suggests that a tradesman’s family in Britain in 1749 spent three shillings a week on bread and four shillings on tea and sugar. But tea was still too expensive to become the common man’s drink. It was only in the nineteenth century that tea became a common beverage for British households: per capita consumption rose from 1.1 pounds per year in 1820 to 5.9 pounds in 1900 and 9.6 pounds in 1931, by which time the British market had reached saturation. In the United States and continental Europe, advertising campaigns encouraging coffee drinkers to switch to tea had limited success (see Table 1).

Table 1: Consumption of Tea: International Market Share

Year    Share in World Consumption (%)
        United Kingdom   Rest of Europe   Russia/USSR   North America*   Major Producing Countries
1910    39.2             4.2              21.0          18.3             4.4
1920    56.4             6.9              n/a           18.1             6.6
1928    48.4             6.7              7.1           14.3             4.1
1936    53.5             6.3              3.1           14.2             9.3

* Including the West Indies.

Source: International Tea Committee, Bulletin of Statistics, 1946.

A small proportion of a household’s total budget is spent on tea. At lower levels of income, tea consumption responds to changes in income. The income elasticity of demand (i.e. the percentage change in consumption due to a one percent change in income) for tea in India in the 1950s was estimated to be 1.1. But at higher levels of income, the income elasticity of demand for tea tends to be low. For the UK in the interwar years, Richard Stone estimated the price elasticity of demand (i.e. the percentage change in consumption due to a one percent change in price) to be -0.32, while the income elasticity was only 0.04. These figures suggest that the market in developed countries would not expand significantly with rising incomes. Furthermore, a decline in price would not have a large effect on the quantity demanded.
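The elasticity figures above work as simple multipliers; the sketch below illustrates this. The elasticity estimates are those quoted in the text, while the 10 percent changes in income and price are hypothetical:

```python
def pct_change_in_demand(elasticity, pct_change_in_driver):
    """Approximate percentage change in quantity demanded, given an
    elasticity and a percentage change in income or price."""
    return elasticity * pct_change_in_driver

# India, 1950s: income elasticity of about 1.1 (from the text).
# A hypothetical 10% rise in income raises tea demand by roughly 11%.
print(pct_change_in_demand(1.1, 10))

# Interwar UK: income elasticity 0.04, price elasticity -0.32 (Stone's estimates).
# A hypothetical 10% rise in income barely moves demand (~0.4%),
# and a 10% price cut raises demand only ~3.2%.
print(pct_change_in_demand(0.04, 10))
print(pct_change_in_demand(-0.32, -10))
```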

In the producer countries, which were less economically developed, the domestic market showed significant expansion from the 1930s onwards. The Indian market increased from 10 million pounds in 1905 to 18 million pounds in 1910, but this was only a small proportion of the British consumption of 287 million pounds in 1910. India had a large population and thus the potential of a large market, but there was little effort to expand the domestic market until the 1930s. As British demand stagnated, large sums were spent on advertising campaigns in India. The industry set up demonstrations in tea-making and sold cups of tea at railway stations and local fairs. During the 1920s the Indian market increased by 15 million pounds, to 50 million pounds, and consumption doubled in the 1930s.

Supply of Tea

China had been the major supplier of tea to Britain. Tea was cultivated in small plots of land by peasant farmers, whose output proved inadequate to meet the surge in demand. The slow increase in production, together with China’s political turn inward after 1840, led to a search for alternative production centers. Plantations appeared to be an attractive alternative. British experiments with the tea plant in south Asia were successful and led to the development of plantations in Eastern India and in Ceylon (Sri Lanka) from the middle of the nineteenth century. The tea companies attracted investment from Britain and were managed by British agents. By 1860, more than fifty companies were producing tea in Eastern India. Tea companies in India and Ceylon were registered either in London or in Calcutta and Colombo and run by British agents on the basis of long-term agency contracts. The British agents had local counterparts who were responsible for day-to-day operations. A typical managing agent owned shares in several companies and was responsible for their management. Consequently, despite the presence of a few hundred companies in India and Ceylon by the early twentieth century, decision making in the industry was in the hands of a few British agents. In 1879, over 70 percent of the teas sold in London were from China. By 1900 China’s share had declined dramatically to 10 percent, and the black teas from India and Ceylon constituted the bulk of the market. Table 2 shows the market share of the main exporting countries between 1928 and 1940.

Table 2: Production of Tea: International Market Share

Year Share in World Exports (%)
India Ceylon Java and Sumatra
1928 39.0 26.0 16.7
1936 37.1 25.8 18.1
1940 37.5 26.0 18.4

Source: International Tea Committee, Bulletin of Statistics, 1946.

There are two types of tea, both produced from the same plant: leaves are steamed and dried to produce green tea, while black tea undergoes fermentation and further oxidization. India and Ceylon produced black tea; China produced both black and green tea. Tea prices were determined at auctions; London was the most important center, with Calcutta, Colombo and Amsterdam the other main centers. Prices depend on the quality of tea, and regional differences in soil, climate and elevation account for differences in quality. The slopes of the Himalayas in and around Darjeeling and the highland areas on the island of Ceylon produce some of the finest teas in the world and command high prices. However, average tea prices depend on the supply costs of common teas. The tea crop is harvested all through the year in the tropical areas of Ceylon and Java; in Eastern India the onset of winter brings an end to harvesting. The output of tea consists of leaves plucked from the tea bush. Fine plucking reduces quantity but improves quality, while coarse plucking increases output at the cost of quality. In the short run output can be varied by regulating plucking; in the long run output increases through expanded cultivation, though tea plants take six to seven years to mature. When prices are high there is an incentive to pluck coarse to increase output in the short run. This disproportionately increases the quantity of common teas, leading to a sharp decline in the average price.

Fluctuations in Prices

In the first half of the twentieth century, the tea industry saw wide fluctuations in prices. During the First World War, the British government undertook purchases of tea to avoid a shortage in supply, guaranteeing a market for the producers. The boom in prices in the early 1920s encouraged an increase in acreage under tea, not just in India and Ceylon but also in Java and Sumatra, territories in the Dutch East Indies. In India, it encouraged planting and the establishment of new plantations in the hills of southern India. The increased acreage was followed by an increase in output with a lag of a few years. As with many other agricultural commodities, the international market showed signs of excess supply towards the end of the 1920s, and stocks accumulated. The collapse of tea prices in 1929 was not simply a result of the decline in demand with the onset of the Depression; excess supply had become a feature of the industry following the postwar expansion in acreage.

Figure 1: Average Tea Prices

Source: International Tea Committee, Bulletin of Statistics, 1946.

The Tea Cartel

During this period price support schemes were put in place for several agricultural commodities by forming collusive agreements, or cartels. As primary products have low price elasticities of demand, output restriction increases the profits of the producers and is in their collective interest. Early attempts at collusion in tea had not been successful, but as prices tumbled, the tea producers’ associations in the three major producing countries set up the International Tea Agreement in 1930. The Tea Associations in India, Ceylon and the Dutch East Indies agreed to reduce output to prevent a further fall in prices. This was a voluntary agreement, under which each tea company belonging to the Tea Associations in the producer countries signed up to cut back output. There were many firms in the industry; however, as the firms were managed by a few agents who made the decisions about how much to produce, the effective firm size was larger, which increased the viability of a collusive agreement. Each producer in a cartel has an incentive to cheat and free ride on the compliance of other firms. But when firms face the threat that the agreement will be abandoned and prices will decline if participants do not comply, the agreement can be sustained. Economic theory predicts that collusion can be sustained by the threat of price wars: any sign of noncompliance, such as falling prices, leads every firm to abandon the agreement and increase output, bringing about a further fall in prices. Collusion can be sustained more easily in markets where output is produced by few firms.
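The sustainability logic described above is the standard repeated-game argument, which can be sketched as a simple calculation. All payoff figures and discount factors below are hypothetical illustrations, not estimates for the tea industry:

```python
def collusion_sustainable(profit_collude, profit_deviate, profit_punish, discount):
    """Grim-trigger test: a firm stays in the cartel if the discounted value
    of colluding forever exceeds a one-period gain from cheating followed by
    permanent reversion to the low 'price war' payoff."""
    value_collude = profit_collude / (1 - discount)
    value_deviate = profit_deviate + discount * profit_punish / (1 - discount)
    return value_collude >= value_deviate

# Hypothetical per-period payoffs (cartel profit 10, cheating profit 15,
# price-war profit 2): collusion holds only if firms are patient enough.
print(collusion_sustainable(10, 15, 2, discount=0.9))  # patient firms
print(collusion_sustainable(10, 15, 2, discount=0.3))  # impatient firms
```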

The International Tea Agreement was abandoned in 1931 and 1932. When the figures were added up, it emerged that the promised reduction by Java and Sumatra in the Dutch East Indies had not been made. Any reduction made by the European estates had been counterbalanced by increased production on the part of the native producers, and the agreement fell apart. The Tea Associations in India and Ceylon blamed Java and Sumatra for the failure to restrict output in accordance with the scheme of 1930. The conflict of interest between large producing firms and smaller producers over what each could gain from the cartel prevented a continuation of the collusive arrangement: producers in the country with the smallest market share were not keen to be part of it. But India and Ceylon continued to negotiate for an agreement rather than start a price war. Negotiation and bargaining, rather than price wars, sustained collusion in the tea market; contrary to what theory suggests, there is no evidence of a price war. As prices declined further, producers in Java and Sumatra became more willing to join such an agreement, and a second International Tea Agreement was signed in 1933. All the participating countries agreed to reduce exports by 15 percent from the maximum attained in any of the years 1929-32. Export quotas were assigned to individual firms, but the quotas could be traded. The agreement covered a period of five years. Legislation adopted in the participating countries made the export quotas legally binding and limited expansion of acreage to a maximum of 0.5 percent per year. The International Tea Agreement of 1933 was a successful case of cartelization. It lasted right up to the Second World War, when conditions in the market changed, and it led to an immediate upward movement in prices (see Figure 1). As Table 3 shows, most firms in India and Ceylon reduced output in response to the agreement.

There is no doubt that the success of the agreement depended on the legislation passed in the producing countries in 1933; there had been no legal backing for the previous agreement. The agreement froze the relative market shares of the producers and prevented new firms from entering the tea market. The International Tea Committee appears to have had a clearly thought-out strategy and to have acted with considerable foresight. Export of tea seeds from the three participating countries was prohibited, and the restriction was eased only when Kenya, Uganda, Tanganyika and Nyasaland agreed to limit new planting.
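The quota rule of the 1933 agreement, as described above, amounts to a small calculation; here is a minimal sketch applied to a hypothetical firm (the export figures are invented for illustration):

```python
def export_quota(annual_exports_1929_to_1932, cut=0.15):
    """Quota under the 1933 agreement: exports capped at the best year of
    1929-32, reduced by the agreed 15 percent."""
    return (1 - cut) * max(annual_exports_1929_to_1932)

# Hypothetical firm exporting 400, 450, 420 and 380 (thousand lb) over
# 1929-32: its quota is 85% of the best year, 450.
print(export_quota([400, 450, 420, 380]))
```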

The Economist commented in August 1933:

“Producers of commodities like wheat and sugar may envy the facility with which the tea growing industry obtained a 30 percent rise in average tea prices and a 90 percent enhancement of tea share values — all within the space of a little more than six months.”

Table 3
Compliance with the Tea Agreements
Percent of Firms in Region Reducing Output

                               India    Ceylon
1930  Reduced output            86%      76%
      Reduced output by 10%     56%      17%
1933  Reduced output            89%      90%
      Reduced output by 15%     52%      51%

Note: A reduction of 10 percent is used as the expected reduction for 1930; the negotiated level that year varied between 3 percent and 15 percent depending on the quality of tea. In 1933 exports were to be reduced by 15 percent. Output reductions may be expected to be smaller than the export cuts because firms sell a share of their output in the domestic market.

Sources: Mincing Lane Tea & Rubber Brokers’ Association, A Guide to Investors and Investors’ India Year Books.

Further Readings:

Griffiths, Percival. The History of the Indian Tea Industry. London: Weidenfeld and Nicolson, 1967.

Gupta, Bishnupriya. “Collusion in the Indian Tea Industry in the Great Depression: An Analysis of Panel Data.” Explorations in Economic History 34, no. 2 (1997): 155-173.

Gupta, Bishnupriya. “The International Tea Cartel during the Great Depression, 1929-33.” Journal of Economic History 61, no.1 (2001): 144-159.

Macfarlane, Alan, and Iris Macfarlane. Green Gold: The Empire of Tea. London: Ebury Press, 2003.

Sarkar, Goutam. The World Tea Economy. Delhi: Oxford University Press, 1972.

Wickizer, Vernon D. Coffee, Tea and Cocoa: An Economic and Political Analysis. Stanford: Stanford University Press, 1951.

Citation: Gupta, Bishnupriya. “The History of the International Tea Market, 1850-1945.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-history-of-the-international-tea-market-1850-1945/

The Economic History of Taiwan

Kelly Olds, National Taiwan University

Geography

Taiwan is a sub-tropical island, roughly 180 miles long, located less than 100 miles offshore of China’s Fujian province. Most of the island is covered with rugged mountains that rise to over 13,000 feet. These mountains rise directly out of the ocean along the eastern shore facing the Pacific, so this shore and the central parts of the island are sparsely populated. Throughout its history, most of Taiwan’s people have lived on the Western Coastal Plain that faces China. This plain is crossed by east-west rivers that occasionally bring floods of water down from the mountains, creating broad, boulder-strewn flood plains. Until modern times, these rivers made north-south travel costly and limited the island’s economic integration. The most important river is the Chuo Shuei-Hsi (between present-day Changhua and Yunlin counties), which has been an important economic and cultural divide.

Aboriginal Economy

Little is known about Taiwan prior to the seventeenth century. When the Dutch came to the island in 1622, they found a population of roughly 70,000 Austronesian aborigines, at least 1,000 Chinese and a smaller number of Japanese. The aborigine women practiced subsistence agriculture while aborigine men harvested deer for export. The Chinese and Japanese population was primarily male and transient. Some of the Chinese were fishermen who congregated at the mouths of Taiwanese rivers, but most Chinese and Japanese were merchants. Chinese merchants usually lived in aborigine villages and acted as middlemen, exporting deerskins, primarily to Japan, and importing salt and various manufactures. The harbor alongside which the Dutch built their first fort (in present-day Tainan City) was already an established place of rendezvous for Chinese and Japanese trade when the Dutch arrived.

Taiwan under the Dutch and Koxinga

The Dutch took control of most of Taiwan in a series of campaigns that lasted from the mid-1630s to the mid-1640s. The Dutch taxed the deerskin trade, hired aborigine men as soldiers and tried to introduce new forms of agriculture, but otherwise interfered little with the aborigine economy. The Tainan harbor grew in importance as an international entrepot. The most important change in the economy was an influx of about 35,000 Chinese to the island. These Chinese developed land, mainly in southern Taiwan, and specialized in growing rice and sugar. Sugar became Taiwan’s primary export. One of the most important Chinese investors in the Taiwanese economy was the leader of the Chinese community in Dutch Batavia (on Java) and during this period the Chinese economy on Taiwan bore a marked resemblance to the Batavian economy.

Koxinga, a Chinese-Japanese sea lord, drove the Dutch off the island in 1661. Under the rule of Koxinga and his heirs (1661-1683), Chinese settlement continued to spread in southern Taiwan. On the one hand, Chinese civilians made the crossing to flee the chaos that accompanied the Ming-Qing transition. On the other hand, Koxinga and his heirs brought over soldiers who were required to clear land and farm when they were not being used in wars. The Chinese population probably rose to about 120,000. Taiwan’s exports changed little, but the Tainan harbor lost importance as a center of international trade, as much of this trade now passed through Xiamen (Amoy), a port across the strait in Fujian that was also under the control of Koxinga and his heirs.

Taiwan under Qing Rule

The Qing dynasty defeated Koxinga’s grandson and took control of Taiwan in 1683. Taiwan remained part of the Chinese empire until it ceded the island to Japan in 1895. The Qing government originally saw control of Taiwan as an economic burden that had to be borne in order to keep the island out of the hands of pirates. In the first year of occupation, the Qing government shipped as many Chinese residents as possible back to the mainland. The island lost perhaps one-third of its Chinese population. Travel to Taiwan by all but male migrant workers was illegal until 1732, and this prohibition was reinstated off and on until it was finally permanently rescinded in 1788. However, the island’s Chinese population grew about two percent per year in the century following the Qing takeover. Both illegal immigration and natural increase were important components of this growth. The Qing government feared the expense of Chinese-aborigine confrontations and tried futilely to restrain Chinese settlement and keep the populations apart. Chinese pioneers, however, were constantly pushing the bounds of Chinese settlement northward and eastward, and the aborigines were forced to adapt. Some groups permanently leased their land to Chinese settlers. Others learned Chinese farming skills and eventually assimilated or else moved toward the mountains where they continued hunting, learned to raise cattle or served as Qing soldiers. Due to the lack of Chinese women, intermarriage was also common.

Individual entrepreneurs or land companies usually organized Chinese pioneering enterprises. These people obtained land from aborigines or the government, recruited settlers, supplied loans to the settlers and sometimes invested in irrigation projects. Large land developers often lived in the village during the early years but moved to a city after the village was established. They remained responsible for paying the land tax and they received “large rents” from the settlers amounting to 10-15 percent of the expected harvest. However, they did not retain control of land usage or have any say in land sales or rental. The “large rents” were, in effect, a tax paid to a tax farmer who shared this revenue with the government. The payers of the large rents were the true owners who controlled the land. These people often chose to rent out their property to tenants who did the actual farming and paid a “small rent” of about 50 percent of the expected harvest.

Chinese pioneers made extensive use of written contracts but government enforcement of contracts was minimal. In the pioneers’ homeland across the strait, protecting property and enforcing agreements was usually a function of the lineage. Being part of a strong lineage was crucial to economic success and violent struggles among lineages were a problem endemic to south China. Taiwanese settlers had crossed the strait as individuals or in small groups and lacked strong lineages. Like other Chinese immigrants throughout the world, they created numerous voluntary associations based on one’s place of residence, occupation, place of origin, surname, etc. These organizations substituted for lineages in protecting property and enforcing contracts, and violent conflict among these associations over land and water rights was frequent. Due to property rights problems, land sales contracts often included the signature of not only the owner, but also his family and neighbors agreeing to the transfer. The difficulty of seizing collateral led to the common use of “conditional sales” as a means of borrowing money. Under the terms of a conditional sale, the lender immediately took control of the borrower’s property and retained the right to the property’s production in lieu of rent until the borrower paid back the loan. Since the borrower could wait an indefinite period of time before repaying the loan, this led to an awkward situation in which the person who controlled the land did not have permanent ownership and had no incentive to invest in land improvements.

Taiwan prospered during a sugar boom in the early eighteenth century, but afterwards its sugar industry had a difficult time keeping up with advances in foreign production. Until the Japanese occupation in 1895, Taiwan’s sugar farms and sugar mills remained small-scale operations. The sugar industry was centered in the south of the island and throughout the nineteenth century, the southern population showed little growth and may have declined. By the end of the nineteenth century, the south of the island was poorer than the north of the island and its population was shorter in stature and had a lower life expectancy. The north of the island was better suited to rice production and the northern economy seems to have grown robustly. As the Chinese population moved into the foothills of the northern mountains in the mid-nineteenth century, they began growing tea, which added to the north’s economic vitality and became the island’s leading export during the last quarter of the nineteenth century. The tea industry’s most successful product was oolong tea produced primarily for the U.S. market.

During the last years of the Qing dynasty’s rule in Taiwan, Taiwan was made a full province of China and some attempts were made to modernize the island by carrying out a land survey and building infrastructure. Taiwan’s first railroad was constructed linking several cities in the north.

Taiwan under Japanese Rule

The Japanese gained control of Taiwan in 1895 after the Sino-Japanese War. After several years of suppressing both Chinese resistance and banditry, the Japanese began to modernize the island’s economy. A railroad was constructed running the length of the island and modern roads and bridges were built. A modern land survey was carried out. Large rents were eliminated and those receiving these rents were compensated with bonds. Ownership of approximately twenty percent of the land could not be established to Japanese satisfaction and was confiscated. Much of this land was given to Japanese conglomerates that wanted land for sugarcane. Several banks were established and reorganized irrigation districts began borrowing money to make improvements. Since many Japanese soldiers had died of disease, improving the island’s sanitation and disease environment was also a top priority.

Under the Japanese, Taiwan remained an agricultural economy. Although sugarcane continued to be grown mainly on family farms, sugar processing was modernized and sugar once again became Taiwan’s leading export. During the early years of modernization, native Taiwanese sugar refiners remained important but, largely due to government policy, Japanese refiners holding regional monopsony power came to control the industry. Taiwanese sugar remained uncompetitive on the international market, but was sold duty free within the protected Japanese market. Rice, also bound for the protected Japanese market, displaced tea to become the second major export crop. Altogether, almost half of Taiwan’s agricultural production was being exported in the 1930s. After 1935, the government began encouraging investment in non-agricultural industry on the island. The war that followed was a time of destruction and economic collapse.

Growth in Taiwan’s per-capita economic product during this colonial period roughly kept up with that of Japan. Population also grew quickly as health improved and death rates fell. The native Taiwanese population’s per-capita consumption grew about one percent per year, slower than the growth in consumption in Japan, but greater than the growth in China. Better property rights enforcement, population growth, transportation improvements and protected agricultural markets caused the value of land to increase quickly, but real wage rates increased little. Most Taiwanese farmers did own some land but since the poor were more dependent on wages, income inequality increased.

Taiwan Under Nationalist Rule

Taiwan’s economy recovered from the war more slowly than Japan’s. The Chinese Nationalist government took control of Taiwan in 1945 and lost control of its original territory on the mainland in 1949. The Japanese population, which had grown to over five percent of Taiwan’s population (and a much greater proportion of Taiwan’s urban population), was shipped back to Japan, and the new government confiscated Japanese property, creating large public corporations. The late 1940s were a period of civil war in China, and Taiwan also experienced violence and hyperinflation. In 1949, soldiers and refugees from the mainland flooded onto the island, increasing Taiwan’s population by about twenty percent. Mainlanders tended to settle in cities and were predominant in the public sector.

In the 1950s, Taiwan was dependent on American aid, which allowed its government to maintain a large military without overburdening the economy. Taiwan’s agricultural economy had been left in shambles by the events of the 1940s. It had lost its protected Japanese markets, and the low-interest-rate formal-sector loans to which even tenant farmers had access in the 1930s were no longer available. With American help, the government implemented a land reform program. This program (1) sold public land to tenant farmers, (2) limited rent to 37.5% of the expected harvest and (3) severely restricted the size of individual landholdings, forcing landlords to sell most of their land to the government in exchange for stocks and bonds valued at 2.5 times the land’s expected annual harvest. This land was then redistributed. The land reform increased equality among the farm population and strengthened government control of the countryside. Its justice and its effect on agricultural investment and productivity are still hotly debated.
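One way to gauge the compensation terms described above is to express them in years of the maximum permissible rent. The sketch below is an implication of the stated terms (a 37.5% rent cap, compensation of 2.5 expected annual harvests), not a figure from the source:

```python
# Implication of the land reform terms described above:
# rent was capped at 37.5% of the expected annual harvest, and landlords
# were compensated at 2.5 times the expected annual harvest.
rent_cap = 0.375          # maximum rent, as a share of the expected harvest
compensation = 2.5        # compensation, in expected annual harvests

# Compensation expressed in years of the maximum permissible rent.
years_of_rent = compensation / rent_cap
print(f"Compensation equaled about {years_of_rent:.1f} years of capped rent")
```

On these terms a landlord received the equivalent of roughly six to seven years of the capped rent, one concrete sense in which the reform's fairness could be, and still is, debated.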

High-speed growth accompanied by quick industrialization began in the late 1950s. Taiwan became known for its cheap manufactured exports produced by small enterprises bound together by flexible sub-contracting networks. Taiwan’s postwar industrialization is usually attributed to (1) the decline in land per capita, (2) the change in export markets and (3) government policy. Between 1940 and 1962, Taiwan’s population increased at an annual rate of slightly over three percent, cutting the amount of land per capita in half. Taiwan’s agricultural exports had been sold tariff-free at higher-than-world-market prices in pre-war Japan, while Taiwan’s only important pre-war manufactured export, imitation Panama hats, faced a 25% tariff in the U.S., their primary market. After the war, agricultural products generally faced the greatest trade barriers. As for government policy, Taiwan went through a period of import substitution policy in the 1950s, followed by promotion of manufactured exports in the 1960s and 1970s. Subsidies were available for certain manufactures under both regimes. During the import substitution regime, domestic manufactures were protected both by tariffs and by multiple overvalued exchange rates. Under the later export promotion regime, export processing zones were set up in which privileges were extended to businesses producing goods that would not be sold domestically.
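The halving of land per capita follows directly from compound population growth over these 22 years. A quick check, taking 3.2 percent as an illustrative reading of “slightly over three percent” and total land area as fixed:

```python
# Check: population growth of "slightly over three percent" per year
# between 1940 and 1962 roughly doubles the population, halving land
# per capita (total land area taken as fixed).
years = 1962 - 1940              # 22 years
rate = 0.032                     # illustrative: "slightly over three percent"
growth_factor = (1 + rate) ** years
land_per_capita = 1 / growth_factor   # relative to the 1940 level
print(f"Population multiplied by {growth_factor:.2f}; "
      f"land per capita falls to {land_per_capita:.2f} of its 1940 level")
```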

Historical research into the “Taiwanese miracle” has focused on government policy and its effects, but statistical data for the first few post-war decades is poor and the overall effect of the various government policies is unclear. During the 1960s and 1970s, real GDP grew about 10% (7% per capita) each year. Most of this growth can be explained by increases in factors of production. Savings rates began rising after the currency was stabilized and reached almost 30% by 1970. Meanwhile, primary education, in which 70% of Taiwanese children had participated under the Japanese, became universal, and students in higher education increased many-fold. Although recent research has emphasized the importance of factor growth in the Asian “miracle economies,” studies show that productivity also grew substantially in Taiwan.

Further Reading

Chang, Han-Yu and Ramon Myers. “Japanese Colonial Development Policy in Taiwan, 1895-1906.” Journal of Asian Studies 22, no. 4 (August 1963): 433-450.

Davidson, James. The Island of Formosa: Past and Present. London: MacMillan & Company, 1903.

Fei, John, et al. Growth with Equity: The Taiwan Case. New York: Oxford University Press, 1979.

Gardella, Robert. Harvesting Mountains: Fujian and the China Tea Trade, 1757-1937. Berkeley: University of California Press, 1994.

Ho, Samuel. Economic Development of Taiwan 1860-1970. New Haven: Yale University Press, 1978.

Ho, Yhi-Min. Agricultural Development of Taiwan, 1903-1960. Nashville: Vanderbilt University Press, 1966.

Ka, Chih-Ming. Japanese Colonialism in Taiwan: Land Tenure, Development, and Dependency, 1895-1945. Boulder: Westview Press, 1995.

Knapp, Ronald, editor. China’s Island Frontier: Studies in the Historical Geography of Taiwan. Honolulu: University Press of Hawaii, 1980.

Koo, Hui-Wen and Chun-Chieh Wang. “Indexed Pricing: Sugarcane Price Guarantees in Colonial Taiwan, 1930-1940.” Journal of Economic History 59, no. 4 (December 1999): 912-926.

Li, Kuo-Ting. The Evolution of Policy Behind Taiwan’s Development Success. New Haven: Yale University Press, 1988.

Mazumdar, Sucheta. Sugar and Society in China: Peasants, Technology, and the World Market. Cambridge, MA: Harvard University Asia Center, 1998.

Meskill, Johanna. A Chinese Pioneer Family: The Lins of Wu-feng, Taiwan, 1729-1895. Princeton, NJ: Princeton University Press, 1979.

Ng, Chin-Keong. Trade and Society: The Amoy Network on the China Coast 1683-1735. Singapore: Singapore University Press, 1983.

Olds, Kelly. “The Risk Premium Differential in Japanese-Era Taiwan and Its Effect.” Journal of Institutional and Theoretical Economics 158, no. 3 (September 2002): 441-463.

Olds, Kelly. “The Biological Standard of Living in Taiwan under Japanese Occupation.” Economics and Human Biology 1 (2003): 1-20.

Olds, Kelly and Ruey-Hua Liu. “Economic Cooperation in Nineteenth-Century Taiwan.” Journal of Institutional and Theoretical Economics 156, no. 2 (June 2000): 404-430.

Rubinstein, Murray, editor. Taiwan: A New History. Armonk, NY: M.E. Sharpe, 1999.

Shepherd, John. Statecraft and Political Economy on the Taiwan Frontier, 1600-1800. Stanford: Stanford University Press, 1993.

Citation: Olds, Kelly. “The Economic History of Taiwan”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-taiwan/

Sweden – Economic Growth and Structural Change, 1800-2000

Lennart Schön, Lund University

This article presents an overview of Swedish economic growth performance in international and statistical perspective, and an account of major trends in Swedish economic development during the nineteenth and twentieth centuries.1

Modern economic growth in Sweden took off in the middle of the nineteenth century and in international comparative terms Sweden has been rather successful during the past 150 years. This is largely thanks to the transformation of the economy and society from agrarian to industrial. Sweden is a small economy that has been open to foreign influences and highly dependent upon the world economy. Thus, successive structural changes have put their imprint upon modern economic growth.

Swedish Growth in International Perspective

The century-long period from the 1870s to the 1970s comprises the most successful part of Swedish industrialization and growth. On a per capita basis the Japanese economy performed equally well (see Table 1). The neighboring Scandinavian countries also grew rapidly, but at a somewhat slower rate than Sweden. Sweden clearly outpaced the rest of industrial Europe and the U.S. Growth in the world economy as a whole, as measured by Maddison, was slower still.

Table 1 Annual Economic Growth Rates per Capita in Industrial Nations and the World Economy, 1871-2005

Period Sweden Rest of Nordic Countries Rest of Western Europe United States Japan World Economy
1871/1875-1971/1975 2.4 2.0 1.7 1.8 2.4 1.5
1971/1975-2001/2005 1.7 2.2 1.9 2.0 2.2 1.6

Note: Rest of Nordic countries = Denmark, Finland and Norway. Rest of Western Europe = Austria, Belgium, Britain, France, Germany, Italy, the Netherlands, and Switzerland.

Source: Maddison (2006); Krantz/Schön (forthcoming 2007); World Bank, World Development Indicator 2000; Groningen Growth and Development Centre, www.ggdc.com.

The Swedish advance in a global perspective is illustrated in Figure 1. In the mid-nineteenth century the Swedish average income level was close to the average global level (as measured by Maddison). In a European perspective Sweden was a rather poor country. By the 1970s, however, the Swedish income level was more than three times higher than the global average and among the highest in Europe.

Figure 1
Swedish GDP per Capita in Relation to World GDP per Capita, 1870-2004
(Nine year moving averages)
Sources: Maddison (2006); Krantz/Schön (forthcoming 2007).

Note: The annual variation in world production between Maddison’s benchmarks of 1870, 1913 and 1950 is estimated from his annual country series.

To some extent this was a catch-up story. Sweden was able to take advantage of technological and organizational advances made in Western Europe and North America. Furthermore, resource-rich Scandinavian countries such as Sweden and Finland had been rather disadvantaged as long as agriculture was the main source of income. The shift to industry expanded the resource base, and industrial development – directed to a growing domestic market but even more to a widening world market – became the main lever of growth from the late nineteenth century.

Catch-up is not the whole story, though. In many industrial areas Swedish companies took a position at the technological frontier from an early point in time. Thus, in certain sectors there was also forging ahead,2 quickening the pace of structural change in the industrializing economy. Furthermore, during a century of fairly rapid growth new conditions have arisen that have required profound adaptation and a renewal of entrepreneurial activity as well as of economic policies.

The slowdown in Swedish growth from the 1970s may be considered in this perspective. While in most other countries growth from the 1970s fell only in relation to the golden post-war age, Swedish growth fell clearly below its long-run historical trend, to a very low level internationally. The 1970s certainly meant the end of a number of successful growth trajectories of the industrial society. At the same time new growth forces appeared with the electronic revolution, as well as with the advance of a more service-based economy. This structural change may have hit the Swedish economy harder than most other industrial capitalist economies. Sweden was forced into a transformation of its industrial economy and of its political economy in the 1970s and the 1980s that was more profound than in most other Western economies.

A Statistical Overview, 1800-2000

Swedish economic development since 1800 may be divided into six periods with different growth trends, as well as different composition of growth forces.

Table 2 Annual Growth Rates in per Capita Production, Total Investments, Foreign Trade and Population in Sweden, 1800-2000

Period Per capita GDP Investments Foreign Trade Population
1800-1840 0.6 0.3 0.7 0.8
1840-1870 1.2 3.0 4.6 1.0
1870-1910 1.7 3.0 3.3 0.6
1910-1950 2.2 4.2 2.0 0.5
1950-1975 3.6 5.5 6.5 0.6
1975-2000 1.4 2.1 4.3 0.4
1800-2000 1.9 3.4 3.8 0.7

Source: Krantz/Schön (forthcoming 2007).

In the first decades of the nineteenth century the agricultural sector dominated and growth was slow in all aspects but population. Still there was per capita growth, although to some extent this was a recovery from the low levels of the Napoleonic Wars. The acceleration during the next period, around the mid-nineteenth century, is marked in all aspects. Investments and foreign trade became very dynamic ingredients with the onset of industrialization, and they remained so during the following periods as well. Up to the 1970s per capita growth rates increased for each successive period. In an international perspective it is most notable that per capita growth rates increased even in the interwar period, despite the slowdown in foreign trade. The interwar period is crucial for the long-run relative success of Swedish economic growth. The decisive culmination in the post-war period, with high growth rates in investments and foreign trade, stands out, as does the deceleration in all aspects in the late twentieth century.

An analysis in a traditional growth accounting framework gives a long-term pattern with certain periodic similarities (see Table 3). Total factor productivity growth increased over time up to the 1970s, only to fall back to its long-run level in the last decades. This deceleration in productivity growth may be looked upon either as a failure of the “Swedish Model” to accommodate new growth forces or as another case of the “productivity paradox” accompanying the information technology revolution.3

Table 3 Total Factor Productivity (TFP) Growth and Relative Contribution of Capital, Labor and TFP to GDP Growth in Sweden, 1840-2000

Period TFP Growth Capital Labor TFP
1840-1870 0.4 55 27 18
1870-1910 0.7 50 18 32
1910-1950 1.0 39 24 37
1950-1975 2.1 45 7 48
1975-2000 1.0 44 1 55
1840-2000 1.1 45 16 39

Source: See Table 2.

In terms of contribution to overall growth, TFP increased its share in every period. The TFP share was low in the 1840s, but there was a marked increase with the onset of modern industrialization from the 1870s. In relative terms TFP reached its highest level so far from the 1970s, indicating an increasing role of human capital, technology and knowledge in economic growth. The role of capital accumulation was markedly more pronounced in early industrialization, with the build-up of a modern infrastructure and with urbanization, but capital still retained much of its importance during the twentieth century. Its contribution to growth during the post-war Golden Age, with very high levels of material investments, was significant. At the same time TFP growth culminated, driven by positive structural shifts as well as by increased knowledge intensity complementary to the investments. Labor has in quantitative terms progressively reduced its role in economic growth. One should observe, however, the relatively large contribution of labor to Swedish economic growth during the interwar period. This was largely due to demographic factors and to the employment situation, which are commented upon further below.
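The TFP shares in Table 3 can be cross-checked against the GDP growth rates in Table 4 (last column): under standard growth accounting, TFP's share of growth is simply TFP growth divided by GDP growth. A quick sketch using the published figures (small discrepancies reflect rounding in the underlying rates):

```python
# Cross-check Table 3's TFP shares against TFP growth (Table 3, first
# column) and GDP growth (Table 4, last column). Under standard growth
# accounting, TFP's share of growth = TFP growth / GDP growth.
rates = {
    # period: (TFP growth %, GDP growth %, published TFP share %)
    "1840-1870": (0.4, 2.3, 18),
    "1870-1910": (0.7, 2.3, 32),
    "1910-1950": (1.0, 2.7, 37),
    "1950-1975": (2.1, 4.3, 48),
    "1975-2000": (1.0, 1.8, 55),
}
implied_shares = {p: 100 * tfp / gdp for p, (tfp, gdp, _) in rates.items()}
for period, share in implied_shares.items():
    print(f"{period}: implied TFP share {share:.0f}% "
          f"(published: {rates[period][2]}%)")
```

For the Golden Age period 1950-1975, for example, 2.1 / 4.3 gives roughly 49 percent, close to the 48 percent published in Table 3.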

In the first decades of the nineteenth century, growth was still led by the primary production of agriculture, accompanied by services and transport. Secondary production in manufacturing and building was, on the contrary, very stagnant. From the 1840s the industrial sector accelerated, increasingly supported by transport and communications, as well as by private services. The sectoral shift from agriculture to industry became more pronounced at the turn of the twentieth century when industry and transportation boomed, while agricultural growth decelerated into subsequent stagnation. In the post-war period the volume of services, both private and public, increased strongly, although still not outpacing industry. From the 1970s the focus shifted to private services and to transport and communications, indicating fundamental new prerequisites of growth.

Table 4 Growth Rates of Industrial Sectors, 1800-2000

Period Agriculture Industry and Handicraft Transport and Communications Building Private Services Public Services GDP
1800-1840 1.5 0.3 1.1 -0.1 1.4 1.5 1.3
1840-1870 2.1 3.7 1.8 2.4 2.7 0.8 2.3
1870-1910 1.0 5.0 3.9 1.3 2.7 1.0 2.3
1910-1950 0.0 3.5 4.9 1.4 2.2 2.2 2.7
1950-1975 0.4 5.1 4.4 3.8 4.3 4.0 4.3
1975-2000 -0.4 1.9 2.6 -0.8 2.2 0.2 1.8
1800-2000 0.9 3.8 3.7 1.8 2.7 1.7 2.6

Source: See Table 2.

Note: Private services are exclusive of dwelling services.

Growth and Transformation in the Agricultural Society of the Early Nineteenth Century

During the first half of the nineteenth century the agricultural sector and the rural society dominated the Swedish economy. Thus, more than three-quarters of the population were occupied in agriculture while roughly 90 percent lived in the countryside. Many non-agrarian activities such as the iron industry, the saw mill industry and many crafts as well as domestic, religious and military services were performed in rural areas. Although growth was slow, a number of structural and institutional changes occurred that paved the way for future modernization.

Most important was the transformation of agriculture. From the late eighteenth century commercialization of the primary sector intensified. Particularly during the Napoleonic Wars, the domestic market for foodstuffs widened. The population increase, in combination with a temporary decrease in imports, stimulated enclosures and reclamation of land, the introduction of new crops and new methods and, above all, a greater degree of market orientation. In the decades after the war the traditional Swedish trade deficit in grain even shifted to a surplus, with increasing exports of oats, primarily to Britain.

Concomitant with the agricultural transformation were a number of infrastructural and institutional changes. Domestic transportation costs were reduced through investments in canals and roads. Trade of agricultural goods was liberalized, reducing transaction costs and integrating the domestic market even further. Trading companies became more effective in attracting agricultural surpluses for more distant markets. In support of the agricultural sector new means of information were introduced by, for example, agricultural societies that published periodicals on innovative methods and on market trends. Mortgage societies were established to supply agriculture with long term capital for investments that in turn intensified the commercialization of production.

All these elements meant a profound institutional change in the sense that the price mechanism became much more effective in directing human behavior. Furthermore, they fostered a greater interest in information and in its main instrument, literacy. Traditionally, popular literacy had been upheld by the church and was mainly devoted to knowledge of the primary Lutheran texts. In the new economic environment, literacy was secularized and transformed into a more functional literacy, marked by the advent of schools for public education in the 1840s.

The Breakthrough of Modern Economic Growth in the Mid-nineteenth Century

In the decades around the middle of the nineteenth century new dynamic forces appeared that accelerated growth. Most notably foreign trade expanded by leaps and bounds in the 1850s and 1860s. With new export sectors, industrial investments increased. Furthermore, railways became the most prominent component of a new infrastructure and with this construction a new component in Swedish growth was introduced, heavy capital imports.

The upswing in industrial growth in Western Europe during the 1850s, in combination with demand induced through the Crimean War, led to a particularly strong expansion in Swedish exports with sharp price increases for three staple goods – bar iron, wood and oats. The charcoal-based Swedish bar iron had been the traditional export good and had completely dominated Swedish exports until mid-nineteenth century. Bar iron met, however, increasingly strong competition from British and continental iron and steel industries and Swedish exports had stagnated in the first half of the nineteenth century. The upswing in international demand, following the diffusion of industrialization and railway construction, gave an impetus to the modernization of Swedish steel production in the following decades.

The saw mill industry was an entirely new export industry that grew dramatically in the 1850s and 1860s. Up until this time, the vast forests of Sweden had been regarded mainly as a fuel resource for the iron industry, for household heating and for local residential construction. With sharp price increases on the Western European market from the 1840s and 1850s, the resources of the sparsely populated northern part of Sweden suddenly became valuable. A formidable explosion of saw mill construction at the mouths of the rivers along the northern coastline followed. Within a few decades Swedish, Norwegian, German, British and Dutch merchants became saw mill owners running large-scale capitalist enterprises at the fringe of European civilization.

Less dramatic but equally important was the sudden expansion of Swedish oat exports. The market for oats was mainly in Britain, where short-distance transportation in rapidly growing urban centers expanded the stock of horses. Swedish oats thus became an important energy resource during the decades around the mid-nineteenth century. This had a special significance for Sweden, since oats could be cultivated on rather barren and marginal soils, with which Sweden was richly endowed. The market for oats, with its strongly increasing prices, thus further stimulated the commercialization of agriculture and the diffusion of new methods. This was all the more so since oats grown for the market substituted for local flax production – also thriving on barren soils – while domestic linen was increasingly supplanted by factory-produced cotton goods.

The Swedish economy was able to respond to the impetus from Western Europe during these decades, to diffuse the new influences in the economy and to integrate them in its development very successfully. The barriers to change seem to have been weak. This is partly explained by the prior transformation of agriculture and the evolution of market institutions in the rural economy. People reacted to the price mechanism. New social classes of commercial peasants, capitalists and wage laborers had emerged in an era of domestic market expansion, with increased regional specialization, and population increase.

The composition of export goods also contributed to the diffusion of participation and of export income. Iron, wood and oats brought both a regional and a social distribution of gains. The value of previously marginal resources, such as soils in the south and forests in the north, rose sharply. The technology was simple and labor intensive in industry, forestry, agriculture and transportation. The demand for unskilled labor increased strongly, which was to put an imprint upon Swedish wage development in the second half of the nineteenth century. Commercial houses and industrial companies made profits, but export income was distributed to many segments of the population.

The integration of the Swedish economy was further enforced through initiatives taken by the State. The parliamentary decision in the 1850s to construct the railway trunk lines meant, first, a more direct involvement by the State in the development of a modern infrastructure and, second, new principles of finance, since the State had to rely upon capital imports. At the same time markets for goods, labor and capital were liberalized, and integration both within Sweden and with the world market deepened. The Swedish adoption of the Gold Standard in 1873 put a final stamp on this institutional development.

A Second Industrial Revolution around 1900

In the late nineteenth century, particularly in the 1880s, international competition became fiercer for agriculture and the early industrial branches. The integration of world markets led to falling prices and stagnating demand for Swedish staple goods such as iron, sawn wood and oats. Profits were squeezed and expansion thwarted. On the other hand, new markets arose. Rising wages intensified mechanization both in agriculture and in industry, increasing the demand for more sophisticated machinery. At the same time consumer demand shifted towards better foodstuffs – such as milk, butter and meat – and towards more fabricated industrial goods.

The decades around the turn of the twentieth century meant a profound structural change in the composition of Swedish industrial expansion that was crucial for long term growth. New and more sophisticated enterprises were founded and expanded particularly from the 1890s, in the upswing after the Baring Crisis.

The new enterprises were closely related to the so called Second Industrial Revolution in which scientific knowledge and more complex engineering skills were main components. The electrical motor became especially important in Sweden. A new development block was created around this innovation that combined engineering skills in companies such as ASEA (later ABB) with a large demand in energy-intensive processes and with the large supply of hydropower in Sweden.4 Financing the rapid development of this large block engaged commercial banks, knitting closer ties between financial capital and industry. The State, once again, engaged itself in infrastructural development in support of electrification, still resorting to heavy capital imports.

A number of innovative industries were founded in this period – all related to increased demand for mechanization and engineering skills. Companies such as AGA, ASEA, Ericsson, Separator (AlfaLaval) and SKF have been labeled “enterprises of genius,” and all were associated with renowned inventors and innovators. This was, of course, not an entirely Swedish phenomenon. These branches developed simultaneously on the Continent, particularly in nearby Germany, and in the U.S., and knowledge and innovative stimulus diffused among these economies. The question is rather why this new development became so strong in Sweden that new industries were able, within a relatively short period of time, to supplant old resource-based industries as the main driving forces of industrialization.

Traditions of engineering skills were certainly important, developed in old heavy industrial branches such as iron and steel industries and stimulated further by State initiatives such as railway construction or, more directly, the founding of the Royal Institute of Technology. But apart from that the economic development in the second half of the nineteenth century fundamentally changed relative factor prices and the profitability of allocation of resources in different lines of production.

The relative increase in the wages of unskilled labor had been stimulated by the composition of early exports in Sweden. This was much reinforced by two components in the further development – emigration and capital imports.

Within approximately the same period, 1850-1910, the Swedish economy received a huge amount of capital mainly from Germany and France, while delivering an equally huge amount of labor to primarily the U.S. Thus, Swedish relative factor prices changed dramatically. Swedish interest rates remained at rather high levels compared to leading European countries until 1910, due to a continuous large demand for capital in Sweden, but relative wages rose persistently (see Table 5). As in the rest of Scandinavia, wage increases were much stronger than GDP growth in Sweden indicating a shift in income distribution in favor of labor, particularly in favor of unskilled labor, during this period of increased world market integration.

Table 5 Annual Increase in Real Wages of Unskilled Labor and Annual GDP Growth per Capita, 1870-1910

Country Annual real wage increase, 1870-1910 Annual GDP growth per capita, 1870-1910
Sweden 2.8 1.7
Denmark and Norway 2.6 1.3
France, Germany and Great Britain 1.1 1.2
United States 1.1 1.6

Sources: Wages from Williamson (1995); GDP growth see Table 1.

Relative profitability fell in traditional industries, which exploited rich natural resources and cheap labor, while more sophisticated industries were favored. But the causality runs both ways. Had this structural shift with the growth of new and more profitable industries not occurred, the Swedish economy would not have been able to sustain the wage increase.5

Accelerated Growth in the War-stricken Period, 1910-1950

The most notable feature of long-term Swedish growth is the acceleration in growth rates during the period 1910-1950, which in Europe at large was full of problems and catastrophes.6 Swedish per capita production grew at 2.2 percent annually, while growth in the rest of Scandinavia was somewhat below 2 percent and in the rest of Europe hovered around 1 percent. The Swedish acceleration was based mainly on three pillars.

First, the structure created at the end of the nineteenth century was very viable, with considerable long term growth potential. It consisted of new industries and new infrastructures that involved industrialists and financial capitalists, as well as public sector support. It also involved industries meeting a relatively strong demand in war times, as well as in the interwar period, both domestically and abroad.

Second, the First World War meant an immense financial bonus to the Swedish market. A huge export surplus at inflated prices during the war led to the domestication of the Swedish national debt. This in turn further capitalized the Swedish financial market, lowering interest rates and facilitating further innovative activity in industry. A domestic money market arose that provided the State with new instruments for economic policy that were to become important for the implementation of the new social democratic “Keynesian” policies of the 1930s.

Third, demographic development favored the Swedish economy in this period. The share of the economically active age group 15-64 grew substantially. This was due partly to the fact that prior emigration had reduced the cohorts that would now have become old-age pensioners. Comparatively low mortality among young people during the 1910s, as well as an end to mass emigration, further enhanced the share of the active population. Both the labor market and domestic demand were stimulated, particularly during the 1930s, when the household-forming age group of 25-30 years increased.

The augmented labor supply would have increased unemployment had it not been combined with the richer supply of capital and innovative industrial development that met elastic demand both domestically and in Europe.

Thus, a richer supply of both capital and labor stimulated the domestic market in a period when international market integration deteriorated. Above all it stimulated the development of mass production of consumption goods based upon the innovations of the Second Industrial Revolution. Significant new enterprises that emanated from the interwar period were very much related to the new logic of the industrial society, such as Volvo, SAAB, Electrolux, Tetra Pak and IKEA.

The Golden Age of Growth, 1950-1975

The Swedish economy was clearly part of the European Golden Age of growth, although Swedish acceleration from the 1950s was less pronounced than in the rest of Western Europe, which to a much larger extent had been plagued by wars and crises.7 The Swedish post-war period was characterized primarily by two phenomena – the full fruition of development blocks based upon the great innovations of the late nineteenth century (the electrical motor and the combustion engine) and the cementation of the “Swedish Model” for the welfare state. These two phenomena were highly complementary.

The Swedish Model had basically two components. One was a greater public responsibility for social security and for the creation and preservation of human capital. This led to a rapid increase in the supply of public services in the realms of education, health and children’s day care, as well as to increases in social security programs and in public savings for pension transfers. The consequence was high taxation. The other component was a regulation of labor and capital markets. This was the most ingenious part of the model, constructed to sustain growth in the industrial society and to increase equality in combination with the social security program and taxation.

The labor market program was the result of negotiations between trade unions and the employers’ organization. It was labeled “solidaristic wage policy” and had two elements. One was to achieve equal wages for equal work, regardless of individual companies’ ability to pay. The other element was to raise the wage level in low paid areas and thus to compress the wage distribution. The aim of the program was actually to speed up the structural rationalization of industries and to eliminate less productive companies and branches. Labor should be transferred to the most productive export-oriented sectors. At the same time income should be distributed more equally. A drawback of the solidaristic wage policy from an egalitarian point of view was that profits soared in the productive sectors since wage increases were held back. However, capital market regulations hindered the conversion of high profits into very high incomes for shareholders. Profits were taxed lightly if they were converted into further investments within the company (the timing in the use of the funds was controlled by the State in its stabilization policy) but heavily if distributed to shareholders. The result was that investments within existing profitable companies were supported and actually subsidized, while the mobility of capital dwindled and activity on the stock market fell.

As long as the export sectors grew, the program worked well.8 Companies founded in the late nineteenth century and in the interwar period developed into successful multinationals in engineering with machinery, auto industries and shipbuilding, as well as in resource-based industries of steel and paper. The expansion of the export sector was the main force behind the high growth rates and the productivity increases but the sector was strongly supported by public investments or publicly subsidized investments in infrastructure and residential construction.

Hence, during the Golden Age of growth the development blocks around electrification and motorization matured in a broad modernization of the society, where mass consumption and mass production was supported by social programs, by investment programs and by labor market policy.

Crisis and Restructuring from the 1970s

In the 1970s and early 1980s a number of industries – such as steel works, pulp and paper, shipbuilding, and mechanical engineering – ran into crisis. New global competition, changing consumer behavior and profound innovative renewal, especially in microelectronics, made some of the industrial pillars of the Swedish Model crumble. At the same time the disadvantages of the old model became more apparent. It put up obstacles to flexibility and entrepreneurial initiative, and it reduced individual incentives for mobility. Thus, while the Swedish Model did foster rationalization of existing industries well adapted to the post-war period, it did not support more profound transformation of the economy.

One should not exaggerate the obstacles to transformation, though. The Swedish economy was still very open in the market for goods and many services, and the pressure to transform increased rapidly. During the 1980s a far-reaching structural change within industry as well as in economic policy took place, engaging both private and public actors. Shipbuilding was almost completely discontinued, pulp industries were integrated into modernized paper works, the steel industry was concentrated and specialized, and mechanical engineering was digitized. New and more knowledge-intensive growth industries appeared in the 1980s, such as IT-based telecommunication, pharmaceutical industries, and biotechnology, as well as new service industries.

During the 1980s some of the constituent components of the Swedish Model were weakened or eliminated. Centralized negotiations and the solidaristic wage policy disappeared. Regulations in the capital market were dismantled under the pressure of increasing international capital flows, simultaneously with a forceful revival of the stock market. The expansion of public sector services came to an end, and the taxation system was reformed with a reduction of marginal tax rates. Thus, Swedish economic policy and the welfare system converged toward the European mainstream, which facilitated Sweden’s application for membership and its entry into the European Union in 1995.

It is also clear that the period from the 1970s to the early twenty-first century comprises two growth trends, before and after 1990 respectively. During the 1970s and 1980s, growth in Sweden was very slow and marked by the great structural problems that the Swedish economy had to cope with. The slow growth prior to 1990 does not signify stagnation in a real sense, but rather the transformation of industrial structures and the reformulation of economic policy, which did not immediately result in a speed-up of growth but rather in imbalances and bottlenecks that took years to eliminate. From the 1990s up to 2005 Swedish growth accelerated quite forcefully in comparison with most Western economies.9 Thus, the 1980s may be considered a Swedish case of “the productivity paradox,” with innovative renewal but with a delayed acceleration of productivity and growth from the 1990s – although a delayed productivity effect of more profound transformation and radical innovative behavior is not paradoxical.

Table 6 Annual Growth Rates per Capita, 1971-2005

Period Sweden Rest of Nordic Countries Rest of Western Europe United States World Economy
1971/1975-1991/1995 1.2 2.1 1.8 1.6 1.4
1991/1995-2001/2005 2.4 2.5 1.7 2.1 2.1

Sources: See Table 1.

The recent acceleration in growth may also indicate that some of the basic traits from early industrialization still pertain to the Swedish economy – an international attitude in a small open economy fosters transformation and adaptation of human skills to new circumstances as a major force behind long term growth.

References

Abramovitz, Moses. “Catching Up, Forging Ahead and Falling Behind.” Journal of Economic History 46, no. 2 (1986): 385-406.

Dahmén, Erik. “Development Blocks in Industrial Economics.” Scandinavian Economic History Review 36 (1988): 3-14.

David, Paul A. “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review 80, no. 2 (1990): 355-61.

Eichengreen, Barry. “Institutions and Economic Growth: Europe after World War II.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. New York: Cambridge University Press, 1996.

Krantz, Olle and Lennart Schön. Swedish Historical National Accounts, 1800-2000. Lund: Almqvist and Wiksell International, forthcoming 2007.

Maddison, Angus. The World Economy, Volumes 1 and 2. Paris: OECD, 2006.

Schön, Lennart. “Development Blocks and Transformation Pressure in a Macro-Economic Perspective: A Model of Long-Cyclical Change.” Skandinaviska Enskilda Banken Quarterly Review 20, no. 3-4 (1991): 67-76.

Schön, Lennart. “External and Internal Factors in Swedish Industrialization.” Scandinavian Economic History Review 45, no. 3 (1997): 209-223.

Schön, Lennart. En modern svensk ekonomisk historia: Tillväxt och omvandling under två sekel (A Modern Swedish Economic History: Growth and Transformation in Two Centuries). Stockholm: SNS, 2000.

Schön, Lennart. “Total Factor Productivity in Swedish Manufacturing in the Period 1870-2000.” In Exploring Economic Growth: Essays in Measurement and Analysis: A Festschrift for Riitta Hjerppe on Her Sixtieth Birthday, edited by S. Heikkinen and J.L. van Zanden. Amsterdam: Aksant, 2004.

Schön, Lennart. “Swedish Industrialization 1870-1930 and the Heckscher-Ohlin Theory.” In Eli Heckscher, International Trade, and Economic History, edited by Ronald Findlay et al. Cambridge, MA: MIT Press, 2006.

Svennilson, Ingvar. Growth and Stagnation in the European Economy. Geneva: United Nations Economic Commission for Europe, 1954.

Temin, Peter. “The Golden Age of European Growth Reconsidered.” European Review of Economic History 6, no. 1 (2002): 3-22.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32, no. 2 (1995): 141-96.

Citation: Schön, Lennart. “Sweden – Economic Growth and Structural Change, 1800-2000.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/sweden-economic-growth-and-structural-change-1800-2000/

The 1929 Stock Market Crash

Harold Bierman, Jr., Cornell University

Overview

The 1929 stock market crash is conventionally said to have occurred on Thursday the 24th and Tuesday the 29th of October. These two dates have been dubbed “Black Thursday” and “Black Tuesday,” respectively. On September 3, 1929, the Dow Jones Industrial Average reached a record high of 381.2. At the end of the market day on Thursday, October 24, the market was at 299.5 — a 21 percent decline from the high. On this day the market fell 33 points — a drop of 9 percent — on trading that was approximately three times the normal daily volume for the first nine months of the year. By all accounts, there was a selling panic. By November 13, 1929, the market had fallen to 199. By the time the crash was completed in 1932, following an unprecedentedly large economic depression, stocks had lost nearly 90 percent of their value.

The events of Black Thursday are normally defined to be the start of the stock market crash of 1929-1932, but the series of events leading to the crash started before that date. This article examines the causes of the 1929 stock market crash. While no consensus exists about its precise causes, the article will critique some arguments and support a preferred set of conclusions. It argues that one of the primary causes was the attempt by important people and the media to stop market speculators. A second probable cause was the great expansion of investment trusts, public utility holding companies, and the amount of margin buying, all of which fueled the purchase of public utility stocks, and drove up their prices. Public utilities, utility holding companies, and investment trusts were all highly levered using large amounts of debt and preferred stock. These factors seem to have set the stage for the triggering event. This sector was vulnerable to the arrival of bad news regarding utility regulation. In October 1929, the bad news arrived and utility stocks fell dramatically. After the utilities decreased in price, margin buyers had to sell and there was then panic selling of all stocks.

The Conventional View

The crash helped bring on the depression of the thirties and the depression helped to extend the period of low stock prices, thus “proving” to many that the prices had been too high.

Laying the blame for the “boom” on speculators was common in 1929. Thus, immediately upon learning of the crash of October 24, John Maynard Keynes (Moggridge, 1981, p. 2 of Vol. XX) wrote in the New York Evening Post (25 October 1929) that “The extraordinary speculation on Wall Street in past months has driven up the rate of interest to an unprecedented level.” And when stock prices reached their low for the year, the Economist repeated the theme that the U.S. stock market had been too high (November 2, 1929, p. 806): “there is warrant for hoping that the deflation of the exaggerated balloon of American stock values will be for the good of the world.” The key phrases in these quotations are “exaggerated balloon of American stock values” and “extraordinary speculation on Wall Street.” Likewise, President Herbert Hoover saw increasing stock market prices leading up to the crash as a speculative bubble manufactured by the mistakes of the Federal Reserve Board. “One of these clouds was an American wave of optimism, born of continued progress over the decade, which the Federal Reserve Board transformed into the stock-exchange Mississippi Bubble” (Hoover, 1952). Thus, the common viewpoint was that stock prices were too high.

There is much to criticize in conventional interpretations of the 1929 stock market crash, however. (Even the name is inexact. The largest losses to the market did not come in October 1929 but rather in the following two years.) In December 1929, many expert economists, including Keynes and Irving Fisher, felt that the financial crisis had ended and by April 1930 the Standard and Poor 500 composite index was at 25.92, compared to a 1929 close of 21.45. There are good reasons for thinking that the stock market was not obviously overvalued in 1929 and that it was sensible to hold most stocks in the fall of 1929 and to buy stocks in December 1929 (admittedly this investment strategy would have been terribly unsuccessful).

Were Stocks Obviously Overpriced in October 1929?
Debatable — Economic Indicators Were Strong

From 1925 to the third quarter of 1929, common stocks increased in value by 120 percent in four years, a compound annual growth of 21.8%. While this is a large rate of appreciation, it is not obvious proof of an “orgy of speculation.” The decade of the 1920s was extremely prosperous and the stock market with its rising prices reflected this prosperity as well as the expectation that the prosperity would continue.
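The compounding arithmetic in the paragraph above can be checked directly; a minimal Python sketch, using the four-year holding period and the 120 percent total gain stated in the text:

```python
# Compound annual growth rate (CAGR) implied by a 120% total gain over 4 years.
total_gain = 1.20                     # stocks multiplied in value by 1 + 1.20 = 2.20
years = 4
cagr = (1 + total_gain) ** (1 / years) - 1
print(f"{cagr:.1%}")                  # approximately 21.8%, as stated in the text
```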

The fact that the stock market lost 90 percent of its value from 1929 to 1932 indicates that the market, at least using one criterion (actual performance of the market), was overvalued in 1929. John Kenneth Galbraith (1961) implies that there was a speculative orgy and that the crash was predictable: “Early in 1928, the nature of the boom changed. The mass escape into make-believe, so much a part of the true speculative orgy, started in earnest.” Galbraith had no difficulty in 1961 identifying the end of the boom in 1929: “On the first of January of 1929, as a matter of probability, it was most likely that the boom would end before the year was out.”

Compare this position with the fact that Irving Fisher, one of the leading economists in the U.S. at the time, was heavily invested in stocks and was bullish before and after the October sell offs; he lost his entire wealth (including his house) before stocks started to recover. In England, John Maynard Keynes, possibly the world’s leading economist during the first half of the twentieth century, and an acknowledged master of practical finance, also lost heavily. Paul Samuelson (1979) quotes P. Sergeant Florence (another leading economist): “Keynes may have made his own fortune and that of King’s College, but the investment trust of Keynes and Dennis Robertson managed to lose my fortune in 1929.”

Galbraith’s ability to ‘forecast’ the market turn is not shared by all. Samuelson (1979) admits that: “playing as I often do the experiment of studying price profiles with their dates concealed, I discovered that I would have been caught by the 1929 debacle.” For many, the collapse from 1929 to 1933 was neither foreseeable nor inevitable.

The stock price increases leading to October 1929 were not driven solely by fools or speculators. There were also intelligent, knowledgeable investors who were buying or holding stocks in September and October 1929. Also, leading economists, both then and now, could neither anticipate nor explain the October 1929 decline of the market. Thus, the conviction that stocks were obviously overpriced is somewhat of a myth.

The nation’s total real income rose from 1921 to 1923 by 10.5% per year, and from 1923 to 1929, it rose 3.4% per year. The 1920s were, in fact, a period of real growth and prosperity. For the period of 1923-1929, wholesale prices went down 0.9% per year, reflecting moderate stable growth in the money supply during a period of healthy real growth.

Examining the manufacturing situation in the United States prior to the crash is also informative. Irving Fisher’s Stock Market Crash and After (1930) offers much data indicating that there was real growth in the manufacturing sector. The evidence presented goes a long way to explain Fisher’s optimism regarding the level of stock prices. What Fisher saw was manufacturing efficiency rapidly increasing (output per worker) as was manufacturing output and the use of electricity.

The financial fundamentals of the markets were also strong. During 1928, the price-earnings ratio for 45 industrial stocks increased from approximately 12 to approximately 14. It was over 15 in 1929 for industrials and then decreased to approximately 10 by the end of 1929. While not low, these price-earnings (P/E) ratios were by no means out of line historically. Values in this range would be considered reasonable by most market analysts today. For example, the P/E ratio of the S & P 500 in July 2003 reached a high of 33 and in May 2004 the high was 23.

The rise in stock prices was not uniform across all industries. The stocks that went up the most were in industries where the economic fundamentals indicated there was cause for large amounts of optimism. They included airplanes, agricultural implements, chemicals, department stores, steel, utilities, telephone and telegraph, electrical equipment, oil, paper, and radio. These were reasonable choices for expectations of growth.

To put the P/E ratios of 10 to 15 in perspective, note that government bonds in 1929 yielded 3.4%. Industrial bonds of investment grade were yielding 5.1%. Consider that an interest rate of 5.1% represents a 1/(0.051) = 19.6 price-earnings ratio for debt.
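The reciprocal relationship used here (a yield of 5.1% implies a price-earnings ratio of 1/0.051 ≈ 19.6) can be sketched in a couple of lines of Python:

```python
# A bond's implied price-earnings ratio is the reciprocal of its yield.
def implied_pe(yield_rate: float) -> float:
    return 1.0 / yield_rate

print(round(implied_pe(0.051), 1))    # 5.1% industrial bonds -> 19.6
print(round(implied_pe(0.034), 1))    # 3.4% government bonds -> 29.4
```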

In 1930, the Federal Reserve Bulletin reported production in 1920 at an index of 87.1. The index went down to 67 in 1921, then climbed steadily (except for 1924) until it reached 125 in 1929. This is an annual growth rate in production of 3.1%. During the period commodity prices actually decreased. The production record for the ten-year period was exceptionally good.

Factory payrolls in September were at an index of 111 (an all-time high). In October the index dropped to 110, which beat all previous months and years except for September 1929. The factory employment measures were consistent with the payroll index.

The September unadjusted measure of freight car loadings was at 121 — also an all-time record.2 In October the loadings dropped to 118, which was a performance second only to September’s record measure.

J.W. Kendrick (1961) shows that the period 1919-1929 had an unusually high rate of change in total factor productivity. The annual rate of change of 5.3% for 1919-1929 for the manufacturing sector was more than twice the 2.5% rate of the second best period (1948-1953). Farming productivity change for 1919-1929 was second only to the period 1929-1937. Overall, the period 1919-1929 easily took first place for productivity increases, handily beating the six other time periods studied by Kendrick (all the periods studied were prior to 1961) with an annual productivity change measure of 3.7%. This was outstanding economic performance — performance which normally would justify stock market optimism.

In the first nine months of 1929, 1,436 firms announced increased dividends. In 1928, the number was only 955 and in 1927, it was 755. In September 1929, dividend increases were announced by 193 firms, compared with 135 the year before. The financial news from corporations was very positive in September and October 1929.

The May issue of the National City Bank of New York Newsletter indicated the earnings statements for the first quarter of surveyed firms showed a 31% increase compared to the first quarter of 1928. The August issue showed that for 650 firms the increase for the first six months of 1929 compared to 1928 was 24.4%. In September, the results were expanded to 916 firms with a 27.4% increase. The earnings for the third quarter for 638 firms were calculated to be 14.1% larger than for 1928. This is evidence that the general level of business activity and reported profits were excellent at the end of September 1929 and the middle of October 1929.

Barrie Wigmore (1985) researched 1929 financial data for 135 firms. The market price as a percentage of year-end book value was 420% using the high prices and 181% using the low prices. However, the return on equity for the firms (using the year-end book value) was a high 16.5%. The dividend yield was 2.96% using the high stock prices and 5.9% using the low stock prices.

Article after article from January to October in business magazines carried news of outstanding economic performance. E.K. Berger and A.M. Leinbach, two staff writers of the Magazine of Wall Street, wrote in June 1929: “Business so far this year has astonished even the perennial optimists.”

To summarize: There was little hint of a severe weakness in the real economy in the months prior to October 1929. There is a great deal of evidence that in 1929 stock prices were not out of line with the real economics of the firms that had issued the stock. Leading economists were betting that common stocks in the fall of 1929 were a good buy. Conventional financial reports of corporations gave cause for optimism relative to the 1929 earnings of corporations. Price-earnings ratios, dividend amounts and changes in dividends, and earnings and changes in earnings all gave cause for stock price optimism.

Table 1 shows the average of the highs and lows of the Dow Jones Industrial Index for 1922 to 1932.

Table 1
Dow-Jones Industrials Index: Average of Lows and Highs for the Year
1922 91.0
1923 95.6
1924 104.4
1925 137.2
1926 150.9
1927 177.6
1928 245.6
1929 290.0
1930 225.8
1931 134.1
1932 79.4

Sources: 1922-1929 measures are from the Stock Market Study, U.S. Senate, 1955, pp. 40, 49, 110, and 111; 1930-1932 Wigmore, 1985, pp. 637-639.

Using the information of Table 1, from 1922 to 1929 stocks rose in value by 218.7%. This is equivalent to an 18% annual growth rate in value for the seven years. From 1929 to 1932 stocks lost 73% of their value (different indices measured at different times would give different measures of the increase and decrease). The price increases were large, but not beyond comprehension. The price decreases taken to 1932 were consistent with the fact that by 1932 there was a worldwide depression.
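These percentage figures follow directly from the Table 1 values (91.0 for 1922, 290.0 for 1929, 79.4 for 1932); a minimal check in Python:

```python
# Percent changes and implied annual growth from the Table 1 index averages.
index_1922, index_1929, index_1932 = 91.0, 290.0, 79.4

rise = (index_1929 - index_1922) / index_1922        # total gain, 1922-1929
annual = (index_1929 / index_1922) ** (1 / 7) - 1    # compound rate over 7 years
fall = (index_1929 - index_1932) / index_1929        # total loss, 1929-1932

print(f"rise {rise:.1%}, annual {annual:.1%}, fall {fall:.1%}")
# rise 218.7%, annual 18.0%, fall 72.6% (the text rounds the loss to 73%)
```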

If we take the 386 high of September 1929 and the 1929 year-end value of 248.5, the market lost 36% of its value during that four-month period. Most of us, if we held stock in September 1929, would not have sold early in October. In fact, if I had money to invest, I would have purchased after the major break on Black Thursday, October 24. (I would have been sorry.)

Events Precipitating the Crash

Although it can be argued that the stock market was not overvalued, there is evidence that many feared that it was overvalued — including the Federal Reserve Board and the United States Senate. By 1929, there were many who felt the market price of equity securities had increased too much, and this feeling was reinforced daily by the media and statements by influential government officials.

What precipitated the October 1929 crash?

My research minimizes several candidates that are frequently cited by others (see Bierman 1991, 1998, 1999, and 2001).

  • The market did not fall just because it was too high — as argued above it is not obvious that it was too high.
  • The actions of the Federal Reserve, while not always wise, cannot be directly identified with the October stock market crashes in an important way.
  • The Smoot-Hawley tariff, while looming on the horizon, was not cited by the news sources in 1929 as a factor, and was probably not important to the October 1929 market.
  • The Hatry Affair in England was not material for the New York Stock Exchange and the timing did not coincide with the October crashes.
  • Business activity news in October was generally good and there were very few hints of a coming depression.
  • Short selling and bear raids were not large enough to move the entire market.
  • Fraud and other illegal or immoral acts were not material, despite the attention they have received.

Barsky and DeLong (1990, p. 280) stress the importance of fundamentals rather than fads or fashions. “Our conclusion is that major decade-to-decade stock market movements arise predominantly from careful re-evaluation of fundamentals and less so from fads or fashions.” The argument below is consistent with their conclusion, but there will be one major exception: in September 1929, the market value of one segment of the market, the public utility sector, rested on existing fundamentals, and those fundamentals seem to have changed considerably in October 1929.

A Look at the Financial Press

On Thursday, October 3, 1929, the Washington Post with a page 1 headline exclaimed “Stock Prices Crash in Frantic Selling.” The New York Times of October 4 headed a page 1 article with “Year’s Worst Break Hits Stock Market.” The article on the first page of the Times cited three contributing factors:

  • A large broker loan increase was expected (the article stated that the loans increased, but the increase was not as large as expected).
  • The statement by Philip Snowden, England’s Chancellor of the Exchequer that described America’s stock market as a “speculative orgy.”
  • Weakening of margin accounts making it necessary to sell, which further depressed prices.

While the 1928 and 1929 financial press focused extensively and excessively on broker loans and margin account activity, the statement by Snowden is the only unique relevant news event on October 3. The October 4 (p. 20) issue of the Wall Street Journal also reported the remark by Snowden that there was “a perfect orgy of speculation.” Also, on October 4, the New York Times made another editorial reference to Snowden’s American speculation orgy. It added that “Wall Street had come to recognize its truth.” The editorial also quoted Secretary of the Treasury Mellon that investors “acted as if the price of securities would infinitely advance.” The Times editor obviously thought there was excessive speculation, and agreed with Snowden.

The stock market went down on October 3 and October 4, but almost all reported business news was very optimistic. The primary negative news item was the statement by Snowden regarding the amount of speculation in the American stock market. The market had been subjected to a barrage of statements throughout the year that there was excessive speculation and that the level of stock prices was too high. There is a possibility that the Snowden comment reported on October 3 was the push that started the boulder down the hill, but there were other events that also jeopardized the level of the market.

On August 8, the Federal Reserve Bank of New York had increased the rediscount rate from 5 to 6%. On September 26 the Bank of England raised its discount rate from 5.5 to 6.5%. England was losing gold as a result of investment in the New York Stock Exchange and wanted to decrease this investment. The Hatry Case also happened in September. It was first reported on September 29, 1929. Both the collapse of the Hatry industrial empire and the increase in the investment returns available in England resulted in shrinkage of English investment (especially the financing of broker loans) in the United States, adding to the market instability in the beginning of October.

Wednesday, October 16, 1929

On Wednesday, October 16, stock prices again declined. The Washington Post (October 17, p. 1) reported “Crushing Blow Again Dealt Stock Market.” Remember, the start of the stock market crash is conventionally identified with Black Thursday, October 24, but there were price declines on October 3, 4, and 16.

The news reports of the Post on October 17 and subsequent days are important since they were Associated Press (AP) releases, thus broadly read throughout the country. The Associated Press reported (p. 1) “The index of 20 leading public utilities computed for the Associated Press by the Standard Statistics Co. dropped 19.7 points to 302.4 which contrasts with the year’s high established less than a month ago.” This index had also dropped 18.7 points on October 3 and 4.3 points on October 4. The Times (October 17, p. 38) reported, “The utility stocks suffered most as a group in the day’s break.”

The economic news after the price drops of October 3 and October 4 had been good. But the deluge of bad news regarding public utility regulation seems to have truly upset the market. On Saturday, October 19, the Washington Post headlined (p. 13) “20 Utility Stocks Hit New Low Mark” and (Associated Press) “The utility shares again broke wide open and the general list came tumbling down almost half as far.” The October 20 issue of the Post had another relevant AP article (p. 12) “The selling again concentrated today on the utilities, which were in general depressed to the lowest levels since early July.”

An evaluation of the October 16 break in the New York Times on Sunday, October 20 (pp. 1 and 29) gave the following favorable factors:

  • stable business conditions
  • low money rates (5%)
  • good retail trade
  • revival of the bond market
  • buying power of investment trusts
  • largest short interest in history (this is the total dollar value of stock sold where the investors do not own the stock they sold)

The following negative factors were described:

  • undigested investment trusts and new common stock shares
  • increase in broker loans
  • some high stock prices
  • agricultural prices lower
  • nervous market

The negative factors were not very upsetting to an investor if one was optimistic that the real economic boom (business prosperity) would continue. The Times failed to consider the impact on the market of the news concerning the regulation of public utilities.

Monday, October 21, 1929

On Monday, October 21, the market went down again. The Times (October 22) identified the causes to be

  • margin sellers (buyers on margin being forced to sell)
  • foreign money liquidating
  • skillful short selling

The same newspaper carried an article about a talk by Irving Fisher (p. 24) “Fisher says prices of stocks are low.” Fisher also defended investment trusts as offering investors diversification, thus reduced risk. He was reminded by a person attending the talk that in May he had “pointed out that predicting the human behavior of the market was quite different from analyzing its economic soundness.” Fisher was better with fundamentals than market psychology.

Wednesday, October 23, 1929

On Wednesday, October 23 the market tumbled. The Times headlines (October 24, p.1) said “Prices of Stocks Crash in Heavy Liquidation.” The Washington Post (p. 1) had “Huge Selling Wave Creates Near-Panic as Stocks Collapse.” In a total market value of $87 billion the market declined $4 billion — a 4.6% drop. If the events of the next day (Black Thursday) had not occurred, October 23 would have gone down in history as a major stock market event. But October 24 was to make the “Crash” of October 23 become merely a “Dip.”

The Times lamented October 24, (p. 38) “There was hardly a single item of news which might be construed as bearish.”

Thursday, October 24, 1929

Thursday, October 24 (Black Thursday) was a 12,894,650 share day (the previous record was 8,246,742 shares on March 26, 1929) on the NYSE. The headline on page one of the Times (October 25) was “Treasury Officials Blame Speculation.”

The Times (p. 41) moaned that the cost of call money had been 20% in March and the price break in March was understandable. (A call loan is a loan payable on demand of the lender.) Call money on October 24 cost only 5%. There should not have been a crash. The Friday Wall Street Journal (October 25) gave New York bankers credit for stopping the price decline with $1 billion of support.

The Washington Post (October 26, p. 1) reported “Market Drop Fails to Alarm Officials.” The “officials” were all in Washington. The rest of the country seemed alarmed. On October 25, the market gained. President Hoover made a statement on Friday regarding the excellent state of business, but then added how building and construction had been adversely “affected by the high interest rates induced by stock speculation” (New York Times, October 26, p. 1). A Times editorial (p. 16) quoted Snowden’s “orgy of speculation” again.

Tuesday, October 29, 1929

The Sunday, October 27 edition of the Times had a two-column article “Bay State Utilities Face Investigation.” It implied that regulation in Massachusetts was going to be less friendly towards utilities. Stocks again went down on Monday, October 28. There were 9,212,800 shares traded (3,000,000 in the final hour). The Times on Tuesday, October 29 again carried an article on the New York public utility investigating committee being critical of the rate making process. October 29 was “Black Tuesday.” The headline the next day was “Stocks Collapse in 16,410,030 Share Day” (October 30, p. 1). Stocks lost nearly $16 billion in the month of October or 18% of the beginning of the month value. Twenty-nine public utilities (tabulated by the New York Times) lost $5.1 billion in the month, by far the largest loss of any of the industries listed by the Times. The value of the stocks of all public utilities went down by more than $5.1 billion.

An Interpretive Overview of Events and Issues

My interpretation of these events is that the statement by Snowden, Chancellor of the Exchequer, indicating the presence of a speculative orgy in America is likely to have triggered the October 3 break. Public utility stocks had been driven up by an explosion of investment trust formation and investing. The trusts, to a large extent, bought stock on margin with funds loaned not by banks but by “others.” These funds were very sensitive to any market weakness. Public utility regulation was being reviewed by the Federal Trade Commission, New York City, New York State, and Massachusetts, and these reviews were watched by the other regulatory commissions and by investors. The sell-off of utility stocks from October 16 to October 23 weakened prices and created “margin selling” and withdrawal of capital by the nervous “other” money. Then on October 24, the selling panic happened.

There are three topics that require expansion. First, there is the setting of the climate concerning speculation that may have led to the possibility of relatively specific issues being able to trigger a general market decline. Second, there are investment trusts, utility holding companies, and margin buying that seem to have resulted in one sector being very over-levered and overvalued. Third, there are the public utility stocks that appear to be the best candidate as the actual trigger of the crash.

Contemporary Worries of Excessive Speculation

During 1929, the public was bombarded with statements of outrage by public officials regarding the speculative orgy taking place on the New York Stock Exchange. If the media say something often enough, a large percentage of the public may come to believe it. By October 29 the overall opinion was that there had been excessive speculation and the market had been too high. Galbraith (1961), Kindleberger (1978), and Malkiel (1996) all clearly accept this assumption. The Federal Reserve Bulletin of February 1929 stated that the Federal Reserve would restrain the use of “credit facilities in aid of the growth of speculative credit.”

In the spring of 1929, the U.S. Senate adopted a resolution stating that the Senate would support legislation “necessary to correct the evil complained of and prevent illegitimate and harmful speculation” (Bierman, 1991).

The President of the Investment Bankers Association of America, Trowbridge Callaway, gave a talk in which he spoke of “the orgy of speculation which clouded the country’s vision.”

Adolph Casper Miller, an outspoken member of the Federal Reserve Board from its beginning, described 1929 as “this period of optimism gone wild and cupidity gone drunk.”

Myron C. Taylor, head of U.S. Steel, described “the folly of the speculative frenzy that lifted securities to levels far beyond any warrant of supporting profits.”

Herbert Hoover becoming president in March 1929 was a very significant event. He was a good friend and neighbor of Adolph Miller (see above) and Miller reinforced Hoover’s fears. Hoover was an aggressive foe of speculation. For example, he wrote, “I sent individually for the editors and publishers of major newspapers and magazines and requested them systematically to warn the country against speculation and the unduly high price of stocks.” Hoover then pressured Secretary of the Treasury Andrew Mellon and Governor of the Federal Reserve Board Roy Young “to strangle the speculative movement.” In his memoirs (1952) he titled his Chapter 2 “We Attempt to Stop the Orgy of Speculation,” reflecting Snowden’s influence.

Buying on Margin

Margin buying during the 1920s was not controlled by the government. It was controlled by brokers interested in their own well-being. The average margin requirement was 50% of the stock price prior to October 1929. On selected stocks, it was as high as 75%. When the crash came, no major brokerage firm was bankrupted, because the brokers managed their finances in a conservative manner. At the end of October, margins were lowered to 25%.

Brokers’ loans received a lot of attention in England, as they did in the United States. The Financial Times reported the level and the changes in the amount regularly. For example, the October 4 issue indicated that on October 3 broker loans reached a record high as money rates dropped from 7.5% to 6%. By October 9, money rates had dropped further, to below 6%. Thus, investors prior to October 24 had relatively easy access to funds at the lowest rate since July 1928.

The Financial Times (October 7, 1929, p. 3) reported that the President of the American Bankers Association was concerned about the level of credit for securities and had given a talk in which he stated, “Bankers are gravely alarmed over the mounting volume of credit being employed in carrying security loans, both by brokers and by individuals.” The Financial Times was also concerned with the buying of investment trusts on margin and the lack of credit to support the bull market.

My conclusion is that the margin buying was a likely factor in causing stock prices to go up, but there is no reason to conclude that margin buying triggered the October crash. Once the selling rush began, however, the calling of margin loans probably exacerbated the price declines. (A calling of margin loans requires the stock buyer to contribute more cash to the broker or the broker sells the stock to get the cash.)

Investment Trusts

By 1929, investment trusts were very popular with investors. These trusts were the 1929 version of closed-end mutual funds. In recent years seasoned closed-end mutual funds sell at a discount to their fundamental value. The fundamental value is the sum of the market values of the fund’s components (securities in the portfolio). In 1929, the investment trusts sold at a premium — i.e. higher than the value of the underlying stocks. Malkiel concludes (p. 51) that this “provides clinching evidence of wide-scale stock-market irrationality during the 1920s.” However, Malkiel also notes (p. 442) “as of the mid-1990’s, Berkshire Hathaway shares were selling at a hefty premium over the value of assets it owned.” Warren Buffett is the guiding force behind Berkshire Hathaway’s great success as an investor. If we were to conclude that rational investors would currently pay a premium for Warren Buffett’s expertise, then we should reject a conclusion that the 1929 market was obviously irrational. We have current evidence that rational investors will pay a premium for what they consider to be superior money management skills.

There were $1 billion of investment trusts sold to investors in the first eight months of 1929 compared to $400 million in all of 1928. The Economist reported that this was important (October 12, 1929, p. 665): “Much of the recent increase is to be accounted for by the extraordinary burst of investment trust financing.” In September alone $643 million was invested in investment trusts (Financial Times, October 21, p. 3). While the two sets of numbers (from the Economist and the Financial Times) are not exactly comparable, both sets of numbers indicate that investment trusts had become very popular by October 1929.

The common stocks of trusts that had used debt or preferred stock leverage were particularly vulnerable to the stock price declines. For example, the Goldman Sachs Trading Corporation was highly levered with preferred stock and the value of its common stock fell from $104 a share to less than $3 in 1933. Many of the trusts were levered, but the leverage of choice was not debt but rather preferred stock.

In concept, investment trusts were sensible. They offered expert management and diversification. Unfortunately, in 1929 a diversification of stocks was not going to be a big help given the universal price declines. Irving Fisher on September 6, 1929 was quoted in the New York Herald Tribune as stating: “The present high levels of stock prices and corresponding low levels of dividend returns are due largely to two factors. One, the anticipation of large dividend returns in the immediate future; and two, reduction of risk to investors largely brought about through investment diversification made possible for the investor by investment trusts.”

If a researcher could find out the composition of the portfolio of a couple of dozen of the largest investment trusts as of September-October 1929 this would be extremely helpful. Seven important types of information that are not readily available but would be of interest are:

  • The percentage of the portfolio that was public utilities.
  • The extent of diversification.
  • The percentage of the portfolios that was NYSE firms.
  • The investment turnover.
  • The ratio of market price to net asset value at various points in time.
  • The amount of debt and preferred stock leverage used.
  • Who bought the trusts and how long they held.

The ideal information to establish whether market prices are excessively high compared to intrinsic values is to have both the prices and well-defined intrinsic values at the same moment in time. For the normal financial security, this is impossible since the intrinsic values are not objectively well defined. There are two exceptions. DeLong and Schleifer (1991) followed one path, very cleverly choosing to study closed-end mutual funds. Some of these funds were traded on the stock market and the market values of the securities in the funds’ portfolios are a very reasonable estimate of the intrinsic value. DeLong and Schleifer state (1991, p. 675):

“We use the difference between prices and net asset values of closed-end mutual funds at the end of the 1920s to estimate the degree to which the stock market was overvalued on the eve of the 1929 crash. We conclude that the stocks making up the S&P composite were priced at least 30 percent above fundamentals in late summer, 1929.”

Unfortunately (p. 682) “portfolios were rarely published and net asset values rarely calculated.” It was only after the crash that investment trusts started to reveal routinely their net asset value. In the third quarter of 1929 (p. 682), “three types of event seemed to trigger a closed-end fund’s publication of its portfolio.” The three events were (1) listing on the New York Stock Exchange (most of the trusts were not listed), (2) start up of a new closed-end fund (this stock price reflects selling pressure), and (3) shares selling at a discount from net asset value (in September 1929 most trusts were not selling at a discount, the inclusion of any that were introduces a bias). After 1929, some trusts revealed 1929 net asset values. Thus, DeLong and Schleifer lacked the amount and quality of information that would have allowed definite conclusions. In fact, if investors also lacked the information regarding the portfolio composition we would have to place investment trusts in a unique investment category where investment decisions were made without reliable financial statements. If investors in the third quarter of 1929 did not know the current net asset value of investment trusts, this fact is significant.

The closed-end funds were an attractive vehicle to study since the market for investment trusts in 1929 was large and growing rapidly. In August and September alone over $1 billion of new funds were launched. DeLong and Schleifer found the premiums of price over value to be large — the median was about 50% in the third quarter of 1929 (p. 678). But they worried about the validity of their study because funds were not selected randomly.

DeLong and Schleifer had limited data (pp. 698-699). For example, for September 1929 there were two observations, for August 1929 there were five, and for July there were nine. The nine funds observed in July 1929 had the following premia: 277%, 152%, 48%, 22%, 18% (2 times), 8% (3 times). Given that closed-end funds tend to sell at a discount, the positive premiums are interesting. Given the conventional perspective in 1929 that financial experts could manage money better than the person not plugged into the Street, it is not surprising that some investors were willing to pay for expertise and to buy shares in investment trusts. Thus, a premium for investment trusts does not imply the same premium for other stocks.
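As a rough check on the nine July 1929 premia listed above, their median and mean can be computed directly (a minimal sketch; the 18% and 8% values each appear multiple times, as the text notes):

```python
# The nine July 1929 premia over net asset value reported by DeLong and
# Schleifer (in percent); 18% appears twice and 8% three times.
premia = [277, 152, 48, 22, 18, 18, 8, 8, 8]

median = sorted(premia)[len(premia) // 2]   # middle (5th) of 9 values
mean = sum(premia) / len(premia)

print(median, round(mean, 1))               # 18 62.1
```

The median of 18% sits well below the mean of about 62%, which is pulled up by the two extreme observations — one more illustration of why the small, non-random sample makes firm conclusions difficult.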

The Public Utility Sector

In addition to investment trusts, intrinsic values are usually well defined for regulated public utilities. The general rule applied by regulatory authorities is to allow utilities to earn a “fair return” on an allowed rate base. The fair return is defined to be equal to a utility’s weighted average cost of capital. There are several reasons why a public utility can earn more or less than a fair return, but the target set by the regulatory authority is the weighted average cost of capital.

Thus, if a utility has an allowed rate equity base of $X and is allowed to earn a return of r (rX in dollar terms), then after one year the firm’s equity will be worth X + rX, or (1 + r)X, with a present value of X. (This assumes that r is the return required by the market as well as the return allowed by regulators.) Thus, the present value of the equity is equal to the present rate base, and the stock price should be equal to the rate base per share. Given the nature of public utility accounting, the book value of a utility’s stock is approximately equal to the rate base.

There can be time periods where the utility can earn more (or less) than the allowed return. The reasons for this include regulatory lag, changes in efficiency, changes in the weather, and changes in the mix and number of customers. Also, the cost of equity may differ from the allowed return because of inaccurate estimates or changing capital market conditions. Thus, the stock price may differ from the book value, but one would not expect the stock price to be very much different than the book value per share for very long. There should be a tendency for the stock price to revert to the book value for a public utility supplying an essential service where there is no effective competition, and the rate commission is effectively allowing a fair return to be earned.
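The fair-return logic in the preceding paragraphs can be illustrated with a one-period calculation (hypothetical numbers; a sketch, not a valuation model):

```python
def equity_present_value(rate_base, allowed_return, required_return):
    """Value of equity that grows to rate_base * (1 + allowed_return) in one
    year, discounted at the market's required return."""
    return rate_base * (1 + allowed_return) / (1 + required_return)

# When the allowed return equals the market's required return, the equity is
# worth exactly the rate base: the stock price should equal book value.
print(equity_present_value(50.0, 0.07, 0.07))          # 50.0

# If regulators allow less than the market requires, price falls below book.
print(equity_present_value(50.0, 0.05, 0.07) < 50.0)   # True
```

On this logic, a price-to-book ratio near 1 is the anchor for a regulated utility, so the 3-to-1 ratios observed in 1929 implied returns that regulators were unlikely to permit.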

In 1929, public utility stock prices were in excess of three times their book values. Consider, for example, the following measures (Wigmore, 1985, p. 39) for five operating utilities.

Firm                             1929 Price-Earnings Ratio     Market Price/
                                 (at high price for year)      Book Value

Commonwealth Edison              35                            3.31
Consolidated Gas of New York     39                            3.34
Detroit Edison                   35                            3.06
Pacific Gas & Electric           28                            3.30
Public Service of New Jersey     35                            3.14

Sooner or later this price bubble had to break unless the regulatory authorities were to decide to allow the utilities to earn more than a fair return, or an infinite stream of greater fools existed. The decision made by the Massachusetts Public Utility Commission in October 1929 applicable to the Edison Electric Illuminating Company of Boston made clear that neither of these improbable events was going to happen (see below).

The utilities bubble did burst. Between the end of September and the end of November 1929, industrial stocks fell by 48%, railroads by 32% and utilities by 55% — thus utilities dropped the furthest from the highs. A comparison of the beginning of the year prices and the highest prices is also of interest: industrials rose by 20%, railroads by 19%, and utilities by 48%. The growth in value for utilities during the first nine months of 1929 was more than twice that of the other two groups.

The following high and low prices for 1929 for a typical set of public utilities and holding companies illustrate how severely public utility prices were hit by the crash (New York Times, 1 January 1930 quotations.)

Firm                                 1929 High Price   1929 Low Price   Low Price /
                                                                        High Price

American Power & Light               175 3/8           64 1/4           .37
American Superpower                  71 1/8            15               .21
Brooklyn Gas                         248 1/2           99               .44
Buffalo, Niagara & Eastern Power     128               61 1/8           .48
Cities Service                       68 1/8            20               .29
Consolidated Gas Co. of N.Y.         183 1/4           80 1/8           .44
Electric Bond and Share              189               50               .26
Long Island Lighting                 91                40               .44
Niagara Hudson Power                 30 3/4            11 1/4           .37
Transamerica                         67 3/8            20 1/4           .30

Picking on one segment of the market as the cause of a general break in the market is not obviously correct. But the combination of an overpriced utility segment and investment trusts with a portion of the market that had purchased on margin appears to be a viable explanation. In addition, as of September 1, 1929, the utilities industry represented $14.8 billion of value, or 18% of the value of the outstanding shares on the NYSE. Thus, they were a large sector, capable of exerting a powerful influence on the overall market. Moreover, many contemporaries pointed to the utility sector as an important force in triggering the market decline.

The October 19, 1929 issue of the Commercial and Financial Chronicle identified the main depressing influences on the market to be the indications of a recession in steel and the refusal of the Massachusetts Department of Public Utilities to allow Edison Electric Illuminating Company of Boston to split its stock. The explanations offered by the Department — that the stock was not worth its price and the company’s dividend would have to be reduced — made the situation worse.

The Washington Post (October 17, p. 1), in explaining the October 16 market declines (an Associated Press release), reported, “Professional traders also were obviously distressed at the printed remarks regarding inflation of power and light securities by the Massachusetts Public Utility Commission in its recent decision.”

Straws That Broke the Camel’s Back?

Edison Electric of Boston

On August 2, 1929, the New York Times reported that the Directors of the Edison Electric Illuminating Company of Boston had called a meeting of stockholders to obtain authorization for a stock split. The stock went up to a high of $440. Its book value was $164 (the ratio of price to book value was 2.6, which was less than many other utilities).

On Saturday (October 12, p. 27) the Times reported that on Friday the Massachusetts Department of Public Utilities had rejected the stock split. The heading said “Bars Stock Split by Boston Edison. Criticizes Dividend Policy. Holds Rates Should Not Be Raised Until Company Can Reduce Charge for Electricity.” Boston Edison lost 15 points for the day even though the decision was released after the Friday closing. The high for the year was $440 and the stock closed at $360 on Friday.

The Massachusetts Department of Public Utilities (New York Times, October 12, p. 27) did not want to imply to investors that this was the “forerunner of substantial increases in dividends.” They stated that the expectation of increased dividends was not justified, offered “scathing criticisms of the company” (October 16, p. 42) and concluded “the public will take over such utilities as try to gobble up all profits available.”

On October 15, the Boston City Council advised the mayor to initiate legislation for public ownership of Edison, on October 16, the Department announced it would investigate the level of rates being charged by Edison, and on October 19, it set the dates for the inquiry. On Tuesday, October 15 (p. 41), there was a discussion in the Times of the Massachusetts decision in the column “Topic in Wall Street.” It “excited intense interest in public utility circles yesterday and undoubtedly had effect in depressing the issues of this group. The decision is a far-reaching one and Wall Street expressed the greatest interest in what effect it will have, if any, upon commissions in other States.”

Boston Edison had closed at 360 on Friday, October 11, before the announcement was released. It dropped 61 points at its low on Monday (October 14), but closed at 328, a loss of 32 points.

On October 16 (p. 42), the Times reported that Governor Allen of Massachusetts was launching a full investigation of Boston Edison including “dividends, depreciation, and surplus.”

One major factor that can be identified leading to the price break for public utilities was the ruling by the Massachusetts Public Utility Commission. The only specific action was that it refused to permit Edison Electric Illuminating Company of Boston to split its stock. Standard financial theory predicts that the primary effect of a stock split would be to reduce the stock price by 50% while leaving the total value unchanged; thus the denial of the split was not economically significant, and the stock split should have been easy to grant. But the Commission made it clear it had additional messages to communicate. For example, the Financial Times (October 16, 1929, p. 7) reported that the Commission advised the company to “reduce the selling price to the consumer.” Boston was paying $.085 per kilowatt-hour and Cambridge only $.055. There were also rumors of public ownership and a shifting of control. The next day (October 17), the Times reported (p. 3) “The worst pressure was against Public Utility shares” and the headline read “Electric Issue Hard Hit.”

Public Utility Regulation in New York

Massachusetts was not alone in challenging the profit levels of utilities. The Federal Trade Commission, New York City, and New York State were all challenging the status of public utility regulation. New York’s governor, Franklin D. Roosevelt, appointed a committee on October 8 to investigate the regulation of public utilities in the state. The Committee stated, “this inquiry is likely to have far-reaching effects and may lead to similar action in other States.” Both the October 17 and October 19 issues of the Times carried articles regarding the New York investigative committee. Professor Bonbright, a Roosevelt appointee, described the regulatory process as a “vicious system” (October 19, p. 21), which ignored consumers. The Chairman of the Public Service Commission, testifying before the Committee, wanted more control over utility holding companies, especially management fees and other transfers.

The New York State Committee also noted the increasing importance of investment trusts: “mention of the influence of the investment trust on utility securities is too important for this committee to ignore” (New York Times, October 17, p. 18). They conjectured that the trusts had $3.5 billion to invest, and “their influence has become very important” (p. 18).

In New York City, Mayor Jimmy Walker was fighting graft charges with statements that his administration would fight aggressively against rate increases, thereby proving that he had not accepted bribes (New York Times, October 23). It is reasonable to conclude that the October 16 break was related to the news from Massachusetts and New York.

On October 17, the New York Times (p. 18) reported that the Committee on Public Service Securities of the Investment Banking Association warned against “speculative and uninformed buying.” The Committee published a report in which it asked for care in buying shares in utilities.

On Black Thursday, October 24, the market panic began. The market dropped from 305.87 to 272.32 (a 34 point drop, or 9%) and closed at 299.47. The declines were led by the motor stocks and public utilities.

The Public Utility Multipliers and Leverage

Public utilities were a very important segment of the stock market, and even more importantly, any change in public utility stock values resulted in larger changes in equity wealth. In 1929, there were three potentially important multipliers that meant that any change in a public utility’s underlying value would result in a larger value change in the market and in the investor’s value.

Consider the following hypothetical values for a public utility:

Book value per share for a utility:                        $50.00
Market price per share:                                    $162.50
Market price of investment trust holding the stock
  (assuming a 100% premium over market value):             $325.00

If the utility’s $112.50 market price premium over book value were eliminated, its stock would fall to $50; with no premium, the investment trust’s stock would also fall to $50. The combined loss in market value would then be $387.50: the $112.50 drop in the underlying utility stock plus the $275 drop in the investment trust stock. (The $387.50 figure assumes investments in both the firm’s stock and the investment trust.) The public utility holding companies, in fact, were even more vulnerable to a stock price change, since their ratio of price to book value averaged 4.44 (Wigmore, p. 43).

For simplicity, this discussion has assumed the trust held all the holding company stock. The effects shown would be reduced if the trust held only a fraction of the stock. However, this discussion has also assumed that no debt or margin was used to finance the investment. Assume the individual investors invested only $162.50 of their money and borrowed $162.50 to buy the investment trust stock costing $325. If the utility stock went down from $162.50 to $50 and the trust still sold at a 100% premium, the trust would sell at $100 and the investors would have lost 100% of their investment since the investors owe $162.50. The vulnerability of the margin investor buying a trust stock that has invested in a utility is obvious.
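The arithmetic of this margin example can be laid out explicitly. This is a sketch of the hypothetical figures used above, not actual 1929 prices:

```python
# Hypothetical figures from the example in the text (not actual 1929 data).
utility_price = 162.50      # market price of the underlying utility share
premium = 1.00              # trust sells at a 100% premium over its holdings
trust_price = utility_price * (1 + premium)          # $325.00

investor_cash = 162.50      # investor's own money
loan = trust_price - investor_cash                   # $162.50 borrowed on margin

# The utility falls to its $50 book value; the trust keeps its 100% premium.
new_utility_price = 50.00
new_trust_price = new_utility_price * (1 + premium)  # $100.00

# The trust stock is now worth less than the loan: the investor's entire
# $162.50 stake is gone, and the position is under water.
investor_equity = new_trust_price - loan             # -$62.50
```

Even with the trust’s premium intact, the roughly 69% drop in the utility’s price is doubled through the trust and then amplified again by the margin loan.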

These highly levered non-operating utilities offered an opportunity for speculation. The holding company typically owned 100% of the operating companies’ stock and both entities were levered (there could be more than two levels of leverage). There were also holding companies that owned holding companies (e.g., Ebasco). Wigmore (p. 43) lists nine of the largest public utility holding companies. The ratio of the low 1929 price to the high price (average) was 33%. These stocks were even more volatile than the publicly owned utilities.

The amount of leverage (both debt and preferred stock) used in the utility sector may have been enormous, but we cannot tell for certain. Assume that a utility purchases an asset that costs $1,000,000 and that asset is financed with 40% stock ($400,000). A utility holding company owns the utility stock and is also financed with 40% stock ($160,000). A second utility holding company owns the first and it is financed with 40% stock ($64,000). An investment trust owns the second holding company’s stock and is financed with 40% stock ($25,600). An investor buys the investment trust’s common stock using 50% margin and investing $12,800 in the stock. Thus, the $1,000,000 utility asset is financed with $12,800 of equity capital.
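The layering described above can be verified with a few lines of arithmetic (a sketch of the hypothetical capital structure from the text, not actual balance sheets):

```python
# Hypothetical example from the text: a $1,000,000 utility asset held
# through a chain of entities, each financed with 40% common stock.
asset_cost = 1_000_000
equity = asset_cost
for _ in range(4):                 # utility, two holding companies, trust
    equity = equity * 40 // 100    # only the 40% stock slice passes upward
investor_equity = equity * 50 // 100   # trust stock bought on 50% margin
print(investor_equity)                 # 12800
```

Each layer multiplies the equity cushion by 0.4, so four layers plus 50% margin leave $12,800 of investor money supporting the $1,000,000 asset.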

When the large amount of leverage is combined with the inflated prices of the public utility stock, both holding company stocks, and the investment trust the problem is even more dramatic. Continuing the above example, assume the $1,000,000 asset again financed with $600,000 of debt and $400,000 common stock, but the common stock has a $1,200,000 market value. The first utility holding company has $720,000 of debt and $480,000 of common. The second holding company has $288,000 of debt and $192,000 of stock. The investment trust has $115,200 of debt and $76,800 of stock. The investor uses $38,400 of margin debt. The $1,000,000 asset is supporting $1,761,600 of debt. The investor’s $38,400 of equity is very much in jeopardy.
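The total debt supported by the single asset in this inflated-price version can be tallied the same way (again using the hypothetical figures from the text):

```python
# Hypothetical example: the asset costs $1,000,000, but the utility's
# common stock trades at $1,200,000; each layer is 60% debt-financed.
debts = [600_000]             # debt at the operating-utility level
stock_value = 1_200_000       # inflated market value of the utility's common
for _ in range(3):            # two holding companies and the investment trust
    debts.append(stock_value * 60 // 100)
    stock_value = stock_value * 40 // 100
debts.append(stock_value * 50 // 100)       # investor's 50% margin loan
total_debt = sum(debts)                     # 1,761,600
investor_equity = stock_value * 50 // 100   # 38,400
print(total_debt, investor_equity)          # 1761600 38400
```

The layer-by-layer debts ($600,000; $720,000; $288,000; $115,200; $38,400) match the figures in the paragraph above, confirming that $1,761,600 of debt rests on the investor’s $38,400 of equity.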

Conclusions and Lessons

Although no consensus has been reached on the causes of the 1929 stock market crash, the evidence cited above suggests that the fear of speculation helped push the stock market to the brink of collapse. It is possible that Hoover’s aggressive campaign against speculation, combined with the overpriced public utilities (hit by the Massachusetts Public Utility Commission decision and statements) and the vulnerable margin investors, triggered the October selling panic and the consequences that followed.

An important first event may have been Lord Snowden’s reference to the speculative orgy in America. The resulting decline in stock prices weakened margin positions. When several governmental bodies indicated that public utilities in the future were not going to be able to justify their market prices, the decreases in utility stock prices resulted in margin positions being further weakened resulting in general selling. At some stage, the selling panic started and the crash resulted.

What can we learn from the 1929 crash? There are many lessons, but a handful seem to be most applicable to today’s stock market.

  • There is a delicate balance between optimism and pessimism regarding the stock market. Statements and actions by government officials can affect the sensitivity of stock prices to events. Call a market overpriced often enough, and investors may begin to believe it.
  • The fact that stocks can lose 40% of their value in a month and 90% over three years suggests the desirability of diversification (including assets other than stocks). Remember, leveraged investors can lose all of their investment when the market falls 40%.
  • A levered investment portfolio amplifies the swings of the stock market. Some investment securities have leverage built into them (e.g., stocks of highly levered firms, options, and stock index futures).
  • A series of presumably undramatic events may establish a setting for a wide price decline.
  • A segment of the market can experience bad news and a price decline that infects the broader market. In 1929, it seems to have been public utilities. In 2000, high technology firms were candidates.
  • Interpreting events and assigning blame is unreliable if there has not been an adequate passage of time and opportunity for reflection and analysis — and is difficult even with decades of hindsight.
  • It is difficult to predict a major market turn with any degree of reliability. It is impressive that in September 1929, Roger Babson predicted the collapse of the stock market, but he had been predicting a collapse for many years. Also, even Babson recommended diversification and was against complete liquidation of stock investments (Commercial and Financial Chronicle, September 7, 1929, p. 1505).
  • Even a market that is not excessively high can collapse. Both market psychology and the underlying economics are relevant.

References

Barsky, Robert B. and J. Bradford DeLong. “Bull and Bear Markets in the Twentieth Century,” Journal of Economic History 50, no. 2 (1990): 265-281.

Bierman, Harold, Jr. The Great Myths of 1929 and the Lessons to be Learned. Westport, CT: Greenwood Press, 1991.

Bierman, Harold, Jr. The Causes of the 1929 Stock Market Crash. Westport, CT: Greenwood Press, 1998.

Bierman, Harold, Jr. “The Reasons Stocks Crashed in 1929.” Journal of Investing (1999): 11-18.

Bierman, Harold, Jr. “Bad Market Days.” World Economics (2001): 177-191.

Commercial and Financial Chronicle, 1929 issues.

Committee on Banking and Currency. Hearings on Performance of the National and Federal Reserve Banking System. Washington, 1931.

DeLong, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Federal Reserve Bulletin, February, 1929.

Fisher, Irving. The Stock Market Crash and After. New York: Macmillan, 1930.

Galbraith, John K. The Great Crash, 1929. Boston: Houghton Mifflin, 1961.

Hoover, Herbert. The Memoirs of Herbert Hoover. New York: Macmillan, 1952.

Kendrick, John W. Productivity Trends in the United States. Princeton: Princeton University Press, 1961.

Kindleberger, Charles P. Manias, Panics, and Crashes. New York: Basic Books, 1978.

Malkiel, Burton G. A Random Walk Down Wall Street. New York: Norton, 1975 and 1996.

Moggridge, Donald. The Collected Writings of John Maynard Keynes, Volume XX. New York: Macmillan, 1981.

New York Times, 1929 and 1930.

Rappoport, Peter and Eugene N. White, “Was There a Bubble in the 1929 Stock Market?” Journal of Economic History 53, no. 3 (1993): 549-574.

Samuelson, Paul A. “Myths and Realities about the Crash and Depression.” Journal of Portfolio Management (1979): 9.

Senate Committee on Banking and Currency. Stock Exchange Practices. Washington, 1928.

Siegel, Jeremy J. “The Equity Premium: Stock and Bond Returns since 1802.” Financial Analysts Journal 48, no. 1 (1992): 28-46.

Wall Street Journal, October 1929.

Washington Post, October 1929.

Wigmore, Barry A. The Crash and Its Aftermath: A History of Securities Markets in the United States, 1929-1933. Westport, CT: Greenwood Press, 1985.

1 1923-25 average = 100.

2 Based on a price-to-book value ratio of 3.25 (Wigmore, p. 39).

Citation: Bierman, Harold. “The 1929 Stock Market Crash”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/the-1929-stock-market-crash/

A History of the Standard of Living in the United States

Richard H. Steckel, Ohio State University

Methods of Measuring the Standard of Living

During many years of teaching, I have introduced the topic of the standard of living by asking students to pretend that they would be born again to unknown (random) parents in a country they could choose based on three of its characteristics. The list put forward in the classroom invariably includes many of the categories usually suggested by scholars who have studied the standard of living over the centuries: access to material goods and services; health; socio-economic fluidity; education; inequality; the extent of political and religious freedom; and climate. Thus, there is little disagreement among people, whether newcomers or professionals, on the relevant categories of social performance.

Components and Weights

Significant differences of opinion emerge, both among students and research specialists, on the precise measures to be used within each category and on the weights or relative importance that should be attached to each. There are numerous ways to measure health, for example, with some approaches emphasizing length of life while others give high priority to morbidity (illness or disability) or to other aspects of health-related quality of life (e.g., physical fitness). Conceivably one might attempt comparisons using all feasible measures, but this is expensive and time-consuming, and in any event many good measures within categories are often highly correlated.

Weighting the various components is the most contentious issue in any attempt to summarize the standard of living, or otherwise compress diverse measures into a single number. Some people give high priority to income, for example, while others claim that health is most important. Economists and other social scientists recognize that tastes or preferences are individualistic and diverse, and following this logic to the extreme, one might argue that all interpersonal comparisons are invalid. On the other hand, there are general tendencies in preferences. Every class that I have taught has emphasized the importance of income and health, and for this reason I discuss historical evidence on these measures.

Material Aspects of the Standard of Living

Gross Domestic Product

The most widely used measure of the material standard of living is Gross Domestic Product (GDP) per capita, adjusted for changes in the price level (inflation or deflation). This measure, real GDP per capita, reflects only economic activities that flow through markets, omitting productive endeavors unrecorded in market exchanges, such as preparing meals at home or maintenance done by the homeowner. It ignores the work effort required to produce income and does not consider conditions surrounding the work environment, which might affect health and safety. Crime, pollution, and congestion, which many people consider important to their quality of life, are also excluded from GDP. Moreover, technological change, relative prices, and tastes affect the course of GDP and the products and services that it includes, which creates what economists call an “index number” problem that is not readily solvable. Nevertheless, most economists believe that real GDP per capita does summarize or otherwise quantify important aspects of the average availability of goods and services.

Time Trends in Real GDP per Capita

Table 1 shows the course of the material standard of living in the United States from 1820 to 1998. Over this period of 178 years real GDP per capita increased 21.7 fold, or an average of 1.73 percent per year. Although the evidence available to estimate GDP directly is meager, this rate of increase was probably many times higher than experienced during the colonial period. This conclusion is justified by considering the implications of extrapolating the level observed in 1820 ($1,257) backward in time at the growth rate measured since 1820 (1.73 percent). Under this supposition, real per capita GDP would have doubled every forty years (halved every forty years going backward in time) and so by the mid 1700s there would have been insufficient income to support life. Because the cheapest diet able to sustain good health would have cost nearly $500 per year, the tentative assumption of modern economic growth contradicts what actually happened. Moreover, historical evidence suggests that important ingredients of modern economic growth, such as technological change and human and physical capital, accumulated relatively slowly during the colonial period.
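The backward-extrapolation argument in this paragraph can be verified with a short calculation, a sketch using only the figures quoted above (the roughly $500 minimum diet cost is the author's estimate):

```python
import math

gdp_1820, gdp_1998, years = 1257, 27331, 1998 - 1820

# Continuously compounded growth rate implied by the two endpoints.
growth = math.log(gdp_1998 / gdp_1820) / years
print(round(100 * growth, 2))        # 1.73 percent per year, as in the text

# Doubling time (or halving time, going backward) at this rate.
print(round(math.log(2) / growth))   # about 40 years

# Extrapolate the 1820 level back to 1750 at the same rate.
gdp_1750 = gdp_1820 * math.exp(-growth * (1820 - 1750))
print(round(gdp_1750))               # roughly $374, below the ~$500 subsistence diet
```

Since the extrapolated mid-1700s level falls below the cost of the cheapest healthy diet, growth before 1820 must have been much slower than 1.73 percent per year.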

Table 1: GDP per Capita in the United States

Year GDP per capita (a) Annual growth rate from previous period
1820 1,257
1870 2,445 1.34
1913 5,301 1.82
1950 9,561 1.61
1973 16,689 2.45
1990 23,214 1.94
1998 27,331 2.04

(a) Measured in 1990 international dollars.

Source: Maddison (2001), Tables A-1c and A-1d.

Cycles in Real GDP per Capita

Although real GDP per capita is given for only seven dates in Table 1, it is apparent that economic progress has been uneven over time. If annual or quarterly data were given, they would show that business cycles have been a major feature of the economic landscape since industrialization began in the 1820s. By far the worst downturn in U.S. history occurred during the Great Depression of the 1930s, when real per capita GDP declined by approximately one-third and the unemployment rate reached 25 percent.

Regional Differences

The aggregate numbers also disguise regional differences in the standard of living. In 1840 personal income per capita was twice as high in the Northeast as in the North Central States. Regional divergence increased after the Civil War when the South Atlantic became the nation’s poorest region, attaining a level only one-third of that in the Northeast. Regional convergence occurred in the twentieth century and industrialization in the South significantly improved the region’s economic standing after World War II.

Health and the Standard of Living

Life Expectancy

Two measures of health are widely used in economic history: life expectancy at birth (or average length of life) and average height, which measures nutritional conditions during the growing years. Table 2 shows that life expectancy approximately doubled over the past century and a half, reaching 76.7 years in 1998. Just as depressions and recessions have adversely affected the material standard of living, epidemics have been a major cause of sudden declines in health in the past. Fluctuations during the nineteenth century are evident from the table, but as a rule growth rates in health have been considerably less volatile than those for GDP, particularly during the twentieth century.

Table 2: Life Expectancy at Birth in the United States

Year Life Expectancy
1850 38.3
1860 41.8
1870 44.0
1880 39.4
1890 45.2
1900 47.8
1910 53.1
1920 54.1
1930 59.7
1940 62.9
1950 68.2
1960 69.7
1970 70.8
1980 73.7
1990 75.4
1998 76.7

Source: Haines (2002)

Childhood mortality greatly affects life expectancy, which was low in the mid-1800s largely because mortality rates were very high for this age group. For example, roughly one child in five born alive in 1850 did not survive to age one, but today the infant mortality rate is under one percent. The past century and a half witnessed a significant shift in deaths from early childhood to old age. At the same time, the major causes of death have shifted from infectious diseases originating with germs or microorganisms to degenerative processes that are affected by life-style choices such as diet, smoking, and exercise.

The largest gains were concentrated in the first half of the twentieth century, when life expectancy increased from 47.8 years in 1900 to 68.2 years in 1950. Factors behind the growing longevity include the ascent of the germ theory of disease, programs of public health and personal hygiene, better medical technology, higher incomes, better diets, more education, and the emergence of health insurance.

Explanations of Increases in Life Expectancy

Numerous important medical developments contributed to improving health. The research of Pasteur and Koch was particularly influential in leading to acceptance of the germ theory in the late 1800s. Prior to their work, many diseases were thought to have arisen from miasmas or vapors created by rotting vegetation. Thus, swamps were accurately viewed as unhealthy, but not because they were home to mosquitoes and malaria. The germ theory gave public health measures a sound scientific basis, and shortly thereafter cities began cost-effective measures to remove garbage, purify water supplies, and process sewage. The notion that “cleanliness is next to Godliness” also emerged in the home, where bathing and the washing of clothes, dishes, and floors became routine.

The discovery of Salvarsan in 1910 was the first use of an antibiotic (for syphilis), which meant that the drug was effective in altering the course of a disease. This was an important medical event, but broad-spectrum antibiotics were not available until the middle of the century. The most famous of these early drugs was penicillin, which was not manufactured in large quantities until the 1940s. Much of the gain in life expectancy was attained before chemotherapy and a host of other medical technologies were widely available. A cornerstone of improving health from the late 1800s to the middle of the twentieth century was therefore prevention of disease by reducing exposure to pathogens. Also important were improvements in immune systems created by better diets and by vaccination against diseases such as smallpox and diphtheria.

Heights

In the past quarter century, historians have increasingly used average heights to assess health aspects of the standard of living. Average height is a good proxy for the nutritional status of a population because height at a particular age reflects an individual’s history of net nutrition, or diet minus claims on the diet made by work (or physical activity) and disease. The growth of poorly nourished children may cease, and repeated bouts of biological stress — whether from food deprivation, hard work, or disease — often lead to stunting, or a reduction in adult height. The average heights of children and of adults in countries around the world are highly correlated with their life expectancy at birth and with the log of per capita GDP in the country where they live.

This interpretation of average height has led to its use in studying the health of slaves, health inequality, living standards during industrialization, and trends in mortality. The first important results in the “new anthropometric history” dealt with the nutrition and health of American slaves as determined from stature recorded for identification purposes on slave manifests required in the coastwise slave trade. The subject of slave health has been a contentious issue among historians, in part because vital statistics and nutrition information were never systematically collected for slaves (or for the vast majority of the American population in the mid-nineteenth century, for that matter). Yet the height data showed that slave children were astonishingly small and malnourished, while working slaves were remarkably well fed. Slaves grew rapidly as adolescents and were reasonably well off in nutritional aspects of health.

Time Trends in Average Height

Table 3 shows the time pattern in height of native-born American men obtained in historical periods from military muster rolls, and for men and women in recent decades from the National Health and Nutrition Examination Surveys. This historical trend is notable for the tall stature during the colonial period, the mid-nineteenth century decline, and the surge in heights of the past century. Comparisons of average heights from military organizations in Europe show that Americans were taller by two to three inches. Behind this achievement were a relatively good diet, little exposure to epidemic disease, and relative equality in the distribution of wealth. Americans could choose their foods from the best of European and Western Hemisphere plants and animals, and this dietary diversity combined with favorable weather meant that Americans never had to contend with harvest failures. Thus, even the poor were reasonably well fed in colonial America.

Table 3: Average Height of Native-Born American Men and Women by Year of Birth
(women’s heights available only for birth cohorts from 1930 onward)

Year Men (cm) Women (cm) Men (inches) Women (inches)
1710 171.5 67.5
1720 171.8 67.6
1730 172.1 67.8
1740 172.1 67.8
1750 172.2 67.8
1760 172.3 67.8
1770 172.8 68.0
1780 173.2 68.2
1790 172.9 68.1
1800 172.9 68.1
1810 173.0 68.1
1820 172.9 68.1
1830 173.5 68.3
1840 172.2 67.8
1850 171.1 67.4
1860 170.6 67.2
1870 171.2 67.4
1880 169.5 66.7
1890 169.1 66.6
1900 170.0 66.9
1910 172.1 67.8
1920 173.1 68.1
1930 175.8 162.6 69.2 64.0
1940 176.7 163.1 69.6 64.2
1950 177.3 163.1 69.8 64.2
1960 177.9 164.2 70.0 64.6
1970 177.4 163.6 69.8 64.4

Source: Steckel (2002) and sources therein.

Explaining Height Cycles

Loss of stature began in the second quarter of the nineteenth century when the transportation revolution of canals, steamboats and railways brought people into greater contact with diseases. The rise of public schools meant that children were newly exposed to major diseases such as whooping cough, diphtheria, and scarlet fever. Food prices also rose during the 1830s and growing inequality in the distribution of income or wealth accompanied industrialization. Business depressions, which were most hazardous for the health of those who were already poor, also emerged with industrialization. The Civil War of the 1860s and its troop movements further spread disease and disrupted food production and distribution. A large volume of immigration also brought new varieties of disease to the United States at a time when urbanization brought a growing proportion of the population into closer contact with contagious diseases. Estimates of life expectancy among adults at ages 20, 30 and 50, which were assembled from family histories, also declined in the middle of the nineteenth century.

Rapid Increases in Heights in the First Half of the Twentieth Century

In the twentieth century, heights grew most rapidly for those born between 1910 and 1950, an era when public health and personal hygiene measures took vigorous hold, incomes rose rapidly and there was reduced congestion in housing. The latter part of the era also witnessed a larger share of income or wealth going to the lower portion of the distribution, implying that the incomes of the less well-off were rising relatively rapidly. Note that most of the rise in heights occurred before modern antibiotics were available, which means that disease prevention, rather than the ability to alter a disease’s course after onset, was the most important basis of improving health. The growing control that humans have exercised over their environment, particularly increased food supply and reduced exposure to disease, may be leading to biological (but not genetic) evolution of humans with more durable vital organ systems, larger body size, and later onset of chronic diseases.

Recent Stagnation

Between the middle of the twentieth century and the present, however, the average heights of American men have stagnated, increasing by only a small fraction of an inch over the past half century. Table 3 refers to the native born, so recent increases in immigration cannot account for the stagnation. In the absence of other information, one might be tempted to suppose that environmental conditions for growth are so good that most Americans have simply reached their genetic potential. Unlike in the United States, however, heights and life expectancy have continued to grow in Europe, which has the same genetic stock from which most Americans descend. By the 1970s several American health indicators had fallen behind those in Norway, Sweden, the Netherlands, and Denmark. While American heights were essentially flat after the 1970s, heights continued to grow significantly in Europe. Dutch men are now the tallest, averaging six feet, about two inches more than American men. Lagging heights lead to questions about the adequacy of health care and life-style choices in America. As discussed below, it is doubtful that lack of resource commitment to health care is the problem, because America invests far more than the Netherlands. Greater inequality and less access to health care could be important factors in the difference. But access to health care, whether limited by low income or by lack of insurance coverage, may not be the only issue; health care must also be used regularly and wisely. In this regard, Dutch mothers are known for regular pre- and post-natal checkups, which are important for early childhood health.

Note that significant differences in health and the quality of life follow from these height patterns. The comparisons are not part of an odd contest that emphasizes height, nor is big per se assumed to be beautiful. Instead, we know that on average, stunted growth has functional implications for longevity, cognitive development, and work capacity. Children who fail to grow adequately are often sick, suffer learning impairments and have a lower quality of life. Growth failure in childhood has a long reach into adulthood because individuals whose growth has been stunted are at greater risk of death from heart disease, diabetes, and some types of cancer. Therefore it is important to know why Americans are falling behind.

International Comparisons

Per capita GDP

Table 4 places American economic performance in perspective relative to other countries. In 1820 the United States was fifth in world rankings, falling roughly thirty percent below the leaders (United Kingdom and the Netherlands), but still two-to-three times better off than the poorest sections of the globe. It is notable that in 1820 the richest country (the Netherlands at $1,821) was approximately 4.4 times better off than the poorest (Africa at $418) but by 1950 the ratio of richest-to-poorest had widened to 21.8 ($9,561 in the United States versus $439 in China), which is roughly the level it is today (in 1998, it was $27,331 in the United States versus $1,368 in Africa). These calculations understate the growing disparity in the material standard of living because several African countries today fall significantly below the average, whereas it is unlikely that they did so in 1820 because GDP for the continent as a whole was close to the level of subsistence.

Table 4: GDP per Capita by Country and Year (1990 International $)

Country 1820 1870 1913 1950 1973 1998 Ratio 1998 to 1820
Austria 1,218 1,863 3,465 3,706 11,235 18,905 15.5
Belgium 1,319 2,697 4,220 5,462 12,170 19,442 14.7
Denmark 1,274 2,003 3,912 6,946 13,945 22,123 17.4
Finland 781 1,140 2,111 4,253 11,085 18,324 23.5
France 1,230 1,876 3,485 5,270 13,123 19,558 15.9
Germany 1,058 1,821 3,648 3,881 11,966 17,799 16.8
Italy 1,117 1,499 2,564 3,502 10,643 17,759 15.9
Netherlands 1,821 2,753 4,049 5,996 13,082 20,224 11.1
Norway 1,104 1,432 2,501 5,463 11,246 23,660 21.4
Sweden 1,198 1,664 3,096 6,738 13,493 18,685 15.6
Switzerland 1,280 2,202 4,266 9,064 18,204 21,367 16.7
United Kingdom 1,707 3,191 4,921 6,907 12,022 18,714 11.0
Portugal 963 997 1,244 2,069 7,343 12,929 13.4
Spain 1,063 1,376 2,255 2,397 8,739 14,227 13.4
United States 1,257 2,445 5,301 9,561 16,689 27,331 21.7
Mexico 759 674 1,732 2,365 4,845 6,655 8.8
Japan 669 737 1,387 1,926 11,439 20,413 30.5
China 600 530 552 439 839 3,117 5.2
India 533 533 673 619 853 1,746 3.3
Africa 418 444 585 852 1,365 1,368 3.3
World 667 867 1,510 2,114 4,104 5,709 8.6
Ratio of richest to poorest 4.4 7.2 8.9 20.6 21.7 20.0

Source: Maddison (2001), Table B-21.

It is clear that the poorer countries are better off today than they were in 1820 (3.3-fold improvements in both Africa and India). But the countries that are now rich grew at a much faster rate. The last column of Table 4 shows that Japan realized the most spectacular gain, climbing from approximately the world average in 1820 to the fifth richest today, with an increase of over thirty-fold in real per capita GDP. All countries that are rich today had rapid increases in their material standard of living, realizing more than ten-fold increases since 1820. The underlying reasons for this diversity of economic success are a central question in the field of economic history.

Life Expectancy

Table 5 shows that disparities in life expectancy have been much less than those in per capita GDP. In 1820 all countries were bunched in the range of 21 to 41 years, with Germany at the top and India at the bottom, giving a ratio of less than 2 to 1. It is doubtful that any country or region has had a life expectancy below 20 years for long periods of time because death rates would have exceeded any plausible upper limit for birth rates, leading to population implosion. The twentieth century witnessed a compression in life expectancies across countries, with the ratio of levels in 1999 being 1.56 (81 in Japan versus 52 in Africa). Japan has also been a spectacular performer in health, increasing life expectancy from 34 years in 1820 to 81 years in 1999. Among poor unhealthy countries, health aspects of the standard of living have improved more rapidly than the material standard of living relative to the world average. Because many public health measures are cheap and effective, it has been easier to extend life than it has been to promote material prosperity, which has numerous complicated causes.

Table 5: Life Expectancy at Birth by Country and Year

Country 1820 1900 1950 1999
France 37 47 65 78
Germany 41 47 67 77
Italy 30 43 66 78
Netherlands 32 52 72 78
Spain 28 35 62 78
Sweden 39 56 70 79
United Kingdom 40 50 69 77
United States 39 47 68 77
Japan 34 44 61 81
Russia 28 32 65 67
Brazil 27 36 45 67
Mexico n.a. 33 50 72
China n.a. 24 41 71
India 21 24 32 60
Africa 23 24 38 52
World 26 31 49 66

n.a.: not available.

Source: Maddison (2001), Table 1-5a.

Height Comparisons

Figure 1 compares stature in the United States and the United Kingdom. Americans were very tall by global standards in the early nineteenth century as a result of their rich and varied diets, low population density, and relative equality of wealth. Unlike other countries that have been studied (France, the Netherlands, Sweden, Germany, Japan and Australia), both the U.S. and the U.K. suffered significant height declines during industrialization (as defined primarily by the achievement of modern economic growth) in the nineteenth century. (Note, however, that the amount and timing of the height decline in the U.K. has been the subject of a lively debate in the Economic History Review involving Roderick Floud, Kenneth Wachter and John Komlos; only the Floud-Wachter figures are given here.)

Figure 1 source: Steckel (2002, Figure 12) and Floud, Wachter and Gregory (1990, Table 4.8).

One may speculate that the timing of the declines shown in Figure 1 is more coincidental than emblematic of similar causal factors operating in the two countries. While it is possible that growing trade and commerce spread disease, as in the United States, it is more likely that a major culprit in the U.K. was rapid urbanization and an associated increase in exposure to disease. This conclusion is reached by noting that urban-born men were substantially shorter than the rural-born, and that between the periods 1800-1830 and 1830-1870 the share of the British population living in urban areas leaped from 38.7 to 54.1 percent.

References

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert William Fogel and Stanley L. Engerman. New York: Harper and Row, 1971.

Engerman, Stanley L. “The Standard of Living Debate in International Perspective: Measures and Indicators.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud. Chicago: University of Chicago Press, 1997.

Floud, Roderick, Kenneth W. Wachter and Annabel S. Gregory. Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980. Cambridge: Cambridge University Press, 1990.

Haines, Michael. “Vital Statistics.” In Historical Statistics of the United States: Millennial Edition, edited by Susan Carter, Scott Gartner, Michael Haines, Alan Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press, forthcoming, 2002.

Komlos, John. “Shrinking in a Growing Economy? The Mystery of Physical Stature during the Industrial Revolution.” Journal of Economic History 58, no. 3 (1998): 779-802.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Meeker, Edward. “Medicine and Public Health.” In Encyclopedia of American Economic History, edited by Glenn Porter. New York: Scribner, 1980.

Pope, Clayne L. “Adult Mortality in America before 1900: A View from Family Histories.” In Strategic Factors in Nineteenth-Century American Economic History: A Volume to Honor Robert W. Fogel, edited by Claudia Goldin and Hugh Rockoff. Chicago: University of Chicago Press, 1992.

Porter, Roy, editor. The Cambridge Illustrated History of Medicine. Cambridge: Cambridge University Press, 1996.

Steckel, Richard H. “Health, Nutrition and Physical Well-Being.” In Historical Statistics of the United States: Millennial Edition, edited by Susan Carter, Scott Gartner, Michael Haines, Alan Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press, forthcoming, 2002.

Steckel, Richard H. “Industrialization and Health in Historical Perspective.” In Poverty, Inequality and Health, edited by David Leon and Gill Walt. Oxford: Oxford University Press, 2000.

Steckel, Richard H. “Strategic Ideas in the Rise of the New Anthropometric History and Their Implications for Interdisciplinary Research.” Journal of Economic History 58, no. 3 (1998): 803-21.

Steckel, Richard H. “Stature and the Standard of Living.” Journal of Economic Literature 33, no. 4 (1995): 1903-1940.

Steckel, Richard H. “A Peculiar Population: The Nutrition, Health, and Mortality of American Slaves from Childhood to Maturity.” Journal of Economic History 46, no. 3 (1986): 721-41.

Steckel, Richard H. and Roderick Floud, editors. Health and Welfare during Industrialization. Chicago: University of Chicago Press, 1997.

Citation: Steckel, Richard. “A History of the Standard of Living in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. July 21, 2002. URL http://eh.net/encyclopedia/a-history-of-the-standard-of-living-in-the-united-states/

Slavery in the United States

Jenny Bourne, Carleton College

Slavery is fundamentally an economic phenomenon. Throughout history, slavery has existed where it has been economically worthwhile to those in power. The principal example in modern times is the U.S. South. Nearly 4 million slaves with a market value estimated to be between $3.1 and $3.6 billion lived in the U.S. just before the Civil War. Masters enjoyed rates of return on slaves comparable to those on other assets; cotton consumers, insurance companies, and industrial enterprises benefited from slavery as well. Such valuable property required rules to protect it, and the institutional practices surrounding slavery display a sophistication that rivals modern-day law and business.

THE SPREAD OF SLAVERY IN THE U.S.

Not long after Columbus set sail for the New World, the French and Spanish brought slaves with them on various expeditions. Slaves accompanied Ponce de Leon to Florida in 1513, for instance. But a far greater proportion of slaves arrived in chains in crowded, sweltering cargo holds. The first dark-skinned slaves in what was to become British North America arrived in Virginia — perhaps stopping first in Spanish lands — in 1619 aboard a Dutch vessel. From 1500 to 1900, approximately 12 million Africans were forced from their homes to go westward, with about 10 million of them completing the journey. Yet very few ended up in the British colonies and young American republic. By 1808, when the trans-Atlantic slave trade to the U.S. officially ended, only about 6 percent of African slaves landing in the New World had come to North America.

Slavery in the North

Colonial slavery had a slow start, particularly in the North. The proportion there never got much above 5 percent of the total population. Scholars have speculated as to why, without coming to a definite conclusion. Some surmise that indentured servants were fundamentally better suited to the Northern climate, crops, and tasks at hand; some claim that anti-slavery sentiment provided the explanation. At the time of the American Revolution, fewer than 10 percent of the half million slaves in the thirteen colonies resided in the North, working primarily in agriculture. New York had the greatest number, with just over 20,000. New Jersey had close to 12,000 slaves. Vermont was the first Northern region to abolish slavery when it became an independent republic in 1777. Most of the original Northern colonies implemented a process of gradual emancipation in the late eighteenth and early nineteenth centuries, requiring the children of slave mothers to remain in servitude for a set period, typically 28 years. Other regions above the Mason-Dixon line ended slavery upon statehood early in the nineteenth century — Ohio in 1803 and Indiana in 1816, for instance.

TABLE 1
Population of the Original Thirteen Colonies, selected years by type

State             1750 White  1750 Black  1790 White  1790 Free NW  1790 Slave  1810 White  1810 Free NW  1810 Slave  1860 White  1860 Free NW  1860 Slave
Connecticut          108,270       3,010     232,236         2,771       2,648     255,179         6,453         310     451,504         8,643           -
Delaware              27,208       1,496      46,310         3,899       8,887      55,361        13,136       4,177      90,589        19,829       1,798
Georgia                4,200       1,000      52,886           398      29,264     145,414         1,801     105,218     591,550         3,538     462,198
Maryland              97,623      43,450     208,649         8,043     103,036     235,117        33,927     111,502     515,918        83,942      87,189
Massachusetts        183,925       4,075     373,187         5,369           -     465,303         6,737           -   1,221,432         9,634           -
New Hampshire         26,955         550     141,112           630         157     182,690           970           -     325,579           494           -
New Jersey            66,039       5,354     169,954         2,762      11,423     226,868         7,843      10,851     646,699        25,318           -
New York              65,682      11,014     314,366         4,682      21,193     918,699        25,333      15,017   3,831,590        49,145           -
North Carolina        53,184      19,800     289,181         5,041     100,783     376,410        10,266     168,824     629,942        31,621     331,059
Pennsylvania         116,794       2,872     317,479         6,531       3,707     786,804        22,492         795   2,849,259        56,956           -
Rhode Island          29,879       3,347      64,670         3,484         958      73,214         3,609         108     170,649         3,971           -
South Carolina        25,000      39,000     140,178         1,801     107,094     214,196         4,554     196,365     291,300        10,002     402,406
Virginia             129,581     101,452     442,117        12,866     292,627     551,534        30,570     392,518   1,047,299        58,154     490,865
United States        934,340     236,420   2,792,325        58,277     681,777   4,486,789       167,691   1,005,685  12,663,310       361,247   1,775,515

Note: "Free NW" denotes the free nonwhite population. A dash indicates none reported.

Source: Historical Statistics of the U.S. (1970), Franklin (1988).

Slavery in the South

Throughout colonial and antebellum history, U.S. slaves lived primarily in the South. Slaves comprised less than a tenth of the total Southern population in 1680 but grew to a third by 1790. At that date, 293,000 slaves lived in Virginia alone, making up 42 percent of all slaves in the U.S. at the time. South Carolina, North Carolina, and Maryland each had over 100,000 slaves. After the American Revolution, the Southern slave population exploded, reaching about 1.1 million in 1810 and over 3.9 million in 1860.

TABLE 2
Population of the South 1790-1860 by type

Year White Free Nonwhite Slave
1790 1,240,454 32,523 654,121
1800 1,691,892 61,575 851,532
1810 2,118,144 97,284 1,103,700
1820 2,867,454 130,487 1,509,904
1830 3,614,600 175,074 1,983,860
1840 4,601,873 207,214 2,481,390
1850 6,184,477 235,821 3,200,364
1860 8,036,700 253,082 3,950,511

Source: Historical Statistics of the U.S. (1970).

Slave Ownership Patterns

Despite their numbers, slaves typically comprised a minority of the local population. Only in antebellum South Carolina and Mississippi did slaves outnumber free persons. Most Southerners owned no slaves and most slaves lived in small groups rather than on large plantations. Less than one-quarter of white Southerners held slaves, with half of these holding fewer than five and fewer than 1 percent owning more than one hundred. In 1860, the average number of slaves residing together was about ten.

TABLE 3
Slaves as a Percent of the Total Population
selected years, by Southern state

State            1750   1790   1810   1860
Alabama             -      -      -  45.12
Arkansas            -      -      -  25.52
Delaware         5.21  15.04   5.75   1.60
Florida             -      -      -  43.97
Georgia         19.23  35.45  41.68  43.72
Kentucky            -  16.87  19.82  19.51
Louisiana           -      -      -  46.85
Maryland        30.80  32.23  29.30  12.69
Mississippi         -      -      -  55.18
Missouri            -      -      -   9.72
North Carolina  27.13  25.51  30.39  33.35
South Carolina  60.94  43.00  47.30  57.18
Tennessee           -      -  17.02  24.84
Texas               -      -      -  30.22
Virginia        43.91  39.14  40.27  30.75
Overall         37.97  33.95  33.25  32.27

Note: The 1750 figures are the black share of the total population; the 1790, 1810, and 1860 figures are the slave share. A dash indicates no data, generally because the area was not yet a state.

Sources: Historical Statistics of the United States (1970), Franklin (1988).

TABLE 4
Holdings of Southern Slaveowners
by states, 1860

State   Total slaveholders   Held 1 slave   Held 2 slaves   Held 3 slaves   Held 4 slaves   Held 5 slaves   Held 1-5 slaves   Held 100-499 slaves   Held 500+ slaves
AL 33,730 5,607 3,663 2,805 2,329 1,986 16,390 344 -
AR 11,481 2,339 1,503 1,070 894 730 6,536 65 1
DE 587 237 114 74 51 34 510 - -
FL 5,152 863 568 437 365 285 2,518 47 -
GA 41,084 6,713 4,335 3,482 2,984 2,543 20,057 211 8
KY 38,645 9,306 5,430 4,009 3,281 2,694 24,720 7 -
LA 22,033 4,092 2,573 2,034 1,536 1,310 11,545 543 4
MD 13,783 4,119 1,952 1,279 1,023 815 9,188 16 -
MS 30,943 4,856 3,201 2,503 2,129 1,809 14,498 315 1
MO 24,320 6,893 3,754 2,773 2,243 1,686 17,349 4 -
NC 34,658 6,440 4,017 3,068 2,546 2,245 18,316 133 -
SC 26,701 3,763 2,533 1,990 1,731 1,541 11,558 441 8
TN 36,844 7,820 4,738 3,609 3,012 2,536 21,715 47 -
TX 21,878 4,593 2,874 2,093 1,782 1,439 12,781 54 -
VA 52,128 11,085 5,989 4,474 3,807 3,233 28,588 114 -
TOTAL 393,967 78,726 47,244 35,700 29,713 24,886 216,269 2,341 22

Source: Historical Statistics of the United States (1970).

Rapid Natural Increase in U.S. Slave Population

How did the U.S. slave population increase nearly fourfold between 1810 and 1860, given the demise of the trans-Atlantic trade? The answer is an exceptional rate of natural increase. Unlike elsewhere in the New World, the South did not require constant infusions of immigrant slaves to keep its slave population intact. In fact, by 1825, 36 percent of the slaves in the Western hemisphere lived in the U.S. This was partly due to higher birth rates, which were in turn due to a more equal ratio of female to male slaves in the U.S. relative to other parts of the Americas. Lower mortality rates also figured prominently. Climate was one cause; crops were another. U.S. slaves planted and harvested first tobacco and then, after Eli Whitney’s invention of the cotton gin in 1793, cotton. This work was less grueling than the tasks on the sugar plantations of the West Indies and in the mines and fields of South America. Southern slaves worked in industry, did domestic work, and grew a variety of other food crops as well, mostly under less abusive conditions than their counterparts elsewhere. For example, the South grew half to three-quarters of the corn crop harvested between 1840 and 1860.

INSTITUTIONAL FRAMEWORK

Central to the success of slavery are political and legal institutions that validate the ownership of other persons. A Kentucky court acknowledged the dual character of slaves in Turner v. Johnson (1838): “[S]laves are property and must, under our present institutions, be treated as such. But they are human beings, with like passions, sympathies, and affections with ourselves.” To construct slave law, lawmakers borrowed from laws concerning personal property and animals, as well as from rules regarding servants, employees, and free persons. The outcome was a set of doctrines that supported the Southern way of life.

The English common law of property formed a foundation for U.S. slave law. The French and Spanish influence in Louisiana — and, to a lesser extent, Texas — meant that Roman (or civil) law offered building blocks there as well. Despite certain formal distinctions, slave law as practiced differed little from common-law to civil-law states. Southern state law governed roughly five areas: slave status, masters’ treatment of slaves, interactions between slaveowners and contractual partners, rights and duties of noncontractual parties toward others’ slaves, and slave crimes. Federal law and laws in various Northern states also dealt with matters of interstate commerce, travel, and fugitive slaves.

Interestingly enough, just as slave law combined elements of other sorts of law, so too did it yield principles that eventually applied elsewhere. Lawmakers had to consider the intelligence and volition of slaves as they crafted laws to preserve property rights. Slavery therefore created legal rules that could potentially apply to free persons as well as to those in bondage. Many legal principles we now consider standard in fact had their origins in slave law.

Legal Status Of Slaves And Blacks

By the end of the seventeenth century, the status of blacks — slave or free — tended to follow the status of their mothers. Generally, “white” persons were not slaves but Native and African Americans could be. One odd case was the offspring of a free white woman and a slave: the law often bound these people to servitude for thirty-one years. Conversion to Christianity could set a slave free in the early colonial period, but this practice quickly disappeared.

Skin Color and Status

Southern law largely identified skin color with status. Those who appeared African or of African descent were generally presumed to be slaves. Virginia was the only state to pass a statute that actually classified people by race: essentially, it considered those with one-quarter or more black ancestry as black. Other states used informal tests in addition to visual inspection: one-quarter, one-eighth, or one-sixteenth black ancestry might categorize a person as black.

Even if blacks proved their freedom, they enjoyed little higher status than slaves except, to some extent, in Louisiana. Many Southern states forbade free persons of color from becoming preachers, selling certain goods, tending bar, staying out past a certain time of night, or owning dogs, among other things. Federal law denied black persons citizenship under the Dred Scott decision (1857). In this case, Chief Justice Roger Taney also determined that visiting a free state did not free a slave who returned to a slave state, nor did traveling to a free territory ensure emancipation.

Rights And Responsibilities Of Slave Masters

Southern masters enjoyed great freedom in their dealings with slaves. North Carolina Chief Justice Thomas Ruffin expressed the sentiments of many Southerners when he wrote in State v. Mann (1829): “The power of the master must be absolute, to render the submission of the slave perfect.” By the nineteenth century, household heads had far more physical power over their slaves than over their employees. In part, the differences in allowable punishment had to do with the substitutability of other means of persuasion. Instead of physical coercion, antebellum employers could legally withhold all wages if a worker did not complete all agreed-upon services. No such alternate mechanism existed for slaves.

Despite the respect Southerners held for the power of masters, the law — particularly in the thirty years before the Civil War — limited owners somewhat. Southerners feared that unchecked slave abuse could lead to theft, public beatings, and insurrection. People also thought that hungry slaves would steal produce and livestock. But masters who treated slaves too well, or gave them freedom, caused consternation as well. The preamble to Delaware’s Act of 1767 conveys one prevalent view: “[I]t is found by experience, that freed [N]egroes and mulattoes are idle and slothful, and often prove burdensome to the neighborhood wherein they live, and are of evil examples to slaves.” Accordingly, masters sometimes fell afoul of the criminal law not only when they brutalized or neglected their slaves, but also when they indulged or manumitted slaves. Still, prosecuting masters was extremely difficult, because often the only witnesses were slaves or wives, neither of whom could testify against male heads of household.

Law of Manumission

One area that changed dramatically over time was the law of manumission. The South initially allowed masters to set their slaves free because this was an inherent right of property ownership. During the Revolutionary period, some Southern leaders also believed that manumission was consistent with the ideology of the new nation. Manumission occurred only rarely in colonial times, increased dramatically during the Revolution, then diminished after the early 1800s. By the 1830s, most Southern states had begun to limit manumission. Allowing masters to free their slaves at will created incentives to emancipate only unproductive slaves. Consequently, the community at large bore the costs of young, old, and disabled former slaves. The public might also run the risk of having rebellious former slaves in its midst.

Antebellum U.S. Southern states worried considerably about these problems and eventually enacted restrictions on the age at which slaves could be free, the number freed by any one master, and the number manumitted by last will. Some required former masters to file indemnifying bonds with state treasurers so governments would not have to support indigent former slaves. Some instead required former owners to contribute to ex-slaves’ upkeep. Many states limited manumissions to slaves of a certain age who were capable of earning a living. A few states made masters emancipate their slaves out of state or encouraged slaveowners to bequeath slaves to the Colonization Society, which would then send the freed slaves to Liberia. Former slaves sometimes paid fees on the way out of town to make up for lost property tax revenue; they often encountered hostility and residential fees on the other end as well. By 1860, most Southern states had banned in-state and post-mortem manumissions, and some had enacted procedures by which free blacks could voluntarily become slaves.

Other Restrictions

In addition to constraints on manumission, laws restricted other actions of masters and, by extension, slaves. Masters generally had to maintain a certain ratio of white to black residents upon plantations. Some laws barred slaves from owning musical instruments or bearing firearms. All states refused to allow slaves to make contracts or testify in court against whites. About half of Southern states prohibited masters from teaching slaves to read and write although some of these permitted slaves to learn rudimentary mathematics. Masters could use slaves for some tasks and responsibilities, but they typically could not order slaves to compel payment, beat white men, or sample cotton. Nor could slaves officially hire themselves out to others, although such prohibitions were often ignored by masters, slaves, hirers, and public officials. Owners faced fines and sometimes damages if their slaves stole from others or caused injuries.

Southern law did encourage benevolence, at least if it tended to supplement the lash and shackle. Court opinions in particular indicate the belief that good treatment of slaves could enhance labor productivity, increase plantation profits, and reinforce sentimental ties. Allowing slaves to control small amounts of property, even if statutes prohibited it, was an oft-sanctioned practice. Courts also permitted slaves small diversions, such as Christmas parties and quilting bees, despite statutes that barred slave assemblies.

Sale, Hire, And Transportation Of Slaves

Sales of Slaves

Slaves were freely bought and sold across the antebellum South. Southern law offered greater protection to slave buyers than to buyers of other goods, in part because slaves were complex commodities with characteristics not easily ascertained by inspection. Slave sellers were responsible for their representations, required to disclose known defects, and often liable for unknown defects, as well as bound by explicit contractual language. These rules stand in stark contrast to the caveat emptor doctrine applied in antebellum commodity sales cases. In fact, they more closely resemble certain provisions of the modern Uniform Commercial Code. Sales law in two states stands out. South Carolina was extremely pro-buyer, presuming that any slave sold at full price was sound. Louisiana buyers enjoyed extensive legal protection as well. A sold slave who later manifested an incurable disease or vice — such as a tendency to escape frequently — could generate a lawsuit that entitled the purchaser to nullify the sale.

Hiring Out Slaves

Slaves faced the possibility of being hired out by their masters as well as being sold. Although scholars disagree about the extent of hiring in agriculture, most concur that hired slaves frequently worked in manufacturing, construction, mining, and domestic service. Hired slaves and free persons often labored side by side. Bond and free workers both faced a legal burden to behave responsibly on the job. Yet the law of the workplace differed significantly for the two: generally speaking, employers were far more culpable in cases of injuries to slaves. The divergent law for slave and free workers does not necessarily imply that free workers suffered. Empirical evidence shows that nineteenth-century free laborers received at least partial compensation for the risks of jobs. Indeed, the tripartite nature of slave-hiring arrangements suggests why antebellum laws appeared as they did. Whereas free persons had direct work and contractual relations with their bosses, slaves worked under terms designed by others. Free workers arguably could have walked out or insisted on different conditions or wages. Slaves could not. The law therefore offered substitute protections. Still, the powerful interests of slaveowners also may mean that they simply were more successful at shaping the law. Postbellum developments in employment law — North and South — in fact paralleled earlier slave-hiring law, at times relying upon slave cases as legal precedents.

Public Transportation

Public transportation also figured into slave law: slaves suffered death and injury aboard common carriers as well as traveled as legitimate passengers and fugitives. As elsewhere, slave-common carrier law both borrowed from and established precedents for other areas of law. One key doctrine originating in slave cases was the “last-clear-chance rule.” Common-carrier defendants that had failed to offer slaves — even negligent slaves — a last clear chance to avoid accidents ended up paying damages to slaveowners. Slaveowner plaintiffs won several cases in the decade before the Civil War when engineers failed to warn slaves off railroad tracks. Postbellum courts used slave cases as precedents to entrench the last-clear-chance doctrine.

Slave Control: Patrollers And Overseers

Society at large shared in maintaining the machinery of slavery. In place of a standing police force, Southern states passed legislation to establish and regulate county-wide citizen patrols. Essentially, Southern citizens took upon themselves the protection of their neighbors’ interests as well as their own. County courts had local administrative authority; court officials appointed three to five men per patrol from a pool of white male citizens to serve for a specified period. Typical patrol duty ranged from one night per week for a year to twelve hours per month for three months. Not all white men had to serve: judges, magistrates, ministers, and sometimes millers and blacksmiths enjoyed exemptions. So did those in the higher ranks of the state militia. In many states, courts had to select from adult males under a certain age, usually 45, 50, or 60. Some states allowed only slaveowners or householders to join patrols. Patrollers typically earned fees for captured fugitive slaves and exemption from road or militia duty, as well as hourly wages. Keeping order among slaves was the patrollers’ primary duty. Statutes set guidelines for appropriate treatment of slaves and often imposed fines for unlawful beatings. In rare instances, patrollers had to compensate masters for injured slaves. For the most part, however, patrollers enjoyed quasi-judicial or quasi-executive powers in their dealings with slaves.

Overseers commanded considerable control as well. The Southern overseer was the linchpin of the large slave plantation. He ran daily operations and served as a first line of defense in safeguarding whites. The vigorous protests against drafting overseers into military service during the Civil War reveal their significance to the South. Yet slaves were too valuable to be left to the whims of frustrated, angry overseers. Injuries caused to slaves by overseers’ cruelty (or “immoral conduct”) usually entitled masters to recover civil damages. Overseers occasionally confronted criminal charges as well. Brutality by overseers naturally generated responses by their victims; at times, courts reduced murder charges to manslaughter when slaves killed abusive overseers.

Protecting The Master Against Loss: Slave Injury And Slave Stealing

Whether they liked it or not, many Southerners dealt daily with slaves. Southern law shaped these interactions among strangers, awarding damages more often for injuries to slaves than injuries to other property or persons, shielding slaves more than free persons from brutality, and generating convictions more frequently in slave-stealing cases than in other criminal cases. The law also recognized more offenses against slaveowners than against other property owners because slaves, unlike other property, succumbed to influence.

Just as assaults of slaves generated civil damages and criminal penalties, so did stealing a slave to sell him or help him escape to freedom. Many Southerners considered slave stealing worse than killing fellow citizens. In marked contrast, selling a free black person into slavery carried almost no penalty.

The counterpart to helping slaves escape — picking up fugitives — also created laws. Southern states offered rewards to defray the costs of capture or passed statutes requiring owners to pay fees to those who caught and returned slaves. Some Northern citizens worked hand-in-hand with their Southern counterparts, returning fugitive slaves to masters either with or without the prompting of law. But many Northerners vehemently opposed the peculiar institution. In an attempt to stitch together the young nation, the federal government passed the first fugitive slave act in 1793. To circumvent its application, several Northern states passed personal liberty laws in the 1840s. Stronger federal fugitive slave legislation then passed in 1850. Still, enough slaves fled to freedom — perhaps as many as 15,000 in the decade before the Civil War — with the help (or inaction) of Northerners that the profession of “slave-catching” evolved. This occupation was often highly risky — enough so that such men could not purchase life insurance coverage — and just as often highly lucrative.

Slave Crimes

Southern law governed slaves as well as slaveowners and their adversaries. What few due process protections slaves possessed stemmed from desires to grant rights to masters. Still, slaves faced harsh penalties for their crimes. When slaves stole, rioted, set fires, or killed free people, the law sometimes had to subvert the property rights of masters in order to preserve slavery as a social institution.

Slaves, like other antebellum Southern residents, committed a host of crimes ranging from arson to theft to homicide. Other slave crimes included violating curfew, attending religious meetings without a master’s consent, and running away. Indeed, a slave was not permitted off his master’s farm or business without his owner’s permission. In rural areas, a slave was required to carry a written pass to leave the master’s land.

Southern states erected numerous punishments for slave crimes, including prison terms, banishment, whipping, castration, and execution. In most states, the criminal law for slaves (and blacks generally) was noticeably harsher than for free whites; in others, slave law as practiced resembled that governing poorer white citizens. Particularly harsh punishments applied to slaves who had allegedly killed their masters or who had committed rebellious acts. Southerners considered these acts of treason and resorted to immolation, drawing and quartering, and hanging.

MARKETS AND PRICES

Market prices for slaves reflect their substantial economic value. Scholars have gathered slave prices from a variety of sources, including censuses, probate records, plantation and slave-trader accounts, and proceedings of slave auctions. These data sets reveal that prime field hands went for four to six hundred dollars in the U.S. in 1800, thirteen to fifteen hundred dollars in 1850, and up to three thousand dollars just before Fort Sumter fell. Even controlling for inflation, the prices of U.S. slaves rose significantly in the six decades before South Carolina seceded from the Union. By 1860, Southerners owned close to $4 billion worth of slaves. Slavery remained a thriving business on the eve of the Civil War: Fogel and Engerman (1974) projected that by 1890 slave prices would have increased on average more than 50 percent over their 1860 levels. No wonder the South rose in armed resistance to protect its enormous investment.
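The price appreciation described above implies a steady average growth rate. A minimal sketch of the arithmetic, using round midpoints of the ranges quoted in the text ($500 in 1800, $3,000 in 1860, both assumptions chosen only for illustration):

```python
# Implied compound annual growth of nominal prices for prime field hands,
# 1800-1860, using round figures drawn from the ranges quoted in the text.
p_1800 = 500     # roughly $400-600 in 1800
p_1860 = 3000    # up to $3,000 just before the Civil War
years = 60

# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = (p_1860 / p_1800) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 3.0% per year, in nominal terms
```

The exact figure moves with the endpoints chosen, but any values inside the quoted ranges imply nominal growth of roughly 2.5 to 3.5 percent per year over the six decades.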

Slave markets existed across the antebellum U.S. South. Even today, one can find stone markers like the one next to the Antietam battlefield, which reads: “From 1800 to 1865 This Stone Was Used as a Slave Auction Block. It has been a famous landmark at this original location for over 150 years.” Private auctions, estate sales, and professional traders facilitated easy exchange. Established dealers like Franklin and Armfield in Virginia, Woolfolk, Saunders, and Overly in Maryland, and Nathan Bedford Forrest in Tennessee prospered alongside itinerant traders who operated in a few counties, buying slaves for cash from their owners, then moving them overland in coffles to the lower South. Over a million slaves were taken across state lines between 1790 and 1860, with many more moving within states. Some of these slaves went with their owners; many were sold to new owners. In his monumental study, Michael Tadman (1989) found that slaves who lived in the upper South faced a very real chance of being sold for profit. He estimated that from 1820 to 1860 an average of 200,000 slaves per decade moved from the upper to the lower South, most via sales. A contemporary newspaper, The Virginia Times, calculated that 40,000 slaves were sold in the year 1830.

Determinants of Slave Prices

The prices paid for slaves reflected two economic factors: the characteristics of the slave and the conditions of the market. Important individual features included age, sex, childbearing capacity (for females), physical condition, temperament, and skill level. In addition, the supply of slaves, demand for products produced by slaves, and seasonal factors helped determine market conditions and therefore prices.

Age and Price

Prices for both male and female slaves tended to follow similar life-cycle patterns. In the U.S. South, infant slaves sold for a positive price because masters expected them to live long enough to make the initial costs of raising them worthwhile. Prices rose through puberty as productivity and experience increased. In nineteenth-century New Orleans, for example, prices peaked at about age 22 for females and age 25 for males. Girls cost more than boys up to their mid-teens. The genders then switched places in terms of value. In the Old South, boys aged 14 sold for 71 percent of the price of 27-year-old men, whereas girls aged 14 sold for 65 percent of the price of 27-year-old men. After the peak age, prices declined slowly for a time, then fell off rapidly as the aging process caused productivity to fall. Compared to full-grown men, women were worth 80 to 90 percent as much. One characteristic in particular set some females apart: their ability to bear children. Fertile females commanded a premium. The mother-child link also proved important for pricing in a different way: people sometimes paid more for intact families.


[Figure omitted. Source: Fogel and Engerman (1974).]

Other Characteristics and Price

Skills, physical traits, mental capabilities, and other qualities also helped determine a slave’s price. Skilled workers sold for premiums of 40-55 percent whereas crippled and chronically ill slaves sold for deep discounts. Slaves who proved troublesome — runaways, thieves, layabouts, drunks, slow learners, and the like — also sold for lower prices. Taller slaves cost more, perhaps because height acts as a proxy for healthiness. In New Orleans, light-skinned females (who were often used as concubines) sold for a 5 percent premium.

Fluctuations in Supply

Prices for slaves fluctuated with market conditions as well as with individual characteristics. U.S. slave prices fell around 1800 as the Haitian revolution sparked the movement of slaves into the Southern states. Less than a decade later, slave prices climbed when the international slave trade was banned, cutting off legal external supplies. Interestingly enough, among those who supported the closing of the trans-Atlantic slave trade were several Southern slaveowners. Why this apparent anomaly? Because the resulting reduction in supply drove up the prices of slaves already living in the U.S. and, hence, their masters’ wealth. U.S. slaves had high enough fertility rates and low enough mortality rates to reproduce themselves, so Southern slaveowners did not worry about having too few slaves to go around.

Fluctuations in Demand

Demand helped determine prices as well. The demand for slaves derived in part from the demand for the commodities and services that slaves provided. Changes in slave occupations and variability in prices for slave-produced goods therefore created movements in slave prices. As slaves replaced increasingly expensive indentured servants in the New World, their prices went up. In the period 1748 to 1775, slave prices in British America rose nearly 30 percent. As cotton prices fell in the 1840s, Southern slave prices also fell. But, as the demand for cotton and tobacco grew after about 1850, the prices of slaves increased as well.

Interregional Price Differences

Differences in demand across regions led to transitional regional price differences, which in turn meant large movements of slaves. Yet because planters experienced greater stability among their workforce when entire plantations moved, 84 percent of slaves were taken to the lower South in this way rather than being sold piecemeal.

Time of Year and Price

Demand sometimes had to do with the time of year a sale took place. For example, slave prices in the New Orleans market were 10 to 20 percent higher in January than in September. Why? September was a busy time of year for plantation owners: the opportunity cost of their time was relatively high. Prices had to be relatively low for them to be willing to travel to New Orleans during harvest time.

Expectations and Prices

One additional demand factor loomed large in determining slave prices: the expectation of continued legal slavery. As the American Civil War progressed, prices dropped dramatically because people could not be sure that slavery would survive. In New Orleans, prime male slaves sold on average for $1381 in 1861 and for $1116 in 1862. Burgeoning inflation meant that real prices fell considerably more. By war’s end, slaves sold for a small fraction of their 1860 price.


Source: Data supplied by Stanley Engerman and reported in Walton and Rockoff (1994).

PROFITABILITY, EFFICIENCY, AND EXPLOITATION

That slavery was profitable seems almost obvious. Yet scholars have argued furiously about this matter. On one side stand antebellum writers such as Hinton Rowan Helper and Frederick Law Olmsted, many antebellum abolitionists, and contemporary scholars like Eugene Genovese (at least in his early writings), who speculated that American slavery was unprofitable, inefficient, and incompatible with urban life. On the other side are scholars who have marshaled masses of data to support their contention that Southern slavery was profitable and efficient relative to free labor and that slavery suited cities as well as farms. These researchers stress the similarity between slave markets and markets for other sorts of capital.

Consensus That Slavery Was Profitable

This battle has largely been won by those who claim that New World slavery was profitable. Much like other businessmen, New World slaveowners responded to market signals — adjusting crop mixes, reallocating slaves to more profitable tasks, hiring out idle slaves, and selling slaves for profit. One well-known instance shows that contemporaneous free labor thought that urban slavery may even have worked too well: employees of the Tredegar Iron Works in Richmond, Virginia, went out on their first strike in 1847 to protest the use of slave labor at the Works.

Fogel and Engerman’s Time on the Cross

Carrying the banner of the “slavery was profitable” camp is Nobel laureate Robert Fogel. Perhaps the most controversial book ever written about American slavery is Time on the Cross, published in 1974 by Fogel and co-author Stanley Engerman. These men were among the first to use modern statistical methods, computers, and large datasets to answer a series of empirical questions about the economics of slavery. To find profit levels and rates of return, they built upon the work of Alfred Conrad and John Meyer, who in 1958 had calculated similar measures from data on cotton prices, physical yield per slave, demographic characteristics of slaves (including expected lifespan), maintenance and supervisory costs, and (in the case of females) number of children. To estimate the relative efficiency of farms, Fogel and Engerman devised an index of “total factor productivity,” which measured the output per average unit of input on each type of farm. They included in this index controls for quality of livestock and land and for age and sex composition of the workforce, as well as amounts of output, labor, land, and capital.
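The logic of such a total factor productivity comparison can be sketched in a few lines. This is a minimal illustration of the general idea, not Fogel and Engerman’s actual procedure or data: the factor-share weights and all input and output figures below are hypothetical assumptions chosen for the example.

```python
# Sketch of a total factor productivity (TFP) index: output divided by a
# geometric-weighted index of inputs. Weights are assumed factor shares
# (hypothetical here) and must sum to 1.

def tfp_index(output, labor, land, capital,
              w_labor=0.6, w_land=0.2, w_capital=0.2):
    """Output per weighted unit of combined input."""
    input_index = (labor ** w_labor) * (land ** w_land) * (capital ** w_capital)
    return output / input_index

# Two hypothetical farms with identical inputs but different output.
free_farm = tfp_index(output=100, labor=10, land=50, capital=20)
slave_farm = tfp_index(output=153, labor=10, land=50, capital=20)

# With identical inputs, relative TFP is just the output ratio: 1.53,
# i.e. the second farm is measured as 53 percent more efficient.
relative_efficiency = slave_farm / free_farm
```

Because the input index is the same for both farms in this toy case, the measured efficiency gap reduces to the output gap; the real controversy concerned how to weight and measure the inputs.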

Time on the Cross generated praise — and considerable criticism. A major critique appeared in 1976 as a collection of articles entitled Reckoning with Slavery. Although some contributors took umbrage at the tone of the book and denied that it broke new ground, others focused on flawed and insufficient data and inappropriate inferences. Despite its shortcomings, Time on the Cross inarguably brought people’s attention to a new way of viewing slavery. The book also served as a catalyst for much subsequent research. Even Eugene Genovese, long an ardent proponent of the belief that Southern planters had held slaves for their prestige value, finally acknowledged that slavery was probably a profitable enterprise. Fogel himself refined and expanded his views in a 1989 book, Without Consent or Contract.

Efficiency Estimates

Fogel and Engerman’s research led them to conclude that investments in slaves generated high rates of return, masters held slaves for profit motives rather than for prestige, and slavery thrived in cities and rural areas alike. They also found that antebellum Southern farms were 35 percent more efficient overall than Northern ones and that slave farms in the New South were 53 percent more efficient than free farms in either North or South. This would mean that a slave farm otherwise identical to a free farm (in terms of the amount of land, livestock, machinery and labor used) would produce output worth 53 percent more than the free farm. On the eve of the Civil War, slavery flourished in the South and generated a rate of economic growth comparable to that of many European countries, according to Fogel and Engerman. They also discovered that, because slaves constituted a considerable portion of individual wealth, masters fed and treated their slaves reasonably well. Although some evidence indicates that infant and young slaves suffered much worse conditions than their freeborn counterparts, teenaged and adult slaves lived in conditions similar to — sometimes better than — those enjoyed by many free laborers of the same period.

Transition from Indentured Servitude to Slavery

One potent piece of evidence supporting the notion that slavery provides pecuniary benefits is this: slavery replaces other labor when it becomes relatively cheaper. In the early U.S. colonies, for example, indentured servitude was common. As the demand for skilled servants (and therefore their wages) rose in England, the cost of indentured servants went up in the colonies. At the same time, second-generation slaves became more productive than their forebears because they spoke English and did not have to adjust to life in a strange new world. Consequently, the balance of labor shifted away from indentured servitude and toward slavery.

Gang System

The value of slaves arose in part from the value of labor generally in the antebellum U.S. Scarce factors of production command economic rent, and labor was by far the scarcest available input in America. Moreover, a large proportion of the reward to owning and working slaves resulted from innovative labor practices. Certainly, the use of the “gang” system in agriculture contributed to profits in the antebellum period. In the gang system, groups of slaves performed synchronized tasks under the watchful overseer’s eye, much like parts of a single machine. Masters found that treating people like machinery paid off handsomely.

Antebellum slaveowners experimented with a variety of other methods to increase productivity. They developed an elaborate system of “hand ratings” in order to improve the match between the slave worker and the job. Hand ratings categorized slaves by age and sex and rated their productivity relative to that of a prime male field hand. Masters also capitalized on the native intelligence of slaves by using them as agents to receive goods, keep books, and the like.

Use of Positive Incentives

Masters offered positive incentives to make slaves work more efficiently. Slaves often had Sundays off. Slaves could sometimes earn bonuses in cash or in kind, or quit early if they finished tasks quickly. Some masters allowed slaves to keep part of the harvest or to work their own small plots. In places, slaves could even sell their own crops. To prevent stealing, however, many masters limited the products that slaves could raise and sell, confining them to corn or brown cotton, for example. In antebellum Louisiana, slaves even had under their control a sum of money called a peculium. This served as a sort of working capital, enabling slaves to establish thriving businesses that often benefited their masters as well. Yet these practices may have helped lead to the downfall of slavery, for they gave slaves a taste of freedom that left them longing for more.

Slave Families

Masters profited from reproduction as well as production. Southern planters encouraged slaves to have large families because U.S. slaves lived long enough — unlike those elsewhere in the New World — to generate more revenue than cost over their lifetimes. But researchers have found little evidence of slave breeding; instead, masters encouraged slaves to live in nuclear or extended families for stability. Lest one think sentimentality triumphed on the Southern plantation, one need only recall the willingness of most masters to sell if the bottom line was attractive enough.

Profitability and African Heritage

One element that contributed to the profitability of New World slavery was the African heritage of slaves. Africans, more than indigenous Americans, were accustomed to the discipline of agricultural practices and knew metalworking. Some scholars surmise that Africans, relative to Europeans, could better withstand tropical diseases and, unlike Native Americans, also had some exposure to the European disease pool.

Ease of Identifying Slaves

Perhaps the most distinctive feature of Africans, however, was their skin color. Because they looked different from their masters, their movements were easy to monitor. Denying slaves education, property ownership, contractual rights, and other things enjoyed by those in power was simple: one needed only to look at people to ascertain their likely status. Using color was a low-cost way of distinguishing slaves from free persons. For this reason, the colonial practices that freed slaves who converted to Christianity quickly faded away. Deciphering true religious beliefs is far more difficult than establishing skin color. Other slave societies have used distinguishing marks like brands or long hair to denote slaves, yet color is far more immutable and therefore better as a cheap way of keeping slaves separate. Skin color, of course, can also serve as a racist identifying mark even after slavery itself disappears.

Profit Estimates

Slavery never generated superprofits, because people always had the option of putting their money elsewhere. Nevertheless, investment in slaves offered a rate of return — about 10 percent — that was comparable to returns on other assets. Slaveowners were not the only ones to reap rewards, however. So too did cotton consumers who enjoyed low prices and Northern entrepreneurs who helped finance plantation operations.
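A rate-of-return figure like the roughly 10 percent cited above is, in the Conrad-Meyer tradition, the discount rate that equates the present value of an asset’s expected net earnings with its purchase price. The sketch below illustrates that calculation with entirely hypothetical numbers; the price, annual net earnings, and working life assumed here are illustrative, not drawn from the historical data.

```python
# Sketch of an internal-rate-of-return calculation in the spirit of
# Conrad and Meyer: find the rate r at which the present value of a
# stream of net annual earnings equals the purchase price.
# All dollar figures and the time horizon are hypothetical.

def present_value(rate, annual_net, years):
    """PV of a constant annual net earnings stream over `years` years."""
    return sum(annual_net / (1 + rate) ** t for t in range(1, years + 1))

def internal_rate(price, annual_net, years, lo=0.0, hi=1.0):
    """Bisection search: PV falls monotonically as the rate rises."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(mid, annual_net, years) > price:
            lo = mid  # PV too high: the rate must be higher
        else:
            hi = mid  # PV too low: the rate must be lower
    return (lo + hi) / 2

# E.g. a $1,000 purchase yielding $115 net per year over 25 years
# produces a return a little above 10 percent.
r = internal_rate(price=1000, annual_net=115, years=25)
```

The same machinery, fed with Conrad and Meyer’s actual price, yield, and cost data, is what produced the returns comparable to those on alternative assets.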

Exploitation Estimates

So slavery was profitable; was it an efficient way of organizing the workforce? On this question, considerable controversy remains. Slavery might well have profited masters, but only because they exploited their chattel. What is more, slavery could have locked people into a method of production and way of life that might later have proven burdensome.

Fogel and Engerman (1974) claimed that slaves kept about ninety percent of what they produced. Because these scholars also found that agricultural slavery produced relatively more output for a given set of inputs, they argued that slaves may actually have shared in the overall material benefits resulting from the gang system. Other scholars contend that slaves in fact kept less than half of what they produced and that slavery, while profitable, certainly was not efficient. On the whole, current estimates suggest that the typical slave received only about fifty percent of the extra output that he or she produced.
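The competing figures above all rest on the same simple definition of the expropriation (or exploitation) rate: the share of a slave’s output not returned in maintenance. A minimal sketch, with hypothetical dollar figures chosen only to mirror the “about fifty percent” estimate:

```python
# One common definition of the expropriation rate: the fraction of a
# worker's marginal product withheld by the owner.
# The figures below are hypothetical illustrations.

def expropriation_rate(marginal_product, amount_received):
    return (marginal_product - amount_received) / marginal_product

# A field hand producing $100 of output who receives $50 in food,
# clothing, and shelter faces a rate of 0.50.
rate = expropriation_rate(marginal_product=100, amount_received=50)
```

Fogel and Engerman’s low estimate and their critics’ high ones differ mainly in how `marginal_product` and `amount_received` are measured, not in this arithmetic.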

Did Slavery Retard Southern Economic Development?

Gavin Wright (1978) called attention as well to the difference between the short run and the long run. He noted that slaves accounted for a very large proportion of most masters’ portfolios of assets. Although slavery might have seemed an efficient means of production at a point in time, it tied masters to a certain system of labor which might not have adapted quickly to changed economic circumstances. This argument has some merit. Although the South’s growth rate compared favorably with that of the North in the antebellum period, a considerable portion of wealth was held in the hands of planters. Consequently, commercial and service industries lagged in the South. The region also had far less rail transportation than the North. Yet many plantations used the most advanced technologies of the day, and certain innovative commercial and insurance practices appeared first in transactions involving slaves. What is more, although the South fell behind the North and Great Britain in its level of manufacturing, it compared favorably to other advanced countries of the time. In sum, no clear consensus emerges as to whether the antebellum South created a standard of living comparable to that of the North or, if it did, whether it could have sustained it.

Ultimately, the South’s system of law, politics, business, and social customs strengthened the shackles of slavery and reinforced racial stereotyping. As such, it was undeniably evil. Yet, because slaves constituted valuable property, their masters had ample incentives to take care of them. And, by protecting the property rights of masters, slave law necessarily sheltered the persons embodied within. In a sense, the apologists for slavery were right: slaves sometimes fared better than free persons because powerful people had a stake in their well-being.

Conclusion: Slavery Cannot Be Seen As Benign

But slavery cannot be thought of as benign. In terms of material conditions, diet, and treatment, Southern slaves may have fared as well in many ways as the poorest class of free citizens. Yet the root of slavery is coercion. By its very nature, slavery involves involuntary transactions. Slaves are property, whereas free laborers are persons who make choices (at times constrained, of course) about the sort of work they do and the number of hours they work.

The behavior of former slaves after abolition clearly reveals that they cared strongly about the manner of their work and valued their non-work time more highly than masters did. Even the most benevolent former masters in the U.S. South found it impossible to entice their former chattels back into gang work, even with large wage premiums. Nor could they persuade women back into the labor force: many female ex-slaves simply chose to stay at home. In the end, perhaps slavery is an economic phenomenon only because slave societies fail to account for the incalculable costs borne by the slaves themselves.

REFERENCES AND FURTHER READING

For studies pertaining to the economics of slavery, see particularly Aitken, Hugh, editor. Did Slavery Pay? Readings in the Economics of Black Slavery in the United States. Boston: Houghton-Mifflin, 1971.

Barzel, Yoram. “An Economic Analysis of Slavery.” Journal of Law and Economics 20 (1977): 87-110.

Conrad, Alfred H., and John R. Meyer. The Economics of Slavery and Other Studies. Chicago: Aldine, 1964.

David, Paul A., Herbert G. Gutman, Richard Sutch, Peter Temin, and Gavin Wright. Reckoning with Slavery: A Critical Study in the Quantitative History of American Negro Slavery. New York: Oxford University Press, 1976.

Fogel, Robert W. Without Consent or Contract. New York: Norton, 1989.

Fogel, Robert W., and Stanley L. Engerman. Time on the Cross: The Economics of American Negro Slavery. New York: Little, Brown, 1974.

Galenson, David W. Traders, Planters, and Slaves: Market Behavior in Early English America. New York: Cambridge University Press, 1986.

Kotlikoff, Laurence. “The Structure of Slave Prices in New Orleans, 1804-1862.” Economic Inquiry 17 (1979): 496-518.

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Ransom, Roger L., and Richard Sutch. “Capitalists Without Capital.” Agricultural History 62 (1988): 133-160.

Vedder, Richard K. “The Slave Exploitation (Expropriation) Rate.” Explorations in Economic History 12 (1975): 453-57.

Wright, Gavin. The Political Economy of the Cotton South: Households, Markets, and Wealth in the Nineteenth Century. New York: Norton, 1978.

Yasuba, Yasukichi. “The Profitability and Viability of Slavery in the U.S.” Economic Studies Quarterly 12 (1961): 60-67.

For accounts of slave trading and sales, see
Bancroft, Frederic. Slave Trading in the Old South. New York: Ungar, 1931.

Tadman, Michael. Speculators and Slaves. Madison: University of Wisconsin Press, 1989.

For discussion of the profession of slave catchers, see
Campbell, Stanley W. The Slave Catchers. Chapel Hill: University of North Carolina Press, 1968.

To read about slaves in industry and urban areas, see
Dew, Charles B. Slavery in the Antebellum Southern Industries. Bethesda: University Publications of America, 1991.

Goldin, Claudia D. Urban Slavery in the American South, 1820-1860: A Quantitative History. Chicago: University of Chicago Press, 1976.

Starobin, Robert. Industrial Slavery in the Old South. New York: Oxford University Press, 1970.

For discussions of masters and overseers, see
Oakes, James. The Ruling Race: A History of American Slaveholders. New York: Knopf, 1982.

Roark, James L. Masters Without Slaves. New York: Norton, 1977.

Scarborough, William K. The Overseer: Plantation Management in the Old South. Baton Rouge: Louisiana State University Press, 1966.

On indentured servitude, see
Galenson, David. “Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44 (1984): 1-26.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Grubb, Farley. “Immigrant Servant Labor: Their Occupational and Geographic Distribution in the Late Eighteenth Century Mid-Atlantic Economy.” Social Science History 9 (1985): 249-75.

Menard, Russell R. “From Servants to Slaves: The Transformation of the Chesapeake Labor System.” Southern Studies 16 (1977): 355-90.

On slave law, see
Fede, Andrew. “Legal Protection for Slave Buyers in the U.S. South.” American Journal of Legal History 31 (1987).

Finkelman, Paul. An Imperfect Union: Slavery, Federalism, and Comity. Chapel Hill: University of North Carolina Press, 1981.

Finkelman, Paul. Slavery, Race, and the American Legal System, 1700-1872. New York: Garland, 1988.

Finkelman, Paul, ed. Slavery and the Law. Madison: Madison House, 1997.

Flanigan, Daniel J. The Criminal Law of Slavery and Freedom, 1800-68. New York: Garland, 1987.

Morris, Thomas D., Southern Slavery and the Law: 1619-1860. Chapel Hill: University of North Carolina Press, 1996.

Schafer, Judith K. Slavery, The Civil Law, and the Supreme Court of Louisiana. Baton Rouge: Louisiana State University Press, 1994.

Tushnet, Mark V. The American Law of Slavery, 1810-60: Considerations of Humanity and Interest. Princeton: Princeton University Press, 1981.

Wahl, Jenny B. The Bondsman’s Burden: An Economic Analysis of the Common Law of Southern Slavery. New York: Cambridge University Press, 1998.

Other useful sources include
Berlin, Ira, and Philip D. Morgan, eds. The Slave’s Economy: Independent Production by Slaves in the Americas. London: Frank Cass, 1991.

Berlin, Ira, and Philip D. Morgan, eds. Cultivation and Culture: Labor and the Shaping of Slave Life in the Americas. Charlottesville: University Press of Virginia, 1993.

Elkins, Stanley M. Slavery: A Problem in American Institutional and Intellectual Life. Chicago: University of Chicago Press, 1976.

Engerman, Stanley, and Eugene Genovese. Race and Slavery in the Western Hemisphere: Quantitative Studies. Princeton: Princeton University Press, 1975.

Fehrenbacher, Don. Slavery, Law, and Politics. New York: Oxford University Press, 1981.

Franklin, John H. From Slavery to Freedom. New York: Knopf, 1988.

Genovese, Eugene D. Roll, Jordan, Roll. New York: Pantheon, 1974.

Genovese, Eugene D. The Political Economy of Slavery: Studies in the Economy and Society of the Slave South. Middletown, CT: Wesleyan, 1989.

Hindus, Michael S. Prison and Plantation. Chapel Hill: University of North Carolina Press, 1980.

Margo, Robert, and Richard Steckel. “The Heights of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-538.

Phillips, Ulrich B. American Negro Slavery: A Survey of the Supply, Employment and Control of Negro Labor as Determined by the Plantation Regime. New York: Appleton, 1918.

Stampp, Kenneth M. The Peculiar Institution: Slavery in the Antebellum South. New York: Knopf, 1956.

Steckel, Richard. “Birth Weights and Infant Mortality Among American Slaves.” Explorations in Economic History 23 (1986): 173-98.

Walton, Gary, and Hugh Rockoff. History of the American Economy. Orlando: Harcourt Brace, 1994, chapter 13.

Whaples, Robert. “Where Is There Consensus among American Economic Historians?” Journal of Economic History 55 (1995): 139-154.

Data can be found at
U.S. Bureau of the Census, Historical Statistics of the United States, 1970, collected in ICPSR study number 0003, “Historical Demographic, Economic and Social Data: The United States, 1790-1970,” located at http://fisher.lib.virginia.edu/census/.

Citation: Bourne, Jenny. “Slavery in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/slavery-in-the-united-states/

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago, in 1900, about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken about 1900 showed that only about half of all workers fatally injured recovered anything, and their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century, the dangers worsened (see Table 1).5

Table 1
British and American Mine Safety, 1890 -1904
(Fatality rates per Thousand Workers per Year)

Years        American Anthracite   American Bituminous   Great Britain
1890-1894          3.29                  2.52                 1.61
1900-1904          3.13                  3.53                 1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth century American railroads were also comparatively dangerous to their workers – and their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go in between moving cars for coupling and uncoupling and ride the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American carriers were poorly built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2
Comparative Safety of British and American Railroad Workers, 1889-1901
(Fatality Rates per Thousand Workers per Year)

                              1889    1895    1901
British railroad workers
  All causes                  1.14    0.95    0.89
British trainmen (a)
  All causes                  4.26    3.22    2.21
  Coupling                    0.94    0.83    0.74
American railroad workers
  All causes                  2.67    2.31    2.50
American trainmen
  All causes                  8.52    6.45    7.35
  Coupling                    1.73c   1.20    0.78
  Braking (b)                 3.25c   2.44    2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.

Note: Death rates are per thousand employees.
a. Guards, brakemen, and shunters.
b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increased output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving Safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also date from this period, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety, but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly-formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Food and Drug Administration, the Federal Reserve System and much else. Work safety also became of increased public concern and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893 and after 1900 they campaigned for more of the same. In response Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because unlike the rules governing automatic couplers and air brakes they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany and was impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and began the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and National Bureau of Standards provided scientific support while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs along with the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies such as DuPont and in whole industries such as steel making (see Table 3) safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but reduced the dangers from power transmission. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3
Steel Industry Fatality and Injury Rates, 1910-1939
(Rates are per million manhours)

Period Fatality Rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect on them. Underground coal mining accidents also showed only modest improvement: safety measures were expensive in coal, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, six disastrous blasts that killed 276 men in 1940 finally led to federal mine inspection in 1941.16

Table 4
Work Injury Rates, Manufacturing and Coal Mining, 1926-1970
(Per Million Manhours)

Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA) that same year. The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine': the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: The Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291.

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850 -World War I.” Bulletin of the History of Medicine, 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London, HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Daley, Judy. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan. 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be .0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2000 hours, ten injuries among 450 workers result in [10/(450×2000)]×1,000,000 = 11.1 injuries per million hours worked.
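The per-million-hours conversion used throughout this article can be written out compactly. The sketch below uses only the footnote’s own example figures and its assumption of a uniform 2,000-hour work year:

```python
def injuries_per_million_hours(injuries, workers, hours_per_year=2000):
    """Convert a count of injuries among a workforce into a rate per
    million employee-hours, assuming a uniform annual work year."""
    return injuries / (workers * hours_per_year) * 1_000_000

# Ten injuries among 450 workers, each working 2000 hours in the year:
print(round(injuries_per_million_hours(10, 450), 1))  # 11.1
```

This is the denominator used in Tables 3 and 4, which makes rates comparable across industries with different average hours of work.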

2 For statistics on work injuries from 1922-1970 see U.S. Department of Commerce, Historical Statistics, Series D-1029 to D-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System. Rosenberg, Technology. Aldrich, Safety First.

4 On the workings of the employers’ liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun, Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First, chapter 1.

7 Aldrich, Safety First, chapter 3.

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism,” and Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology see White, American Railroad Freight Car; Usselman, “Air Brakes for Freight Trains”; and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety. Aldrich, “‘The Needless Peril.'”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, “From Exit to Voice.”

16 Aldrich, “‘The Needless Peril,'” and Humphrey, “Historical Summary.”

17 Derickson, “Participative Regulation,” and Fairris, “Institutional Change,” also emphasize the role of union and shop floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For a readable discussion see Mendeloff, Regulating Safety.

Citation: Aldrich, Mark. “History of Workplace Safety in the United States, 1880-1970.” EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/history-of-workplace-safety-in-the-united-states-1880-1970/

Rural Electrification Administration

Laurence J. Malone, Hartwick College

Market Failure in Delivering Electricity to Rural Areas Before 1930

The advent of the electric industry in the 1880s ushered in a rapidly expanding domestic market in the United States. The physical scale of the electric utility industry mirrored the national economy that sprang up with it: massive power generation facilities, substantial capital investments for network construction, high maintenance costs, and production technologies that were obtrusive and degrading to the natural environment. The adaptation of electricity to manufacturing and services further liberated firms from having to locate in proximity to moving water and, together with rising immigration under liberal naturalization policies, accelerated the pace of economic growth.

While urban households and businesses gained electricity in large numbers after 1910, the more sparsely populated rural regions of the United States were generally without electricity and were denied the commercial progress it brought. Electrical service providers ignored the rural market due to its high network construction costs and the prospect of meager immediate profits. From the supplier’s standpoint, rural homes, farms and businesses were stretched too far apart and offered too little demand relative to the cost of investment. Unlike their counterparts in cities, rural residents were expected to advance the financing for the necessary infrastructure to the firm supplying electrical power from a distant location. In rural areas that were served, electrical rates in the 1920s were commonly twice as high as urban rates (Brown, 1980, p. 5).

The disincentives to investment in electrical infrastructure left rural America increasingly distant from the rising standard of living in the urban and emerging suburban settings of the national economy. Lacking the greater productive efficiencies secured by the adaptation of electricity, productivity growth in agriculture, the industry that served as the central organizing principle of rural life, lagged behind other sectors of the economy over the 1880 to 1930 period. Rural demands for the newest manufactured items found in urban American homes — telephones, radios, refrigerators, washing machines, hot water heaters, and household appliances — remained latent. Given the widening disparities between rural and urban settings, it was not surprising that rural Americans reverted to the cooperative lifestyles of the nineteenth century as the urban markets for their agricultural products collapsed in the Great Depression.

The Origins of the New Deal Rural Electrification Initiative

The failure of the market to deliver affordable electricity to rural locales led to over thirty state rural power initiatives during the 1920s and early 1930s, as President Herbert Hoover argued that responsibility for rural electrification rested with state government (Brown, 1980, pp. 6 and 29). As Governor of New York, Franklin Delano Roosevelt aggressively promoted rural electrification, and the New York Power Authority was created in 1931 to develop a substantial new source of inexpensive hydroelectric generating capacity along the St. Lawrence River (Brown, 1980, p. 32). But the Depression led to the collapse of many state power authorities and further discouraged private investment in rural electrical infrastructure. When Roosevelt assumed the Presidency on March 4, 1933, the market for new rural electrification investment no longer existed.

While Roosevelt clearly understood the benefits electrification would bring to the rural American economy, it was Morris L. Cooke who provided vision and leadership to rural electrification efforts under the New Deal. Cooke had led Giant Power, the Pennsylvania rural electrification program, and Roosevelt invited him to address the problem at the federal level. Using data supplied by the utility industry, electrical engineers, Giant Power, and the U.S. Census of 1930, Cooke authored an eleven-page report in 1934 that provided the foundation for a federal rural electrification program. In an appendix to the report, Cooke included detailed estimates of the cost per mile of “high wire” distribution lines and suitable construction materials and standards to use in rural regions. He wrote: “This cost of the line with transformers and meters included for one to three customers will range from $500 to $800 the mile. To amortize this cost in twenty years at four percent involves a cost to each of the three customers on a mile of line of about one dollar a month” (Cooke, 1934, p. 6). Studies commissioned by Cooke suggested that household payments for electricity would be a minimum of one dollar per month for the first ten kilowatt-hours of electricity, three cents per kilowatt-hour for the next forty kilowatt-hours, and two cents per kilowatt-hour for the remaining balance (Cooke, 1934, p. 8). All told, the estimated cost to provide electricity to 500,000 farms, at an average of three farms per mile of rural road, was $112 million, or about $225 per farm. In a worst case scenario, if new generating facilities were needed for all 500,000 farms, the 333 power plants that would have to be constructed would cost an additional $87 million. Consequently, Cooke’s high-end estimate for the complete electrical infrastructure needed to bring electrical service to 500,000 rural American farms was $200 million, or $400 per farm (Cooke, 1934, p. 9).
The concluding paragraph of his report states that a new “rural electrification agency” should build the necessary infrastructure since the market would not otherwise furnish electricity to sparsely populated localities (Cooke, 1934, p.11).
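Cooke’s “about one dollar a month” figure follows from a standard level-payment amortization. As a rough check (a sketch using only the per-mile line cost, the four percent rate, the twenty-year term, and the three customers per mile stated in his report):

```python
def monthly_cost_per_customer(line_cost, rate=0.04, years=20, customers=3):
    """Level annual payment that amortizes line_cost at the given interest
    rate over the given term (standard annuity formula), divided among the
    customers on one mile of line and expressed per month."""
    annual_payment = line_cost * rate / (1 - (1 + rate) ** -years)
    return annual_payment / customers / 12

print(round(monthly_cost_per_customer(500), 2))  # low-end line cost: about $1.02
print(round(monthly_cost_per_customer(800), 2))  # high-end line cost: about $1.64
```

The low-end cost reproduces Cooke’s “about one dollar a month”; at the $800 upper bound the burden per customer would be closer to a dollar and two-thirds a month.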

Presidential Executive Order 7037 created the Rural Electrification Administration, or R.E.A., on May 11, 1935. With passage of the Norris-Rayburn Act the following year, Congress authorized $410 million in appropriations for a ten-year program to electrify American farms. The rural cooperative model, which had been successfully employed by Giant Power in Pennsylvania, was adopted by the R.E.A., with Congressional Representatives serving as the administrative liaisons for the formation of cooperatives within their districts (Brown, 1980, p. 68). Cooperatives were not-for-profit consumer-owned firms organized to provide electric service to member-customers. Each cooperative was typically governed by a board of directors elected from the ranks of its residential customers. The board established rates and policies for the cooperative, and hired a general manager to conduct the ordinary business of providing electricity to customers within the service region. Only two restrictions were placed on the formation of cooperatives: they could not compete directly with utility companies, and coop members could not live in areas served by utilities or within a municipality with a population of 1500 or more (Brown, 1980, p. 69).

The R.E.A. was essentially a government financing agency providing subsidized loans to private companies, public agencies, or cooperatives for the construction of electrical supply infrastructure in rural regions. The loans were guaranteed by the federal government and had an attractive interest rate and a generous repayment schedule of twenty-five years. The interest rate initially matched the federal funds rate when the loan was executed, but after 1944 the rate was fixed at two percent (Joskow and Schmalensee, 1983, p. 17). R.E.A. loans furnished the incentive for rural electric cooperatives to form and connect to the existing electrical network at rates comparable to the national average. R.E.A. cooperatives quickly became one of the largest capital investment projects of the New Deal, and low-cost financing for construction of electrical supply infrastructure was the key provision of the program (Brown, 1980, p. 41).

R.E.A.: The Outcomes

Five decades after urban municipal electrical distribution systems first appeared in the United States, the process of introducing rural areas to the twentieth-century economy began with the creation of the Rural Electrification Administration. The R.E.A. overcame the unwillingness of private utilities to bring power to households, farms and businesses in sparsely populated regions where profits were too low. The failure of the market, which left rural areas literally and figuratively in the dark, required an aggressive federal initiative to ensure that residents of sparsely populated areas were no longer comparatively disadvantaged in the twentieth-century American economy.

The R.E.A. is considered one of the most immediate and profound successes in the history of federal policy-making for the national economy. By the end of 1938, just two years after its statutory authorization, 350 cooperative projects in 45 states were delivering electricity to 1.5 million farms (Schurr, Burwell, Devine, and Sonenblum, 1990, p. 234). The success of the R.E.A. over the next two decades was even more impressive, especially as a self-sustained financing agency. By the mid-1950s nearly all American farms had electrical service, provided through the R.E.A. or by other means. Monies lent through the R.E.A. were also largely repaid, as the default rate was less than one percent (Brown, 1980, p. 114). Moreover, as with any significant surge in investment, the accompanying new demand for household electrical appliances spurred growth in home appliance manufacturing and spawned the electrical and plumbing trades in rural communities. Electrical service also brought revolutionary new media of communication to rural farms, firms and households. Radio was followed by television, and the new streams of information narrowed the cultural, educational and commercial divide between urban and rural America. Rural electrification contributed to the rapid growth of suburbs, and helped create a more integrated national market.

The REA Today

The R.E.A., originally created by executive order in 1935, was authorized as a federal agency within the United States Department of Agriculture (U.S.D.A.) when Congress passed and President Roosevelt signed the Rural Electrification Act of 1936. After 1949, the R.E.A. was authorized to finance the formation of telephone cooperatives, through low-interest federal loans, to extend telephone service to underserved rural areas. Repeatedly extending the original authorization of a ten-year program of subsidies, the federal government actively promoted rural electrification through the R.E.A. until the end of the twentieth century. In 1994, Congress established the Rural Utilities Service (R.U.S.) as a federal agency within the U.S.D.A., and it absorbed the R.E.A. and its responsibilities for rural electrification and telephone service.

Although the subsidized loans for the R.E.A. helped bring electricity and improved living standards to remote rural locales during the Great Depression, controversy has surrounded the agency in recent decades. Critics argue that the costs of the subsidies for providing electricity and telephone service must be weighed against the benefits. Beneficiaries of the R.E.A. enjoyed considerable interest rate subsidies throughout the second half of the twentieth century, long after the end of the Depression. Today, almost all rural Americans have electric service and 98 percent have telephone service. Critics of federally subsidized electrical cooperatives suggest that service would not be reduced if the subsidies were to end.

Table 1 compares the shares of the electric utility market held by investor-owned companies, publicly owned companies, and rural cooperatives in the United States in 1998. Cooperatives served eleven percent of the nation’s population and delivered nine percent of kilowatt hours sold. The data show that, in contrast to investor-owned or publicly owned firms, the rural market continues to impose cost and revenue hardships on producers. Rural electric cooperatives generate far less revenue per mile of wire ($7,873) than investor-owned or publicly owned electrical utilities, and require a greater distribution plant investment per consumer ($2,352).

Table 1: Electric Utility Market Comparisons, United States, 1998

Investor Owned Publicly Owned Rural Cooperatives
Number of Organizations 239 2009 930
Customers, % of U.S. total 74% 15% 11%
Revenues, % of U.S. total 77% 14% 9%
Kilowatt hour sales, % of total 75% 15% 9%
Number of consumers, per mile of line 33 43 6
Revenue per mile of line, in dollars 60,921 70,670 7,873
Distribution plant investment per consumer, in dollars 1,890 1,870 2,352
Assets, in $ billions 606 126 70

Source: National Rural Electric Cooperative Association Strategic Analysis, March 1999, www.nreca.org/coops/elecoop3.html

As Table 1 indicates, rural electric cooperatives continue to serve sparsely populated areas in the United States as not-for-profit public utilities. The R.U.S., which oversees rural electric and telephone cooperatives, has even begun to encourage the development of rural municipal water and waste disposal systems. To date, the R.E.A. and R.U.S. have organized nearly $57 billion in federally guaranteed low-interest loans for the development of electric and telephone cooperatives. In recent years, despite calls for the elimination of the R.U.S., legislation has been introduced in Congress to extend its authority to offer low-interest loans to firms willing to provide high-speed (broadband) Internet access to rural America (Malone, 2000, pp. 12-13). As markets expand further and rural America again lags its urban and suburban counterparts, advocates are likely to call for new federal initiatives to address the disparities that arise from market failures and disincentives to investment in new forms of infrastructure.

References:

Brown, D.C. Electricity for Rural America. Westport, CT: Greenwood Press, 1980.

Cooke, Morris L. “National Plan for the Advancement of Rural Electrification under Federal Leadership and Control with State and Local Cooperation and as a Wholly Public Enterprise.” Franklin Delano Roosevelt Presidential Library, Hyde Park, NY: Cooke Papers, Box 230, February, 1934.

Cooke, Morris L. “The Early Days of the Rural Electrification Idea, 1914-1936.” American Political Science Review 42 (June 1948).

Joskow, Paul L. and Richard Schmalensee. Markets for Power: An Analysis of Electric Utility Deregulation. Cambridge, Massachusetts: MIT Press, 1983.

Malone, Laurence J. “Commonalities: The R.E.A. and High-Speed Rural Internet Access.” Washington, D.C.: United States Internet Council, www.usic.org, 2000.

National Rural Electric Cooperative Association, Strategic Analysis, March 1999, www.nreca.org/coops/elecoop3.html

Schurr, Sam H., Calvin C. Burwell, Warren D. Devine, and Sidney Sonenblum. Electricity in the American Economy: Agent of Technological Progress. Contributions in Economics and Economic History, no. 117. Westport, CT: Greenwood Press, 1990.

United States Department of Agriculture, Rural Utilities Service homepage, www.usda.gov/rus/.

Citation: Malone, Laurence. “Rural Electrification Administration”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/rural-electrification-administration/