
History of the U.S. Telegraph Industry

Tomas Nonnenmacher, Allegheny College

Introduction

The electric telegraph was one of the first telecommunications technologies of the industrial age. Its immediate predecessors were homing pigeons, visual networks, the Pony Express, and railroads. By transmitting information quickly over long distances, the telegraph facilitated the growth of the railroads, consolidated financial and commodity markets, and reduced information costs within and between firms. This entry focuses on the industrial organization of the telegraph industry from its inception through its demise and on the industry’s impact on the American economy.

The Development of the Telegraph

The telegraph was similar to many other inventions of the nineteenth century. It replaced an existing technology, dramatically reduced costs, was monopolized by a single firm, and ultimately was displaced by a newer technology. Like most radical new technologies, the telecommunications revolution of the mid-1800s was not a revolution at all, but rather consisted of many inventions and innovations in both technology and industrial organization. This section is broken into four parts, each reviewing an era of telegraphy: precursors to the electric telegraph, early industrial organization of the industry, Western Union’s dominance, and the decline of the industry.

Precursors to the Electric Telegraph

Webster’s definition of a telegraph is “an apparatus for communicating at a distance by coded signals.” The earliest telegraph systems consisted of smoke signals, drums, and mirrors used to reflect sunlight. In order for these systems to work, both parties (the sender and the receiver) needed a method of interpreting the signals. Henry Wadsworth Longfellow’s poem recounting Paul Revere’s ride (“One if by land, two if by sea, and I on the opposite shore will be”) gives an example of a simple system. The first extensive telegraph network was the visual telegraph. In 1791 the Frenchman Claude Chappe used a visual network (which consisted of a telescope, a clock, a codebook, and black and white panels) to send a message ten miles. He called his invention the télégraphe, or far writer. Chappe refined and expanded his network, and by 1799 his telegraph consisted of a network of towers with mechanical arms spread across France. The position of the arms was interpreted using a codebook with over 8,000 entries.

Technological Advances

Due to technological difficulties, the electric telegraph could not at first compete with the visual telegraph. The basic principle of the electric telegraph is to send an electric current through a wire; breaking the current in a particular pattern denotes letters or phrases. Morse code, named after Samuel Morse, is still used today. For instance, the code for SOS (... --- ...) is a well-known call for help. Two elements had to be perfected before an electric telegraph could work: a means of sending the signal (generating and storing electricity) and a means of receiving it (recording the breaks in the current).
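The encoding scheme can be illustrated with a minimal Python sketch. Only a handful of letters are shown here (the full Morse alphabet works the same way); the dictionary and function names are illustrative, not from any standard library.

```python
# Partial Morse code table; dots and dashes written as "." and "-".
MORSE = {
    "S": "...",  # three short signals (dots)
    "O": "---",  # three long signals (dashes)
    "E": ".",
    "T": "-",
}

def encode(text):
    """Encode a string into Morse, separating letters with spaces."""
    return " ".join(MORSE[ch] for ch in text.upper())

print(encode("SOS"))  # ... --- ...
```

In an actual telegraph, each dot or dash corresponds to a short or long closing of the circuit, with pauses marking letter boundaries.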

The science behind the telegraph dates back at least as far as Roger Bacon’s (1220-1292) experiments in magnetism. Numerous small steps in the science of electricity and magnetism followed. Important inventions include those of Giambattista della Porta (1558), William Gilbert (1603), Stephen Gray (1729), William Watson (1747), Pieter van Musschenbroek (1754), Luigi Galvani (1786), Alessandro Giuseppe Antonio Anastasio Volta (1800), André-Marie Ampère (1820), William Sturgeon (1825), and Joseph Henry (1829). A much longer list could be made, but the point is that no single person can be credited with developing the necessary technology of the telegraph.

1830-1866: Development and Consolidation of the Electric Telegraph Industry

In 1832, Samuel Morse returned to the United States from his artistic studies in Europe. While discussing electricity with fellow passengers, Morse conceived of the idea of a single-wire electric telegraph. No one until this time had Morse’s zeal for the applicability of electromagnetism to telecommunications or his conviction of its eventual profitability. Morse obtained a patent in the United States in 1838 but split his patent right to gain the support of influential partners. He obtained a $30,000 grant from Congress in 1843 to build an experimental line between Baltimore and Washington. The first public message over Morse’s line (“What hath God wrought?”) echoed the first message over Chappe’s system (“If you succeed, you will bask in glory”). Both indicated the inventors’ convictions about the importance of their systems.

Morse and His Partners

Morse realized early on that he was incapable of handling the business end of the telegraph and hired Amos Kendall, a former Postmaster General and a member of Andrew Jackson’s “Kitchen Cabinet,” to manage his business affairs. By 1848 Morse had consolidated the partnership to four members. Kendall managed the three-quarters of the patent belonging to Morse, Leonard Gale, and Alfred Vail. Gale and Vail had helped Morse develop the telegraph’s technology. F.O.J. Smith, a former Maine Representative whose help was instrumental in obtaining the government grant, decided to retain direct control of his portion of the patent right. The partnership agreement was vague, and led to discord between Kendall and Smith. Eventually the partners split the patent right geographically. Smith controlled New England, New York, and the upper-Midwest, and Morse controlled the rest of the country.

The availability of financing influenced the early industrial organization of the telegraph. Initially, Morse tried to sell his patent to the government, Kendall, Smith, and several groups of businessmen, but all attempts were unsuccessful. Kendall then attempted to generate interest in building a unified system across the country. This too failed, leaving Kendall to sell the patent right piecemeal to regional interests. These lines covered the most potentially profitable routes, emanating from New York and reaching Washington, Buffalo, Boston and New Orleans. Morse also licensed feeder lines to supply main lines with business.

Rival Patents

Royal House and Alexander Bain introduced rival patents in 1846 and 1849. Entrepreneurs constructed competing lines on the major eastern routes using the new patents. The House device needed a higher quality wire and more insulation as it was a more precise instrument. It had a keyboard at one end and printed out letters at the other. At its peak, it could send messages considerably faster than Morse’s technique. The Bain device was similar to Morse’s, except that instead of creating dots and dashes, it discolored a piece of chemically treated paper by sending an electric current through it. Neither competitor had success initially, leading Kendall to underestimate their eventual impact on the market.

By 1851, ten separate firms ran lines into New York City. There were three competing lines between New York and Philadelphia, three between New York and Boston, and four between New York and Buffalo. In addition, two lines operated between Philadelphia and Pittsburgh, two between Buffalo and Chicago, and three between points in the Midwest and New Orleans, and entrepreneurs erected lines between many Midwestern cities. In all, in 1851 the Bureau of the Census reported 75 companies with 21,147 miles of wire.

Multilateral Oligopolies

The telegraph markets in 1850 were multilateral oligopolies. The term “multilateral” means that the production process extended in several directions. Oligopolies are markets in which a small number of firms strategically interact. Telegraph firms competed against rivals on the same route, but sought alliances with firms with which they connected. For example, four firms (New York, Albany & Buffalo; New York State Printing; Merchants’ State; and New York and Erie) competed on the route between New York City and Buffalo. Rates fell dramatically (by more than 50%) as new firms entered, so this market was quite competitive for a while. But each of these firms sought to create an alliance with connecting firms, such as those with lines from New York City to Boston or Washington. Increased business from exchanging messages meant increased profitability.

Mistransmission Problems

Quality competition was also fierce, with the line that erected the best infrastructure and supplied the fastest service usually dominating other, less capable firms. Messages could easily be garbled, and given the predominately business-related use of the telegraph, a garbled message was often worse than no message at all. A message sent from Boston to St. Louis could have traveled over the lines of five firms. Due to the complexity of the production process, messages were also often lost, with no firm taking responsibility for the mistransmission. This lack of responsibility gave firms an incentive to provide a lower quality service compared to an integrated network. These issues ultimately contributed to the consolidation of the industry.

Horizontal and System Integration

Horizontal integration (integration between two competing firms) and system integration (integration between two connecting firms) occurred in the telegraph industry during different periods. System integration occurred between 1846 and 1852, as main lines acquired most of the feeder lines in the country. In 1852 the Supreme Court declared the Bain telegraph an infringement on Morse’s patent, and Bain lines merged with Morse lines across the country. Between 1853 and 1857 regional monopolies formed and signed the “Treaty of Six Nations,” a pooling agreement between the six largest regional firms. During this phase the industry experienced both horizontal and system integration. By the end of the period, most remaining firms were regional monopolists, controlled several large cities, and owned both the House and the Morse patents. Figure 1 shows the locations of these firms.

Figure 1: Treaty of Six Nations

Source: Thompson, p. 315

The final phase of integration occurred between 1857 and 1866. In this period the pool members consolidated into a national monopoly. By 1864 only Western Union and the American Telegraph Company remained of the “Six Nations.” The United States Telegraph Company entered the field by consolidating smaller, independent firms in the early 1860s, and operated in the territory of both the American Telegraph Company and Western Union. By 1866 Western Union absorbed its last two competitors and reached its position of market dominance.

Efficiency versus Market Power

Horizontal and system integration had two causes: efficiency and market power. Horizontal integration created economies of scale that could be realized from placing all of the wires between two cities on the same route or all the offices in a city in the same location. This consolidation reduced the cost of maintaining multiple lines. The reduction in competition due to horizontal integration also allowed firms to charge a higher price and earn monopoly profits. The efficiency gain from system integration was better control of messages travelling long distances. With responsibility for the message placed clearly in the hands of one firm, messages were transmitted with more care. System integration also created monopoly power, since to compete with a large incumbent system, a new entrant would have to also create a large infrastructure.

1866-1900: Western Union’s Dominance

The period from 1866 through the turn of the century was the apex of Western Union’s power. Yearly messages sent over its lines increased from 5.8 million in 1867 to 63.2 million in 1900. Over the same period, transmission rates fell from an average of $1.09 to 30 cents per message. Even with these lower prices, roughly 30 to 40 cents of every dollar of revenue were net profit for the company. Western Union faced three threats during this period: increased government regulation, new entrants into the field of telegraphy, and new competition from the telephone. The last two were the most important to the company’s future profitability.

Western Union Fends off Regulation

Western Union was the first nationwide industrial monopoly, with over 90% of the market share and dominance in every state. The states and the federal government responded to this market power. State regulation was largely futile given the interstate character of the industry. On the federal level, bills were introduced in almost every session of Congress calling for either regulation of or government entry into the industry. Western Union’s lobby was able to block almost any legislation. The few regulations that were passed either helped Western Union maintain its control over the market or were never enforced.

Western Union’s Smaller Rivals

Western Union’s first rival was the Atlantic and Pacific Telegraph Company, a conglomeration of new and merged lines created by Jay Gould in 1874. Gould sought to wrest control of Western Union from the Vanderbilts, and he succeeded in 1881 when the two firms merged. A more permanent rival appeared in the 1880s in the form of the Postal Telegraph Company, headed by John Mackay, who had already made a fortune at the Comstock Lode. Mackay did what many of his telegraph predecessors had done in the 1850s: he bought out existing bankrupt firms and merged them into a network with economies of scale large enough to compete with Western Union. Postal never challenged Western Union’s market dominance, but it did control 10 to 20 percent of the market at various times.

The Threat from the Telephone

Western Union’s greatest threat came from a new technology, the telephone. Alexander Graham Bell patented the telephone in 1876, initially referring to it as a “talking telegraph.” Bell offered Western Union the patent for the telephone for $100,000, but the company declined to purchase it. Western Union could have easily gained control of AT&T in the 1890s, but management decided that higher dividends were more important than expansion. The telephone was used in the 1880s only for local calling, but with the development in the 1890s of “long lines,” the telephone offered increased competition to the telegraph. In 1900, local calls accounted for 97% of the telephone’s business, and it was not until the twentieth century that the telephone fully displaced the telegraph.

1900-1988: Increased Competition and Decline

The twentieth century saw the continued rise of the telephone and decline of the telegraph. Telegraphy continued to have a niche in inexpensive long-distance and international communication, including teletypewriter, Telex, and stock ticker services. As shown in Table 1, after 1900 the growth in telegraph traffic slowed, and after 1930 the number of messages sent began to decline.

Table 1: Messages Handled by the Telegraph Network: 1870-1970

Date Messages Handled Date Messages Handled
1870 9,158,000 1930 211,971,000
1880 29,216,000 1940 191,645,000
1890 55,879,000 1945 236,169,000
1900 63,168,000 1950 178,904,000
1910 75,135,000 1960 124,319,000
1920 155,884,000 1970 69,679,000

Source: Historical Statistics.
Notes: Western Union messages 1870-1910; all telegraph companies, 1920-1970.

AT&T Obtains Western Union, Then Gives It Up

In 1909, AT&T gained control of Western Union by purchasing 30% of its stock. In many ways, the companies were heading in opposite directions. AT&T was expanding rapidly, while Western Union was content to reap handsome profits and issue large dividends but not reinvest in itself. Under AT&T’s ownership, Western Union was revitalized, but the two companies separated in 1913, succumbing to pressure from the Department of Justice. In 1911, the Department of Justice successfully used the Sherman Antitrust Act to force a breakup of Standard Oil. This success made the threat of antitrust action against AT&T very credible. Both Postal Telegraph and the independent telephone companies wishing to interconnect with AT&T lobbied for government regulation. In order to forestall any such government action, AT&T issued the “Kingsbury Commitment,” a unilateral commitment to divest itself of Western Union and allow independent telephone firms to interconnect.

Decline of the Telegraph

The telegraph flourished in the 1920s, but the Great Depression hit the industry hard, and it never recovered to its previous position. AT&T introduced the teletypewriter exchange service in 1931. The teletypewriter and the Telex allowed customers to install a machine on their premises that would send and receive messages directly. In 1938, AT&T had 18%, Postal 15%, and Western Union 64% of telegraph traffic. In 1945, 236 million domestic messages were sent, generating $182 million in revenues. This was the most messages sent in a year over the telegraph network in the United States. By that time, Western Union had incorporated over 540 telegraph and cable companies into its system. The last important merger, between Western Union and Postal, occurred in 1945. This final merger was not enough to stop the continuing rise of the telephone or the telegraph’s decline. Already in 1945, AT&T’s revenues and traffic dwarfed those of Western Union. AT&T made $1.9 billion in yearly revenues by transmitting 89.4 million local phone calls and 4.9 million toll calls daily. Table 2 shows the increasing competitiveness of telephone rates with telegraph rates.

Table 2: Telegraph and Telephone Rates from New York City to Chicago: 1850-1970

Date    Telegraph*    Telephone**
1850    $1.55         n/a
1870    1.00          n/a
1890    .40           n/a
1902    n/a           5.45
1919    .60           4.65
1950    .75           1.50
1960    1.45          1.45
1970    2.25          1.05

Source: Historical Statistics.
Notes: * Beginning 1960, for a 15-word message; prior to 1960, for a 10-word message. ** Rates for a station-to-station, daytime, 3-minute call.

The Effects of the Telegraph

The travel time from New York City to Cleveland in 1800 was two weeks, with another four weeks necessary to reach Chicago. By 1830, those travel times had fallen in half, and by 1860 it took only two days to reach Chicago from New York City. However, by use of the telegraph, news could travel between those two cities almost instantaneously. This section examines three instances where the telegraph affected economic growth: railroads, high throughput firms, and financial markets.

Telegraphs and Railroads

The telegraph and the railroad were natural partners in commerce. The telegraph needed the right of way that the railroads provided and the railroads needed the telegraph to coordinate the arrival and departure of trains. These synergies were not immediately recognized. Only in 1851 did railways start to use telegraphy. Prior to that, telegraph wires strung along the tracks were seen as a nuisance, occasionally sagging and causing accidents and even fatalities.

The greatest savings from the telegraph came from allowing the continued use of single-tracked railroad lines. Prior to 1851, the U.S. system was single-tracked, and trains ran on a time-interval system. Two types of accidents could occur: trains running in opposite directions could collide head-on, and trains running in the same direction could run into one another. The potential for accidents required that railroad managers be very careful in dispatching trains. One way to reduce the number of accidents would have been to double-track the system. A second, better way was to use the telegraph.

Double-tracking was a good alternative, but not a perfect one. Double-tracked lines would eliminate head-on collisions but not same-direction ones; dispatching trains running in the same direction would still rely on a timing system, i.e., requiring a time interval between departing trains, so accidents remained possible. By using the telegraph, station managers knew exactly which trains were on the tracks under their supervision. Double-tracking the U.S. rail system in 1893 was estimated to cost $957 million, while Western Union’s book capitalization in 1893 was $123 million, making the telegraph seem a bargain by comparison. Of course, the railroads could have used a system like Chappe’s visual telegraph to coordinate traffic, but such a system would have been less reliable and unable to handle the same volume of traffic.
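The cost comparison above is simple arithmetic, sketched here using only the figures quoted in the text:

```python
# Estimated 1893 costs, in dollars, as reported in the text.
double_track_cost = 957_000_000   # double-tracking the entire U.S. rail system
western_union_cap = 123_000_000   # Western Union's book capitalization

# How many times over could the telegraph network have been built
# for the price of double-tracking?
ratio = double_track_cost / western_union_cap
print(f"Double-tracking would have cost about {ratio:.1f}x "
      f"Western Union's entire capitalization")
```

By this rough measure, the telegraph delivered comparable dispatching safety at roughly one-eighth the capital cost of double-tracking.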

Telegraph and Perishable Products Industries

Other industries with high inventory turnover also benefited from the telegraph. Of particular importance were industries in which the product was perishable, including meatpacking and the distribution of fruits and vegetables. The growth of both of these industries was facilitated by the introduction of the refrigerated car in 1874, while the telegraph provided the exact control of shipments that they required. For instance, refrigeration and the telegraph allowed for the slaughter and disassembly of livestock in the giant stockyards of Chicago, Kansas City, St. Louis, and Omaha. Beef could then be shipped east at half the cost of shipping live cattle. The centralization of the stockyards also created tremendous amounts of by-products that could be processed into glue, tallow, dye, fertilizer, feed, brushes, false teeth, gelatin, oleomargarine, and many other useful products.

Telegraph and Financial Markets

The telegraph undoubtedly had a major impact on the structure of financial markets in the United States. New York became the financial center of the country, setting prices for a variety of commodities and financial instruments. Among these were beef, corn, wheat, stocks and bonds. As the telegraph spread, so too did the centralization of prices. For instance, in 1846, wheat and corn prices in Buffalo lagged four days behind those in New York City. In 1848, the two markets were linked telegraphically and prices were set simultaneously.

The centralization of stock prices helped make New York the financial capital of the United States. Over the course of the nineteenth century, hundreds of exchanges appeared and then disappeared across the country. Few survived; only those in New York, Philadelphia, Boston, Chicago, and San Francisco achieved any permanence. By 1910, 90 percent of all bond trades and two-thirds of all stock trades occurred on the New York Stock Exchange.

Centralization of the market created much more liquidity for stockholders. As the number of potential traders increased, so too did the ability to find a buyer or seller of a financial instrument. This increase in liquidity may have led to an increase in the total amount invested in the market, therefore leading to higher levels of investment and economic growth. Centralization may also have led to the development of certain financial institutions that could not have been developed otherwise. Although difficult to quantify, these aspects of centralization certainly had a positive effect on economic growth.

In some respects, we may tend to overestimate the telegraph’s influence on the economy. The rapid distribution of information may have had a collective action problem associated with it. If no one else in Buffalo has a piece of information, such as a change in the price of wheat in New York City, then there is a large private incentive to discover that piece of information quickly. But once everyone has the information, no one is made better off. A great deal of effort may have been spent on an endeavor that, from society’s perspective, did not increase overall efficiency. The centralization of trading in New York also increased the gains from other wealth-neutral or wealth-reducing activities, such as speculation and market manipulation. Higher volumes of trading increased the payoff from the successful manipulation of a market, yet did not increase society’s wealth.

Conclusion

The telegraph accelerated the speed of business transactions during the late nineteenth century and contributed to the industrialization of the United States. Like most industries, it eventually faced new competition that caused its downfall. The telephone was easier and faster to use, and the telegraph ultimately lost its cost advantage. In 1988, Western Union divested itself of its telegraph infrastructure and focused on financial services, such as money orders. A Western Union telegram is still available, currently costing $9.95 for 250 words.

Telegraph Timeline

1837 Cooke and Wheatstone patent telegraph in England.
1838 Morse’s Electro-Magnetic Telegraph patent approved.
1844 First message sent between Washington and Baltimore.
1846 First commercial telegraph line completed. The Magnetic Telegraph Company’s lines ran from New York to Washington.
House’s Printing Telegraph patent approved.
1848 Associated Press formed to pool telegraph traffic.
1849 Bain’s Electro-Chemical patent approved.
1851 Hiram Sibley and associates incorporate New York and Mississippi Valley Printing Telegraph Company. Later became Western Union.
1851 Telegraph first used to coordinate train departures.
1857 Treaty of Six Nations is signed, creating a national cartel.
1858 First transatlantic cable is laid from Newfoundland to Valentia, Ireland. Fails after 23 days, having been used to send a total of 4,359 words. Total cost of laying the line was $1.2 million.
1861 First Transcontinental telegraph completed.
1866 First successful transatlantic telegraph cable laid.
Western Union merges with major remaining rivals.
1867 Stock ticker service inaugurated.
1870 Western Union introduces the money order service.
1876 Alexander Graham Bell patents the telephone.
1909 AT&T gains control of Western Union. Divests itself of Western Union in 1913.
1924 AT&T offers Teletype system.
1926 Inauguration of the direct stock ticker circuit from New York to San Francisco.
1930 High-speed tickers can print 500 words per minute.
1945 Western Union and Postal Telegraph Company merge.
1962 Western Union offers Telex for international teleprinting.
1974 Western Union places Westar satellite in operation.
1988 Western Union Telegraph Company reorganized as Western Union Corporation. The telecommunications assets were divested, and Western Union focused on money transfers and loan services.

References

Blondheim, Menahem. News over the Wires. Cambridge: Harvard University Press, 1994.

Brock, Gerald. The Telecommunications Industry. Cambridge: Harvard University Press, 1981.

DuBoff, Richard. “Business Demand and the Development of the Telegraph in the United States, 1844-1860.” Business History Review 54 (1980): 461-477.

Field, Alexander. “The Telegraphic Transmission of Financial Asset Prices and Orders to Trade: Implications for Economic Growth, Trading Volume, and Securities Market Regulation.” Research in Economic History 18 (1998).

Field, Alexander. “French Optical Telegraphy, 1793-1855: Hardware, Software, Administration.” Technology and Culture 35 (1994): 315-47.

Field, Alexander. “The Magnetic Telegraph, Price and Quantity Data, and the New Management of Capital.” Journal of Economic History 52 (1992): 401-13.

Gabler, Edwin. The American Telegrapher: A Social History 1860-1900. New Brunswick: Rutgers University Press, 1988.

Goldin, H. H. “Governmental Policy and the Domestic Telegraph Industry.” Journal of Economic History 7 (1947): 53-68.

Israel, Paul. From Machine Shop to Industrial Laboratory. Baltimore: Johns Hopkins, 1992.

Lefferts, Marshall. “The Electric Telegraph: its Influence and Geographical Distribution.” American Geographical and Statistical Society Bulletin, II (1857).

Nonnenmacher, Tomas. “State Promotion and Regulation of the Telegraph Industry, 1845-1860.” Journal of Economic History 61 (2001).

Oslin, George. The Story of Telecommunications. Macon: Mercer University Press, 1992.

Reid, James. The Telegraph in America. New York: Polhemus, 1886.

Thompson, Robert. Wiring a Continent, Princeton: Princeton University Press, 1947.

U.S. Bureau of the Census. Report of the Superintendent of the Census for December 1, 1852, Washington: Robert Armstrong, 1853.

U.S. Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970: Bicentennial Edition, Washington: GPO, 1976.

Yates, JoAnne. “The Telegraph’s Effect on Nineteenth Century Markets and Firms.” Business and Economic History 15 (1986):149-63.

Citation: Nonnenmacher, Tomas. “History of the U.S. Telegraph Industry”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/history-of-the-u-s-telegraph-industry/

An Economic History of Weather Forecasting

Erik D. Craft, University of Richmond

Introduction

The United States Congress established a national weather organization in 1870 when it instructed the Secretary of War to organize the collection of meteorological observations and forecasting of storms on the Great Lakes and Atlantic Seaboard. Large shipping losses on the Great Lakes during the 1868 and 1869 seasons, growing acknowledgement that storms generally traveled from the West to the East, a telegraphic network that extended west of the Great Lakes and the Atlantic Seaboard, and an eager Army officer promising military discipline are credited with convincing Congress that a storm-warning system was feasible. The United States Army Signal Service weather organization immediately dwarfed its European counterparts in budget and geographical size and shortly thereafter created storm warnings that on the Great Lakes alone led to savings in shipping losses that exceeded the entire network’s expenses.

Uses of Weather Information

Altering Immediate Behavior

The most obvious use of weather information is to change behavior in response to expected weather outcomes. The motivating force behind establishing weather organizations in England, France, Germany, and the United States was to provide warnings to ships of forthcoming storms, so that the ships might remain in harbor. But it soon became obvious that agricultural and commercial interests would benefit from weather forecasts as well. Farmers could protect fruit sensitive to freezes, and shippers could limit spoilage of produce while en route. Beyond preparation for severe weather, weather forecasts are now created for ever more specialized activities: implementing military operations, scheduling operation of power generation facilities, routing aircraft safely and efficiently, planning professional sports teams’ strategies, estimating demand for commodities sensitive to weather outcomes, planning construction projects, and optimizing the use of irrigation and reservoir systems’ resources.

Applying Climatological Knowledge

Climatological data can be used to match crop varieties, construction practices, and other activities appropriately to different regions. For example, in 1947 the British Government planned to grow groundnuts on 3.2 million acres in East and Central Africa. The groundnut was chosen because it was suited to the average growing conditions of the chosen regions. But due to a lack of understanding of the variance in the amount and timing of rainfall, the project was abandoned after five years, initial capital outlays of 24 million British pounds, and annual operating costs of 7 million pounds. The preparation of ocean wind and weather charts in the 1850s by Matthew Fontaine Maury, Superintendent of the U.S. Navy’s Depot of Charts and Instruments, identified better routes for vessels sailing between America and Europe and from the United States East Coast to the West Coast. The reduced sailing durations are alleged to have saved millions of dollars annually. Climatological data can also be used in modern environmental forecasts of air quality and how pollution is dispersed in the air. There are even forensic meteorologists who specialize in identifying weather conditions at a given point in time for accident investigations and subsequent litigation. Basic climatological information is also one reason why the United States cinema industry became established in Southern California: it was known that a high percentage of all days were sunny, so that outdoor filming would not be delayed.

Smoothing Consumption of Weather-Sensitive Commodities

An indirect use of weather forecasts and subsequent weather occurrences is their influence on the prices of commodities that are affected by weather outcomes. Knowledge that growing conditions will be poor or have been poor will lead to expectations of a smaller crop harvest. This causes expected prices of the crop to rise, thereby slowing consumption. This is socially efficient, since the present inventory and now smaller future harvest will have to be consumed more slowly over the time period up until the next season’s crop can be planted, cultivated, and harvested. Without an appropriate rise in price after bad weather outcomes, an excessive depletion of the crop’s inventory could result, leading to more variability in the consumption path of the commodity. People generally prefer consuming their income and individual products in relatively smooth streams, rather than in large amounts in some periods and small amounts in other periods. Both improved weather forecasts and United States Department of Agriculture crop forecasts help buyers more effectively consume a given quantity of a crop.

The History of Weather Forecasts in the United States

An important economic history question is whether or not it was necessary for the United States Federal Government to found a weather forecasting organization. There are two challenges in answering that question: establishing that the weather information was socially valuable and determining if private organizations were incapable of providing the appropriate level of services. Restating the latter issue, did weather forecasts and the gathering of climatological information possess enough attributes of a public good such that private organizations would create an insufficiently large amount of socially beneficial information? There are also two parts to this latter public good problem: nonexcludability and nonrivalry. Could private producers of weather information create a system whereby they earned enough money from users of weather information to cover the costs of creating the information? Would such a weather system be of the socially optimal size?

Potential Organizational Sources of Weather Forecasts

There were many organizations during the 1860s that the observer might imagine would benefit from the creation of weather forecasts. After the consolidation of most telegraphic service in the United States into Western Union in 1866, an organization with employees throughout the country existed. The Associated Press had a weather-reporting network, but there is no evidence that it considered supplementing its data with forecasts. One Ebenezer E. Merriam began supplying New York newspapers with predictions in 1856. Many years later, astronomer turned Army Signal Service forecaster Cleveland Abbe concluded that Merriam made his predictions using newspaper weather reports. The Chicago Board of Trade declined an invitation in 1869 to support a weather forecasting service based in Cincinnati. Neither ship-owners nor marine insurers appear to have expressed any interest in creating or buying weather information. Great Lakes marine insurers had already overcome organizational problems by forming the Board of Lake Underwriters in 1855. For example, the group incurred expenses of over $11,000 in 1861 inspecting vessels and providing ratings on behalf of its members in the annual Lake Vessel Register. The Board of Lake Underwriters even had nine inspectors distributed on the Great Lakes to inspect wrecks on behalf of its members. Although there was evidence that storms generally traveled from west to east, none of these groups apparently expected its own benefits to exceed the costs of establishing the network necessary to provide useful weather information.

Cleveland Abbe at the Cincinnati Observatory began the most serious attempt to establish a quasi-private meteorological organization in 1868 when he sought financial support from the Associated Press, Western Union, local newspapers, and the Cincinnati Chamber of Commerce. His initial plan included a system of one hundred reporting stations, with the Associated Press covering the $100 instrument costs at half of the stations and the dispatch costs. In the following year, he widened his scope to include the Chicago Board of Trade and individual subscribers and proposed a more limited network of between sixteen and twenty-two stations. The Cincinnati Chamber of Commerce, whose president published the Cincinnati Commercial, funded the experiment from September through November of 1869. Abbe likely never had more than ten observers report on any given day and could not maintain more than about thirty local subscribers for his service, which provided at most only occasional forecasts. Abbe continued to receive assistance from Western Union in the collection and telegraphing of observations after the three-month trial, but he fell short of raising the funds needed to expand his network to support weather forecasts. His ongoing “Weather Bulletin of the Cincinnati Observatory” was not even published in the Cincinnati Commercial.

Founding of the Army Signal Service Weather Organization

Just as the three-month trial of Abbe’s weather bulletin concluded, Increase A. Lapham, a Milwaukee natural scientist, distributed his second list of Great Lakes shipping losses, entitled “Disaster on the Lakes.” The list included 1,164 vessel casualties, 321 deaths, and $3.1 million in property damage in 1868, and 1,914 vessel casualties, 209 lives lost, and $4.1 million in financial losses in 1869. The number of ships totally destroyed was 105 and 126 in each year, respectively. According to a separate account, the storm of November 16-19, 1869 alone destroyed vessels whose value exceeded $420,000. Lapham’s list of losses included a petition to establish a weather forecasting service. In 1850, he had prepared a similar proposal alongside a list of shipping losses, and twice during the 1850s he had tracked barometric lows across Wisconsin to provide evidence that storms could be forecast.

Recipients of Lapham’s petitions included the Wisconsin Academy of Sciences, the Chicago Academy of Sciences, the National Board of Trade meeting in Richmond, a new Chicago monthly business periodical entitled The Bureau, and Congressman Halbert E. Paine of Milwaukee. Paine had studied meteorological theories under Professor Elias Loomis at Western Reserve College and would introduce storm-warning service bills and eventually the final joint resolution in the House that gave the Army Signal Service storm-warning responsibilities. In his book Treatise on Meteorology (1868), Loomis claimed that the approach of storms to New York could be predicted reliably given telegraphic reports from several locations in the Mississippi Valley. From December 1869 through February 1870, Lapham’s efforts received wider attention. The Bureau featured nine pieces on meteorology from December until March, including at least two by Lapham.

Following the Civil War, the future of a signaling organization in the Army was uncertain. Having had budget requests for telegraph and signal equipment for years 1870 and 1871 cut in half to $5,000, Colonel Albert J. Myer, Chief Signal Officer, led a small organization seeking a permanent existence. He visited Congressman Paine’s office in December of 1869 with maps showing proposed observation stations throughout the United States. Myer’s eagerness for the weather responsibilities, as well as the discipline of the Army organization and a network of military posts in the West, many linked via telegraph, would appear to have made the Army Signal Service a natural choice. The marginal costs of an Army weather organization using Signal Service personnel included only instruments and commercial telegraphy expenses. On February 4, 1870, Congress approved the joint resolution, which “authorizes and requires the Secretary of War to provide for taking of meteorological observations . . . and for giving notice on the northern lakes and on the sea-coast of the approach and force of storms.” Five days later, President Grant signed the bill.

Expansion of the Army Signal Service’s Weather Bureau

Observer-sergeants in the Signal Service recorded their first synchronous observations at twenty-four stations on November 1, 1870, at 7:35 a.m. Washington time. The storm-warning system began formal operation on October 23, 1871, with potential flag displays at eight ports on the Great Lakes and sixteen ports on the Atlantic seaboard. At that time, only fifty general observation stations existed. By June 1872, Congress had expanded the Army Signal Service’s explicit forecast responsibilities, via an appropriations act, to most of the United States “for such stations, reports, and signal as may be found necessary for the benefit of agriculture and commercial interests.” In 1872, the Signal Service also began publication of the Weekly Weather Chronicle during the growing seasons. It disappeared in 1877, reemerging in 1887 as the Weather Crop Bulletin. By the fall of 1872, confidence in the utility of weather information was so high that 89 agricultural societies and 38 boards of trade and chambers of commerce had appointed meteorological committees to communicate with the Army Signal Service. In addition to dispensing general weather forecasts for regions of the country three times a day, the Signal Service soon sent special warnings to areas in danger of cold waves and frosts.

The original method of warning ships of dangerous winds was hoisting a single red flag with a black square located in the middle. This was known as a cautionary signal, and Army personnel at Signal Service observation stations or civilians at display stations would raise the flag on a pole “whenever the winds are expected to be as strong as twenty-five miles per hour, and to continue so for several hours, within a radius of one hundred miles from the station.” In the first year of operation ending 1 September 1872, 354 cautionary signals were flown on both the Great Lakes and the Atlantic Seaboard, approximately 70% of which were verified as having met the above definition. Such a measure of accuracy is incomplete, however, as it can always be raised artificially by not forecasting storms under marginal conditions, even though such a strategy might diminish the value of the service.
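The trade-off between a high verification rate and a useful warning service can be made concrete with a small numerical sketch. The counts below are hypothetical (only the 354 warnings and roughly 70% verification figure come from the text): a forecaster who flags only near-certain storms scores a higher verification rate, yet warns of fewer of the storms that actually occur.

```python
# Hypothetical illustration: verification rate vs. share of actual storms
# warned, under a "bold" and a "cautious" flag-raising policy.
def verification_rate(flown, verified):
    """Fraction of flown warnings that met the wind criterion."""
    return verified / flown

def storms_warned(storms, warned):
    """Fraction of actual storms preceded by a warning."""
    return warned / storms

bold = {"flown": 354, "verified": 248, "storms": 300, "warned": 240}
cautious = {"flown": 150, "verified": 135, "storms": 300, "warned": 130}

print(round(verification_rate(bold["flown"], bold["verified"]), 2))         # 0.7
print(round(verification_rate(cautious["flown"], cautious["verified"]), 2)) # 0.9
print(round(storms_warned(bold["storms"], bold["warned"]), 2))              # 0.8
print(round(storms_warned(cautious["storms"], cautious["warned"]), 2))      # 0.43
```

The cautious policy "verifies" more often only because it declines to warn in marginal conditions, which is exactly why verification rate alone understates the value of the bolder service.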

The United States and Canada shared current meteorological information beginning in 1871. By 1880, seventeen Canadian stations reported meteorological data to the United States at least twice daily by telegraph. The number of Army Signal Service stations providing telegraphic reports three times a day stabilized at 138 stations in 1880, dipped to 121 stations in 1883, and grew to approximately 149 stations by 1888. (See Table 1 for a summary of the growth of the Army Signal Service Meteorological Network from 1870 to 1890.) Additional display stations provided only storm warnings at sea and lake ports. River stations monitored water levels in order to forecast floods. Special cotton-region stations, beginning in 1883, comprised a dense network of daily reporters of rainfall and maximum and minimum temperatures. Total Army Signal Service expenditures grew from a $15,000 supplemental appropriation for weather operations in fiscal year 1870 to about one million dollars for all Signal Service costs around 1880 and stabilized at that level. Figure 1 shows the geographical extent of the Army Signal Service telegraphic observation network in 1881.

Figure 1: Army Signal Service Observation Network in 1881
Source: Map between pages 250-51, Annual Report of the Chief Signal Officer, October 1, 1881, Congressional Serial Set Volume 2015. See the detailed map between pages 304-05 for the location of each of the different types of stations listed in Table 1.

Table 1: Growth of the United States Army Signal Service Meteorological Network

Year | Budget (Real 1880 Dollars) | Second Order | Third Order | Repair | Display | River | Cotton-Region
1870 |    32,487 |  25 |    |    |    |    |
1871 |   112,456 |  54 |    |    |    |    |
1872 |   220,269 |  65 |    |    |    |    |
1873 |   549,634 |  80 |  9 |    |    |    |
1874 |   649,431 |  92 | 20 |    |    |    |
1875 |   749,228 |  98 | 20 |    |    |    |
1876 |   849,025 | 106 | 38 | 23 |    |    |
1877 |   849,025 | 116 | 29 | 10 |  9 | 23 |
1878 |   978,085 | 136 | 36 | 12 | 11 | 23 |
1879 | 1,043,604 | 158 | 30 | 17 | 46 | 30 |
1880 | 1,109,123 | 173 | 39 | 49 | 50 | 29 |
1881 | 1,080,254 | 171 | 47 | 44 | 61 | 29 |  87
1882 |   937,077 | 169 | 45 |  3 | 74 | 30 | 127
1883 |   950,737 | 143 | 42 | 27 |  7 | 30 | 124
1884 | 1,014,898 | 138 | 68 |  7 | 63 | 40 | 138
1885 | 1,085,479 | 152 | 58 |  8 | 64 | 66 | 137
1886 | 1,150,673 | 146 | 33 | 11 | 66 | 69 | 135
1887 | 1,080,291 | 145 | 31 | 13 | 63 | 70 | 133
1888 | 1,063,639 | 149 | 30 | 24 | 68 | 78 | 116
1889 | 1,022,031 | 148 | 32 | 23 | 66 | 72 | 114
1890 |   994,629 | 144 | 34 | 15 | 73 | 72 | 114

Sources: Report of the Chief Signal Officer: 1888, p. 171; 1889, p. 136; 1890, p. 203; and “The Provision and Value of Weather Information Services,” Craft (1995), p. 34.

Notes: The actual total budgets for years 1870 through 1881 are estimated. Stations of the second order recorded meteorological conditions three times per day; most immediately telegraphed the data. Stations of the third order recorded observations at sunset. Repair stations maintained Army telegraph lines. Display stations displayed storm warnings on the Great Lakes and Atlantic seaboard. Special river stations monitored water levels in order to forecast floods. Special cotton-region stations collected high temperature, low temperature, and precipitation data from a denser network of observation locations. Blank cells indicate that a station type was not yet reported.

Early Value of Weather Information

Budget reductions in the Army Signal Service’s weather activities in 1883 led to a reduction of fall storm-warning broadcast locations on the Great Lakes from 80 in 1882 to 43 in 1883. This one-year drop in the availability of storm warnings creates a special opportunity to measure the value of warnings of extremely high winds on the Great Lakes (see Figure 2). Many other factors can be expected to affect the value of shipping losses on the Great Lakes: the level of commerce in a given season, the amount of shipping tonnage available to haul a season’s commerce, the relative composition of the tonnage (steam versus sail), the severity of the weather, and long-term trends in technological change or safety. Using a statistical technique known as multiple regression, in which the effects of these many factors on shipping losses are analyzed concurrently, Craft (1998) argued that each extra storm-warning location on the Great Lakes lowered losses by about one percent. This implies that the storm-warning system reduced losses on the Great Lakes by approximately one million dollars annually in the mid-1870s and between $1 million and $4.5 million per year by the early 1880s.

Source: The data are found in the following: Chicago Daily Inter Ocean (December 5, 1874 p. 2; December 18, 1875; December 27, 1876 p. 6; December 17, 1878; December 29, 1879 p. 6; February 3, 1881 p. 12; December 28, 1883 p. 3; December 5, 1885 p. 4); Marine Record (December 27, 1883 p. 5; December 25, 1884 pp. 4-5; December 24, 1885 pp. 4-5; December 30, 1886 p. 6; December 15, 1887 pp. 4-5); Chief Signal Officer, Annual Report of the Chief Signal Officer, 1871-1890.

Note: Series E 52 of the Historical Statistics of the United States (U.S. Bureau of the Census, 1975) was used to adjust all values to real 1880 dollars.
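The regression logic behind the one-percent-per-location estimate can be sketched in code. The data below are simulated, not Craft’s series, and the variable names and coefficient values are illustrative assumptions; the point is only to show how a semi-log specification makes “each extra warning location lowers losses by about one percent” readable directly off a coefficient.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Simulated explanatory variables (hypothetical, for illustration only)
locations = rng.uniform(20, 80, n)   # storm-warning display locations
tonnage = rng.uniform(100, 300, n)   # available shipping tonnage (a control)

# Simulate log shipping losses with a true semi-elasticity of -0.01:
# each extra warning location lowers losses by about one percent.
log_losses = 10.0 - 0.01 * locations + 0.002 * tonnage + rng.normal(0, 0.05, n)

# Ordinary least squares: regress log losses on a constant and both factors
X = np.column_stack([np.ones(n), locations, tonnage])
beta, *_ = np.linalg.lstsq(X, log_losses, rcond=None)

# beta[1] recovers a value close to -0.01, i.e. roughly a one percent
# reduction in losses per additional storm-warning location.
print(round(beta[1], 3))
```

Because the dependent variable is in logs, the coefficient on `locations` is a percentage effect, which is what allows a count of warning locations to be translated into dollar savings given the level of losses.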

There are additional indirect methods with which to confirm the preceding estimate of the value of early weather information. If storm warnings actually reduced the risk of damage to cargo and ships due to bad weather, then the cost of shipping cargo would be expected to decline. In particular, such reductions in shipping prices due to savings in losses caused by storms can be differentiated from other types of technological improvements by studying how fall shipping prices changed relative to summer shipping prices, because it was during the fall that ships were particularly vulnerable to accidents caused by storms. Changes in shipping prices of grain from Chicago to Buffalo during the summers and falls from the late 1860s to the late 1880s imply that storm warnings were valuable and are consistent with the more direct method of estimating reductions in shipping losses. Although marine insurance premium data for shipments on the Great Lakes are limited and difficult to interpret due to the waxing and waning of the insurance cartel’s cohesion, such data are also supportive of the overall interpretation.

Given Army Signal Service budgets of about one million dollars for providing meteorological services to the entire United States, a reasonable minimum bound for the rate of return to the creation of weather information from 1870 to 1888 is 64 percent. The figure includes no social benefits from any weather information other than Great Lakes storm warnings. This estimate implies that the creation and distribution of storm warnings by the United States Federal Government was a socially beneficial investment.
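A rate-of-return bound of this kind is an internal-rate-of-return calculation on a stream of yearly net flows (benefits minus costs). The sketch below shows the mechanics with a hypothetical stream in millions of dollars, not Craft’s actual series: early years of net cost while the network is built, later years in which Great Lakes savings exceed the roughly one-million-dollar budget.

```python
def npv(rate, flows):
    """Net present value of yearly flows, with flows[0] at time zero."""
    return sum(f / (1.0 + rate) ** t for t, f in enumerate(flows))

def irr(flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return by bisection; assumes NPV falls as rate rises."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical net flows in $ millions: net costs at first, then benefits
# in excess of the budget as the warning network matures.
flows = [-0.5, -0.3, 0.2, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
print(round(irr(flows), 2))
```

The internal rate of return is simply the discount rate at which the stream’s net present value is zero; counting only Great Lakes storm-warning benefits makes the resulting figure a lower bound on the true return.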

Transfer of Weather Services to the Department of Agriculture

The Allison Commission hearings in 1884 and 1885 sought to determine the appropriate organization of Federal agencies whose activities included scientific research. The Allison Commission’s long report included testimony and discussion relating to the organization of the Army Signal Service, the United States Geological Survey, the Coast and Geodetic Survey, and the Navy Hydrographic Office. Weather forecasting required a reliable network of observers, some of whom were the sole Army personnel at a location. Advantages of a military organizational structure included a greater range of disciplinary responses to deficient job performance, including courts-martial for soldiers. Problems of the military organization, however, included the limited ability to increase one’s rank while working for the Signal Service and tension between the civilian and military personnel. In 1891, after an unsuccessful Congressional attempt at reform in 1887, the Weather Bureau became a civilian organization when it joined the young Department of Agriculture.

Aviation and World War I

Interest in upper air weather conditions grew rapidly after the turn of the century on account of two related events: the development of aviation and World War I. Safe use of aircraft depended on more precise knowledge of weather conditions (winds, storms, and visibility) between takeoff and landing locations. Not only were military aircraft introduced during World War I, but understanding wind conditions was also crucial to the use of poison gas on the front lines. In the most important change of the Weather Bureau’s organizational direction since its transfer to the Department of Agriculture, Congress passed the Air Commerce Act in 1926, which by 1932 led to 38% of the Weather Bureau’s budget being directed toward aerology research and support.

Transfer of the Weather Bureau to the Department of Commerce

Even though aerological expenditures by the Weather Bureau in support of aviation rivaled funding for general weather services by the late 1930s, the Weather Bureau came under increasing criticism from aviation interests. The Weather Bureau was transferred to the Department of Commerce in 1940, where other support for aviation already originated. This transition mirrored the declining role of agriculture in the United States and the movement toward a more urban economy. Subsequently renamed the National Weather Service, it has remained there since.

World War II

During World War II, weather forecasts assumed greater importance, as aircraft and rapid troop movements became key parts of military strategy. Accurate long-range artillery use also depended on knowledge of prevailing winds. For an example of the extensive use of weather forecasts and climatological information during wartime, consider Allied plans to strike the German oil refineries in Ploesti, Romania. In the winter of 1943 military weather teams parachuted into the mountains of Yugoslavia to relay weather data. Bombers from North Africa could reach the refineries only in the absence of headwinds in either direction of the sortie. Cloud cover en route was important for protection, clear skies were helpful for identification of targets, and southerly winds permitted the bombers to drop their ordnance on the first pass on the south side of the area’s infrastructure, allowing the winds to assist in spreading the fire. Historical data indicated that only March or August offered possible windows. Though many aircraft were lost, the August 1 raid was considered a success.

Tide, wind, and cloud conditions were also crucial in the planning of the invasion of Normandy (planned for June 5 and postponed until June 6 in 1944). The German High Command had been advised by its chief meteorologist that conditions were not opportune for an Allied invasion on the days following June 4. Dissension among American and British military forecasters nearly delayed the invasion further. Had it been deferred until the next date of favorable tide conditions, the invasion would have taken place during the worst June storm in twenty years in the English Channel.

Forecasting in Europe

A storm on November 14, 1854 destroyed the French warship Henri IV and damaged other British and French vessels on the Black Sea involved in the Crimean War. A report from the state-supported Paris Observatory indicated that barometric readings showed that the storm had passed across Europe in about four days. Urbain Leverrier, director of the Paris Observatory, concluded that had there been a telegraph line between Vienna and the Crimea, the British and French fleets could have received warnings. Although the United States weather network was preceded by storm-warning systems in the Netherlands in 1860, Great Britain in 1861, and France in 1863, the new United States observation network immediately dwarfed the European organizations in both financial resources and geographical magnitude.

Robert FitzRoy, captain of the Beagle during Darwin’s famous voyage, was appointed director of the Meteorological Department established by the British Board of Trade (a government organization) in 1854. The wreck of the well-constructed iron vessel Royal Charter in a storm, with much loss of life, in October of 1859 provided another opportunity for a meteorological leader to argue that storms could be tracked and forecast. With support from the Prince Consort, FitzRoy and the Meteorological Department were granted approval to establish a storm-warning service. On February 6, 1861 the first warnings were issued, and by August 1861 weather forecasts were issued regularly. By 1863, the Meteorological Department had a budget of three thousand English pounds. Criticism arose from different groups. Scientists wished to establish meteorology on a sound theoretical foundation and differentiate it from astrology. At the time, many publishers of weather almanacs subscribed to various theories of the influence of the moon or other celestial bodies on weather. (This is not as outlandish as one might suppose; in 1875 the well-known economist William Stanley Jevons studied connections linking sunspot activity and meteorology with business cycles.) Some members of this second group supported the practice of forecasting but were critical of FitzRoy’s technique, perhaps hoping to become alternative sources of forecasts. Amidst the criticism, FitzRoy committed suicide in 1865. Forecasts and warnings were discontinued in 1866; the warnings resumed two years later, but general forecasts were suspended until 1877.

In 1862, Leverrier wrote the French Ministry of Public Education that French naval and commercial interests might be compromised by their dependence on warnings from the British Board of Trade. A storm-warning service in France commenced in July of 1863. Given that storms generally move from west to east, neither France nor Britain had the luxury of tracking storms well before they arrived, as would have been possible with the November 1854 storm in the Crimea and as the Army Signal Service soon would be able to do in America. On account of administrative difficulties that were to hinder effective functioning of the service until 1877, French warnings ceased in October 1865 but resumed in May of the next year. The French Central Bureau of Meteorology was founded only in 1878, with a budget of just $12,000.

After the initiation of storm warning systems that preceded the Army Signal Service weather network, Europe would not achieve meteorological prominence again until the Bergen School of meteorology developed new storm analysis techniques after World War I, which incorporated cold and warm fronts. In the difficult days in Norway during the conclusion of the Great War, meteorological information from the rest of Europe was unavailable. Theoretical physicist turned meteorological researcher Vilhelm Bjerknes appealed to Norway’s national interests in defense, in the development of commercial aviation, and in increased agricultural output to build a dense observation network, whose data helped yield a new paradigm for meteorology.

Conclusion

The first weather forecasts in the United States that were based on a large network of simultaneous observations provided information to society that was much more valuable than the cost of production. There was discussion in the early winter of 1870 between the scientist Increase Lapham and a businessman in Chicago about the feasibility of establishing a private forecasting organization in Wisconsin or Illinois (see Craft 1999). But previous attempts by private organizations in the United States had been unsuccessful in supporting any private weather-forecasting service. In the contemporary United States, the Federal government both collects data and offers forecasts, while private weather organizations provide a variety of customized services.

Weather Forecasting Timeline

1743

Benjamin Franklin, using reports of numerous postmasters, determined the northeastward path of a hurricane from the West Indies.

1772-1777

Thomas Jefferson at Monticello, Virginia and James Madison at Williamsburg, Virginia collect a series of contemporaneous weather observations.

1814

Surgeon General Tilton issues an order directing Army surgeons to keep a diary of the weather in order to ascertain any influences of weather upon disease.

1817

Josiah Meigs, Commissioner of the General Land Office, requests officials at land offices to record meteorological observations.

1846-1848

Matthew F. Maury, Superintendent of the U.S. Naval Observatory, publishes his first charts compiled from ships’ logs showing efficient sailing routes.

1847

Barometer used to issue storm warnings in Barbados.

1848

J. Jones of New York advertises meteorological reports costing between twelve and one-half and twenty-five cents per city per day. There is no evidence the service was ever sold.

1848

Publication in the British Daily News of the first telegraphic daily weather report.

1849

The Smithsonian Institution begins a nearly three decade long project of collecting meteorological data with the goal of understanding storms.

1849

Captain Joseph Brooks, manager of the Portland Steamship Line, receives telegraphic reports three times a day from Albany, New York, and Plattsburg in order to determine if the line’s ships should remain in port in Maine.

1853-1855

Ebenezer E. Merriam of New York, using newspaper telegraphic reports, offers weather forecasts in New York’s newspapers on an apparently irregular basis.

1858

The U.S. Army Engineers begin collecting meteorological observations while surveying the Great Lakes.

1860

Christoph Buys Ballot issues first storm warnings in the Netherlands.

1861

Admiral Robert FitzRoy of the British Meteorological Office begins issuing storm-warnings.

1863

Urbain Leverrier, director of the Paris Observatory, organizes a storm-warning service.

1868

Cleveland Abbe of the Cincinnati Observatory unsuccessfully proposes a weather service of one hundred observation stations to be supported by the Cincinnati Chamber of Commerce, Associated Press, Western Union, and local newspapers.

1869

The Cincinnati Chamber of Commerce funds a three-month trial of the Cincinnati Observatory’s weather bulletin. The Chicago Board of Trade declines to participate.

1869

Increase A. Lapham publishes a list of the shipping losses on the Great Lakes during the 1868 and 1869 seasons.

1870

Congress passes a joint resolution directing the Secretary of War to establish a meteorological network for the creation of storm warnings on the Great Lakes and Atlantic Seaboard. Storm-warnings are offered on November 8. Forecasts begin the following February 19.

1872

Congressional appropriations bill extends Army Signal Service duties to provide forecasts for agricultural and commercial interests.

1880

Frost warnings offered for Louisiana sugar producers.

1881-1884

Army Signal Service expedition to Lady Franklin Bay in support of international polar weather research. Only seven of the twenty-five-member team survive.

1881

Special cotton-region weather reporting network established.

1891

Weather Bureau transferred to the Department of Agriculture.

1902

Daily weather forecasts sent by radio to Cunard Line steamships.

1905

First wireless weather report from a ship at sea.

1918

Norway expands its meteorological network and organization leading to the development of new forecasting theories centered on three-dimensional interaction of cold and warm fronts.

1919

American Meteorological Society founded.

1926

Air Commerce Act gives the Weather Bureau responsibility for providing weather services to aviation.

1934

First private sector meteorologist hired by a utility company.

1940

The Weather Bureau is transferred from the Department of Agriculture to the Department of Commerce.

1946

First private weather forecast companies begin service.

1960

The first meteorological satellite, Tiros I, enters orbit successfully.

1976

The United States launches its first geostationary weather satellites.

References

Abbe, Cleveland, Jr. “A Chronological Outline of the History of Meteorology in the United States.” Monthly Weather Review 37, no. 3-6 (1909): 87-89, 146-49, 178-80, 252-53.

Alter, J. Cecil. “National Weather Service Origins.” Bulletin of the Historical and Philosophical Society of Ohio 7, no. 3 (1949): 139-85.

Anderson, Katharine. “The Weather Prophets: Science and Reputation in Victorian Meteorology.” History of Science 37 (1999): 179-216.

Burton, Jim. “Robert Fitzroy and the Early History of the Meteorological Office.” British Journal for the History of Science 19 (1986): 147-76.

Chief Signal Officer. Report of the Chief Signal Officer. Washington: GPO, 1871-1890.

Craft, Erik. “The Provision and Value of Weather Information Services in the United States during the Founding Period of the Weather Bureau with Special Reference to Transportation on the Great Lakes.” Ph.D. diss., University of Chicago, 1995.

Craft, Erik. “The Value of Weather Information Services for Nineteenth-Century Great Lakes Shipping.” American Economic Review 88, no. 5 (1998): 1059-1076.

Craft, Erik. “Private Weather Organizations and the Founding of the United States Weather Bureau.” Journal of Economic History 59, no. 4 (1999): 1063-1071.

Davis, John L. “Weather Forecasting and the Development of Meteorological Theory at the Paris Observatory.” Annals of Science 41 (1984): 359-82.

Fleming, James Rodger. Meteorology in America, 1800-1870. Baltimore: Johns Hopkins University Press, 1990.

Fleming, James Rodger, and Roy E. Goodman, editors. International Bibliography of Meteorology. Upland, Pennsylvania: Diane Publishing Co., 1994.

Friedman, Robert Marc. Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Ithaca: Cornell University Press, 1989.

Hughes, Patrick. A Century of Weather Service. New York: Gordon and Breach, 1970.

Miller, Eric R. “The Evolution of Meteorological Institutions in the United States.” Monthly Weather Review 59 (1931): 1-6.

Miller, Eric R. “New Light on the Beginnings of the Weather Bureau from the Papers of Increase A. Lapham.” Monthly Weather Review 59 (1931): 65-70.

Sah, Raaj. “Priorities of Developing Countries in Weather and Climate.” World Development 7 no. 3 (1979): 337-47.

Spiegler, David B. “A History of Private Sector Meteorology.” In Historical Essays on Meteorology, 1919-1995, edited by James Rodger Fleming, 417- 41. Boston: American Meteorological Society, 1996.

Weber, Gustavus A. The Weather Bureau: Its History, Activities and Organization. New York: D. Appleton and Company, 1922.

Whitnah, Donald R. A History of the United States Weather Bureau. Urbana: University of Illinois Press, 1961.

Citation: Craft, Erik. “Economic History of Weather Forecasting”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2001. URL http://eh.net/encyclopedia/an-economic-history-of-weather-forecasting/

The History of the Radio Industry in the United States to 1940

Carole E. Scott, State University of West Georgia

The Technological Development of Radio: From Thales to Marconi

All electrically-based industries trace their ancestry back to at least 600 B.C. when the Greek philosopher Thales observed that after it is rubbed, amber (electron in Greek) attracts small objects. In 1600, William Gilbert, an Englishman, distinguished between magnetism, such as that displayed by a lodestone, and what we now call the static electricity produced by rubbing amber. In 1752, America’s multi-talented Benjamin Franklin used a kite connected to a Leyden jar during a thunderstorm to prove that a lightning flash has the same nature as static electricity. In 1831, an American, Joseph Henry, used an electromagnet to send messages by wire between buildings on Princeton’s campus. Assisted by Henry, an American artist, Samuel F. B. Morse, developed a telegraph system utilizing a key to open and close an electric circuit to transmit an intermittent signal (Morse Code) through a wire.

The possibility of transmitting messages through the air, water, or ground via low frequency magnetic waves was discovered soon after Morse invented the telegraph. Induction was the method used in the first documented “wireless telephone” demonstration by Nathan B. Stubblefield, a Kentucky farmer, in 1892. Because Stubblefield transmitted sound through the air via induction, rather than by radiation, he was not the inventor of radio.

Transmission by radiation owes its existence to the discovery in 1887 of electromagnetic waves by a German, Heinrich Rudolf Hertz. Electromagnetic waves of 10,000 cycles a second to 1,200,000,000 cycles a second are today called radio waves. Soon after Hertz’s discovery, an American, Thomas Alva Edison, took out a patent for wireless telegraphy through the use of discontinuous radio waves. A few years later, in 1894, using a different and much superior wireless telegraphy system, an Italian, Guglielmo Marconi, used discontinuous waves to send Morse Code messages through the air for short distances over land. Later he sent them across the Atlantic Ocean. On land in Europe, Marconi was stymied by laws giving government-operated postal services a monopoly on message delivery, and initially he was able to transmit radio waves very far only over water.

Several Americans transmitted speech without the benefit of wires prior to 1900. Alexander G. Bell, for example, experimented in 1880 with transmitting sound with rays of light, whose frequency exceeds that of radio waves. His test of what he called the photophone was said to be the first practical test of such a device ever made. Although Marconi is widely given the credit for being the first man to develop a successful wireless system, some believe that others, including Nikola Tesla, preceded him. However, it is clear that Marconi had far more influence on the shaping of the radio industry than these men did.

The Structure of the Radio Industry before 1920: Inventor-Entrepreneurs

As had been true of earlier high-tech industries such as the telegraph and electric lighting in their formative years, what was accomplished in the early years of the radio industry was primarily brought about by inventor/entrepreneurs. None of the major electrical and telephone companies played a role in the formative years of the radio industry. So this industry’s early history is a story of individual inventors and entrepreneurs, many of whom were both inventors and entrepreneurs. However, after 1920 this industry’s history is largely one of organizations.

Scientists’ objective is to discover the laws of nature. Inventors, on the other hand, use the laws of nature to find a way to do something. Because they do not know or do not care what scientists’ laws say is possible, inventors will try things that scientists will not. When the creations of inventors work in seeming defiance of scientists’ work, scientists rush to the lab to find out why. Scientists thought that radio waves could not be transmitted beyond the horizon because they thought that this would require that they bend to follow the curvature of the Earth. Marconi tried transmitting beyond the horizon anyway and succeeded. A typical scientist would not have tried to do this because he knew better and his fellow scientists might laugh at him.

Marconi

Marconi may not have been visionary enough to found the radio broadcasting industry. Vision was required because, while there was already an established market for electronic, point-to-point communication, there was no existing market for broadcasting, nor could the technology for transmitting speech be as easily developed as could that for transmitting dots and dashes. In point-to-point communications radio’s disadvantage was lack of privacy. Its competitive advantage was much lower cost than transmission by wire over land and undersea cable.

Due in part to his Marconi Company’s purchase of competitors who had infringed on its patents, by the time World War I broke out, the American Marconi Company dominated the American radio market. As a result, it had no overwhelming need to develop a new service. In addition, Marconi had no surplus funds to plow into a new business. Shortly after the end of World War I, the United States government’s hostile attitude convinced Marconi that his British-based company had no future in America, and he agreed to sell it to the General Electric Company (GE). Marconi had wanted to create an international wireless monopoly. However, the United States government opposed the creation of a foreign-owned wireless monopoly. During World War I the United States Navy was given control of all the nation’s private wireless facilities. After the war the Navy wanted wireless to continue to be a government-controlled monopoly. Unable to achieve this, the Navy recommended that an American-owned company be established to control the manufacture and marketing of wireless in the United States. As a result, the government-sponsored Radio Corporation of America was created to take over the assets of Marconi’s American company.

The four chief players in American radio’s early years, Marconi, Canadian-born Reginald Fessenden, Lee deForest, and John Stone Stone [sic], were all inventor/entrepreneurs. Marconi successfully exploited the interdependence among technology, business strategy, and the press. He was the only one of the four to have an adequate business strategy. Only he and deForest took full advantage of the press. However, deForest seems to have used the press more to sell stock than apparatus. Marconi was also more astute in his patent dealings than were his American competitors. For example, to protect himself from a possible patent suit, he purchased from Thomas A. Edison his patent on a system of wireless telegraphy that Edison had never used. Marconi never used it either because it was inferior to one he developed.

Fessenden

Fessenden, a very prolific inventor, first experimented with voice transmission while working for the United States Weather Bureau. In 1900 he left what is now the University of Pittsburgh, where he was head of the electrical engineering department, to develop a method for the U.S. Weather Bureau to transmit weather reports. That year, through the use of a transmitter that produced discontinuous waves, he succeeded in transmitting speech.

Although discontinuous waves would satisfactorily transmit the dots and dashes of Morse code, high quality voice and music cannot be transmitted in this way. So, in 1902, Fessenden switched to using a continuous wave, becoming the first person to transmit voice and music by this method. On Christmas Eve, 1906, Fessenden made history by broadcasting music and speech from Massachusetts that was heard as far away as the West Indies. After picking up this broadcast, the United Fruit Company purchased equipment from Fessenden to communicate with its ships. Navies and shipping companies were among those most interested in purchasing early radio equipment. During World War I armies also made significant use of radio. Important among its army uses was communicating with airplanes.

Because he did not provide a regular schedule of programming for the public, Fessenden is not usually credited with having operated the first broadcasting station. Nonetheless, he is widely recognized as the father of broadcasting because those who had gone before him had only used radio to deliver messages from one person to another. However, despite being preoccupied with laboratory work and being unsuited by temperament and experience to be a businessman, he chose to directly manage his company. It failed, and an embittered Fessenden left the radio industry.

deForest

Lee deForest, whose doctoral dissertation was about Hertzian waves, received his Ph.D. from Yale in 1899. His first job was with Western Electric. By 1902 he had started the DeForest Wireless Telegraph Company, which became insolvent in 1906. His second company, the DeForest Radio Telephone Company, began to fail in 1909. In 1912 he was indicted for using the mails to defraud by promoting “a worthless device,” the Audion tube. He was acquitted. The Audion tube (later known as a triode tube) was far from being a worthless device, as it was a key component of radios so long as vacuum tubes continued to be used.

The development of a commercially viable radio broadcasting industry could not have taken place without the invention of the vacuum tube, which had its origins in Englishman Michael Faraday’s belief that an electric current could probably pass through a vacuum. (The vacuum tube’s obsolescence was the result of a study of semiconductors in 1948 by William Shockley, Walter Brattain, and John Bardeen. They discovered that the introduction of impurities into semiconductors provided a solid-state material that would not only rectify a current, but also amplify it. Transistors using this material rapidly replaced vacuum tubes. Later it became possible to etch transistors on small pieces of silicon in integrated circuits.)

In 1910, deForest broadcast, probably rather poorly, the singing of opera singer Enrico Caruso. Possibly stimulated by the American Telephone and Telegraph Company’s 1915 transmission of radio telephone signals from the Navy’s Arlington, Virginia, facility that were heard both across the Atlantic and in Honolulu, deForest resumed experimenting with broadcasting. He installed a transmitter at the Columbia Gramophone building in New York and began daily broadcasts of phonograph music sponsored by Columbia. Because in the late nineteenth century the new electrical industry had made some investors multimillionaires almost overnight, Americans like deForest and his partners found easy pickings for a while, as many people were eager to snap up the stock offered by overly optimistic inventors in this new branch of the electrical industry. The quick failure of firms whose end, rather than their means, was selling stock made life more difficult for ethical firms.

Amateur Radio

In the United States in 1913 there were 322 licensed amateur radio operators who would ultimately be relegated to the seemingly barren wasteland of the radio spectrum, short wave. By 1917 there were 13,581 amateur radio operators. At that time building a radio receiver was a fad. The typical builder was a boy or young man. Many older people thought that all radio would ever be was a fad, and certainly so long as the public had to build its own radios, put up with poor reception, and listen to dots and dashes and a few experimental broadcasts of music and speech over earphones, relatively few people were going to be interested in having a radio. Laying the groundwork for making radio a mass medium was Edwin H. Armstrong’s invention of the superheterodyne receiver, based on work he did in the U.S. Army during World War I, which made it possible to replace earphones with a loudspeaker.

In 1921, the American Radio Relay League and a British amateur group assisted by Armstrong, an engineer and college professor, proved that contrary to the belief of experts, short waves can travel over long distances. Three years later Marconi, who had previously used only long waves, showed that short-wave radio waves, by bouncing off the upper atmosphere, can hopscotch around the world. This discovery led to short wave radio being used for long distance radio broadcasting. (Today telephone companies use microwave relay systems for long-distance, on-shore communication through the air.)

After 1920: Large Corporations Come to Dominate the Industry

In 1919, Frank Conrad, a Westinghouse engineer, began broadcasting music in Pittsburgh. These broadcasts stimulated the sales of crystal sets. A crystal set, which could be made at home, was composed of a tuning coil, a crystal detector, and a pair of earphones. The use of a crystal eliminated the need for a battery or other electric source. The popularity of Conrad’s broadcasts led to Westinghouse establishing a radio station, KDKA, on November 2, 1920. In 1921, KDKA began broadcasting prizefights and major league baseball. While Conrad was creating KDKA, the Detroit News established a radio station. Other newspapers soon followed the Detroit newspaper’s lead.

RCA

The Radio Corporation of America (RCA) was the government-sanctioned radio monopoly formed to replace Marconi’s American company. (Later, a government that had once considered making radio a government monopoly followed a policy of promoting competition in the radio industry.) RCA was owned by a GE-dominated partnership that included Westinghouse, the American Telephone and Telegraph Company (AT&T), Western Electric, United Fruit Company, and others. There were cross-licensing (patent pooling) agreements among GE, AT&T, Westinghouse, and RCA, which owned the assets of Marconi’s company. Patent pooling was the solution to the problem of each company owning some essential patents.

For many years RCA and its head, David Sarnoff, were virtual synonyms. Sarnoff, who began his career in radio as a Marconi office boy, gained fame as a wireless operator and showed the great value of radio when he picked up distress messages from the sinking Titanic. Ultimately, RCA expanded into nearly every area of communications and electronics. Its extensive patent holdings gave it power over most of its competitors because they had to pay it royalties. While still working for Marconi, Sarnoff had the foresight to realize that the real money in radio lay in selling radio receivers. (Because the market was far smaller, radio transmitters generated smaller revenues.)

Financing Radio Broadcasts

Marconi was able to charge people for transmitting messages for them, but how was radio broadcasting to be financed? In Europe the government financed it. In this country it soon came to be largely financed by advertising. In 1922, few stations sold advertising time. At that time, the motive of many of those operating radio stations was to advertise other businesses they owned or to get publicity. About a quarter of the nation’s 500 stations were owned by manufacturers, retailers, and other businesses, such as hotels and newspapers. Another quarter were owned by radio-related firms. Educational institutions, radio clubs, civic groups, churches, government, and the military owned 40 percent of the stations. Radio manufacturers viewed broadcasting simply as a way to sell radios. Over its first three years of selling radios, RCA’s revenues amounted to $83,500,000. By 1930 nine out of ten broadcasting stations were selling advertising time. In 1939, more than a third of the stations lost money. However, by the end of World War II only five percent were in the red. Stations’ advertising revenues came from both local and national advertisers after networks were established. By 1938, 40 percent of the nation’s 660 stations were affiliated with a network, and many were part of a chain (commonly owned).

Radio Networks

On September 25, 1926, RCA formed the National Broadcasting Company (NBC) to take over its network broadcasting business. In early 1927 only seven percent of the nation’s 737 radio stations were affiliated with NBC. In that year a rival network whose name eventually became the Columbia Broadcasting System (CBS) was established. In 1928, CBS was purchased and reorganized by William S. Paley, a cigar company executive whose CBS career spanned more than a half-century. In 1934, the Mutual Broadcasting System was formed. Unlike NBC and CBS, it did not move into television. In 1943, the Federal Communications Commission forced NBC to sell a part of its system to Edward J. Noble, who formed the American Broadcasting Company (ABC). To avoid the high cost of producing radio shows, local radio stations got most of their shows other than news from the networks, which enjoyed economies of scale in producing radio programs because their costs were spread over the many stations using their programming.

The Golden Age of Radio

Radio broadcasting was the cheapest form of entertainment, and it provided the public with far better entertainment than most people were accustomed to. As a result, its popularity grew rapidly in the late 1920s and early 1930s, and by 1934, 60 percent of the nation’s households had radios. One and a half million cars were also equipped with them. The 1930s were the Golden Age of radio. It was so popular that theaters dared not open until after the extremely popular “Amos ’n’ Andy” show was over.

In the thirties radio broadcasting was an entirely different genre from what it became after the introduction of television. Those who have only known the music, news, and talk radio of recent decades can have no conception of the big budget days of the thirties when radio was king of the electronic hill. Like reading, radio demanded the use of imagination. Through image-inspiring sound effects, which reached a high degree of sophistication in the thirties, radio replaced vision with visualization. Perfected during the thirties was the only new “art form” radio originated, the “soap opera,” so called because the sponsors of these serialized morality plays aimed at housewives, who were then very numerous, were usually soap companies.

The Growth of Radio

The growth of radio in the 1920s and 30s can be seen in Tables 1, 2, and 3, which give the number of stations, the amount of advertising revenue, and sales of radio equipment.

Table 1
Number of Radio Stations in the US, 1921-1940

Year Number
1921 5
1922 30
1923 556
1924 530
1925 571
1926 528
1927 681
1928 677
1929 606
1930 618
1931 612
1932 604
1933 599
1934 583
1935 585
1936 616
1937 646
1938 689
1939 722
1940 765

Source: Sterling and Kittross (1978), p. 510.

Table 2
Radio Advertising Expenditures in Millions of Dollars, 1927-1940

Year Amount in Millions of $
1927 4.8
1928 14.1
1929 26.8
1930 40.5
1931 56.0
1932 61.9
1933 57.0
1934 72.8
1935 112.6
1936 122.3
1937 164.6
1938 167.1
1939 183.8
1940 215.6

Source: Sterling and Kittross (1978).

Table 3
Sales of Radio Equipment in Millions of Dollars

Year Sales in Millions of $
1922 60
1923 136
1924 358
1925 430
1926 506
1927 426
1928 651
1929 843

Source: Douglas (1987), p. 75.

Impact of TV and Later Developments

The most popular drama and comedy shows and most of their stars migrated from radio to television in the 1940s and 1950s. (A few stars, like comedy star Fred Allen, did not successfully make the transition.) Other shows died, as radio became a medium first of music and news and then of call-in talk shows, music, and news. Television sets replaced the furniture-like radios that dominated the nation’s living rooms in the thirties. Point-to-point radio communication became essential for the police and for trucking and other companies with similar needs. New technology made portable radio sets popular. Many decades after the loss of comedy and drama shows to television, the creation of the Internet provided radio stations with both a new way to broadcast and a visual component.

Government Regulation

Radio’s Property Rights Problem

Because the radio spectrum is quite different from, say, a piece of real estate, radio produced a property rights problem. Originally, it was viewed as being like a navigable waterway, that is, public property. However, it wasn’t long before so many people wanted to use it that there wasn’t enough room for everyone. The only ways to deal with an excess of demand over supply are either to raise price until some potential users leave the market or to turn to rationing. The selling of the radio spectrum does not appear to have been considered. Instead, the spectrum was rationed by the government, which parceled it out to selected parties for free.

The Free-Speech Issue

Navigable waterways present no free speech problem, but radio does. Was radio to be treated like newspapers and magazines, or were broadcasters to be denied free speech? Were radio stations to be treated, like telephone companies, as common carriers, that is, would anyone desiring to make use of them have to be allowed to do so, or would they be treated like newspapers, which are under no obligation to allow all comers access to their pages? Eventually it was established that radio stations, like newspapers, would be protected by the First Amendment.

Regulation and Legislation

Government regulation of radio began in 1904 when President Theodore Roosevelt organized the Interdepartmental Board of Wireless Telegraphy. In 1910 the Wireless Ship Act was passed. That radio was to be a regulated industry was decided in 1912, when Congress passed a Radio Act that required people to obtain a license from the government in order to operate a radio transmitter. In 1924, Herbert Hoover, who was secretary of the Commerce Department, said that the radio industry was probably the only industry in the nation that was unanimously in favor of having itself regulated. Presumably, this was due both to the industry’s desire to put a stop to stations interfering with each other’s broadcasts and to limit the number of stations to a small enough number to lock in a profit. The Radio Act of 1927 solved the problem of broadcasting stations using the same frequency and the more powerful ones drowning out less powerful ones. This Act also established that radio waves are public property; therefore, radio stations must be licensed by the government. It was decided, however, not to charge stations for the use of this property.

FM Radio: Technology and Patent Suits

One method of imposing speech and music on a continuous wave involves varying (modulating) the wave’s amplitude, the distance between its peaks and troughs. This type of transmission is called amplitude modulation (AM). It appears to have first been thought of by John Stone Stone in 1892. Many years after his invention of the superheterodyne, Armstrong solved radio’s last major problem, static, by inventing frequency modulation (FM), which he successfully tested in 1933. A significant characteristic of FM as compared with AM is that FM stations using the same frequency do not interfere with each other. Radios simply pick up whichever FM station is the strongest. This means that low-power FM stations can operate in close proximity. Armstrong was hindered in his development of FM radio by a Federal Communications Commission (FCC) spectrum reallocation that he blamed on RCA.
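The contrast between the two schemes can be sketched numerically. The following short illustration (hypothetical, not from the original article; all function and parameter names are invented for the example) generates samples of a carrier wave under each method: in AM the message signal scales the carrier’s amplitude, while in FM it shifts the carrier’s instantaneous frequency.

```python
import math

def am_sample(t, msg, fc=1000.0, depth=0.5):
    # AM: the message scales the carrier's amplitude,
    # i.e., the peak-to-trough distance of the wave.
    return (1.0 + depth * msg(t)) * math.cos(2 * math.pi * fc * t)

def fm_sample(t, msg, fc=1000.0, dev=100.0, dt=1e-5):
    # FM: the message shifts the carrier's instantaneous frequency,
    # so the phase is the running integral of (fc + dev * msg).
    phase = 0.0
    for i in range(int(t / dt)):
        phase += 2 * math.pi * (fc + dev * msg(i * dt)) * dt
    return math.cos(phase)

# With a silent (all-zero) message, both schemes reduce to a plain carrier.
silent = lambda t: 0.0
```

The sketch also hints at why FM resists static: interference mostly perturbs a wave’s amplitude, which an FM receiver ignores, whereas an AM receiver reads amplitude changes as part of the message.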

Astute patent dealings were a must in the early radio industry. As was true of the rest of the electric industry, patent litigation was very common in the radio industry. One reason for the success of Marconi in America was his astute patent dealings. One of the most acrimonious radio patent suits was one between Armstrong and RCA. Armstrong expected to receive royalties on every FM radio set sold and, because FM was selected for the audio portion of TV broadcasting, he also expected royalties on every TV set sold. Some television manufacturers paid Armstrong. RCA didn’t. RCA also developed and patented an FM system different from Armstrong’s that he claimed involved no new principle. So, in 1948, he instituted a suit against RCA and NBC, charging them with willfully infringing and inducing others to infringe on his FM patents.

It was to RCA’s advantage to drag the suit out. It had more money than Armstrong did, and it could make more money until the case was settled by selling sets utilizing technology Armstrong said was his. It might be able to do this until his patents ran out. To finance the case and his research facility at Columbia, Armstrong had to sell many of his assets, including stock in Zenith, RCA, and Standard Oil. By 1954, the financial burden imposed on him forced him to try to settle with RCA. RCA’s offer did not even cover Armstrong’s remaining legal fees. Not long after he received this offer he committed suicide.

Bibliography

Aitken, Hugh G. J. The Continuous Wave: Technology and American Radio, 1900-1932. Princeton, N.J.: Princeton University Press, 1985.

Archer, Gleason Leonard. Big Business and Radio. New York: Arno Press, 1971.

Benjamin, Louise Margaret. Freedom of the Air and the Public Interest: First Amendment Rights in Broadcasting to 1935. Carbondale: Southern Illinois University Press, 2001.

Bilby, Kenneth. The General: David Sarnoff and the Rise of the Communications Industry. New York: Harper & Row, 1986.

Bittner, John R. Broadcast Law and Regulation. Englewood Cliffs, N.J.: Prentice-Hall, 1982.

Brown, Robert J. Manipulating the Ether: The Power of Broadcast Radio in Thirties America. Jefferson, N.C.: McFarland & Co., 1998.

Campbell, Robert. The Golden Years of Broadcasting: A Celebration of the First Fifty Years of Radio and TV on NBC. New York: Scribner, 1976.

Douglas, George H. The Early Years of Radio Broadcasting. Jefferson, NC: McFarland, 1987.

Douglas, Susan J. Inventing American Broadcasting, 1899-1922. Baltimore: Johns Hopkins University Press, 1987.

Erickson, Don V. Armstrong’s Fight for FM Broadcasting: One Man vs Big Business and Bureaucracy. University, AL: University of Alabama Press, 1973.

Fornatale, Peter and Joshua E. Mills. Radio in the Television Age. New York: Overlook Press, 1980.

Godfrey, Donald G. and Frederic A. Leigh, editors. Historical Dictionary of American Radio. Westport, CT: Greenwood Press, 1998.

Head, Sydney W. Broadcasting in America: A Survey of Television and Radio. Boston: Houghton Mifflin, 1956.

Hilmes, Michele. Radio Voices: American Broadcasting, 1922-1952. Minneapolis: University of Minnesota Press, 1997.

Jackaway, Gwenyth L. Media at War: Radio’s Challenge to the Newspapers, 1924-1939. Westport, CT: Praeger, 1995.

Jolly, W. P. Marconi. New York: Stein and Day, 1972.

Jome, Hiram Leonard. Economics of the Radio Industry. New York: Arno Press, 1971.

Lewis, Tom. Empire of the Air: The Men Who Made Radio. New York: Edward Burlingame Books, 1991.

Ladd, Jim. Radio Waves: Life and Revolution on the FM Dial. New York: St. Martin’s Press, 1991.

Lichty, Lawrence Wilson and Malachi C. Topping. American Broadcasting: A Source Book on the History of Radio and Television (first edition). New York: Hastings House, 1975.

Lyons, Eugene. David Sarnoff: A Biography (first edition). New York: Harper & Row, 1966.

MacDonald, J. Fred. Don’t Touch That Dial! Radio Programming in American Life, 1920-1960. Chicago: Nelson-Hall, 1979.

Maclaurin, William Rupert. Invention and Innovation in the Radio Industry. New York: Arno Press, 1971.

Nachman, Gerald. Raised on Radio. New York: Pantheon Books, 1998.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: Greenwood Press, 1980.

Sies, Luther F. Encyclopedia of American Radio, 1920-1960. Jefferson, NC: McFarland, 2000.

Slotten, Hugh Richard. Radio and Television Regulation: Broadcast Technology in the United States, 1920-1960. Baltimore: Johns Hopkins University Press, 2000.

Smulyan, Susan. Selling Radio: The Commercialization of American Broadcasting, 1920-1934. Washington: Smithsonian Institution Press, 1994.

Sobel, Robert. RCA. New York: Stein and Day/Publishers, 1986.

Sterling, Christopher H. and John M. Kittross. Stay Tuned. Belmont, CA: Wadsworth, 1978.

Weaver, Pat. The Best Seat in the House: The Golden Years of Radio and Television. New York: Knopf, 1994.

Citation: Scott, Carole. “History of the Radio Industry in the United States to 1940”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/the-history-of-the-radio-industry-in-the-united-states-to-1940/

Economic History of Malaysia

John H. Drabble, University of Sydney, Australia

General Background

The Federation of Malaysia (see map), formed in 1963, originally consisted of Malaya, Singapore, Sarawak and Sabah. Due to internal political tensions Singapore was obliged to leave in 1965. Malaya is now known as Peninsular Malaysia, and the two other territories on the island of Borneo as East Malaysia. Prior to 1963 these territories were under British rule for varying periods from the late eighteenth century. Malaya gained independence in 1957, Sarawak and Sabah (the latter known previously as British North Borneo) in 1963, and Singapore full independence in 1965. These territories lie between 2 and 6 degrees north of the equator. The terrain consists of extensive coastal plains backed by mountainous interiors. The soils are not naturally fertile but the humid tropical climate subject to monsoonal weather patterns creates good conditions for plant growth. Historically much of the region was covered in dense rainforest (jungle), though much of this has been removed for commercial purposes over the last century leading to extensive soil erosion and silting of the rivers which run from the interiors to the coast.


The present government is a parliamentary system at the federal level (located in Kuala Lumpur, Peninsular Malaysia) and at the state level, based on periodic general elections. Each Peninsular state (except Penang and Melaka) has a traditional Malay ruler, the Sultan, one of whom is elected as paramount ruler of Malaysia (Yang dipertuan Agung) for a five-year term.

The population at the end of the twentieth century approximated 22 million and is ethnically diverse, consisting of 57 percent Malays and other indigenous peoples (collectively known as bumiputera), 24 percent Chinese, 7 percent Indians and the balance “others” (including a high proportion of non-citizen Asians, e.g., Indonesians, Bangladeshis, Filipinos) (Andaya and Andaya, 2001, 3-4).

Significance as a Case Study in Economic Development

Malaysia is generally regarded as one of the most successful non-western countries to have achieved a relatively smooth transition to modern economic growth over the last century or so. Since the late nineteenth century it has been a major supplier of primary products to the industrialized countries: tin, rubber, palm oil, timber, oil, liquefied natural gas, etc.

However, since about 1970 the leading sector in development has been a range of export-oriented manufacturing industries such as textiles, electrical and electronic goods, rubber products, etc. Government policy has generally accorded a central role to foreign capital, while at the same time working towards more substantial participation for domestic, especially bumiputera, capital and enterprise. By 1990 the country had largely met the criteria for Newly-Industrialized Country (NIC) status (30 percent of exports to consist of manufactured goods). While the Asian economic crisis of 1997-98 slowed growth temporarily, the current plan, titled Vision 2020, aims to achieve “a fully developed industrialized economy by that date. This will require an annual growth rate in real GDP of 7 percent” (Far Eastern Economic Review, Nov. 6, 2003). Malaysia is perhaps the best example of a country in which the economic roles and interests of various racial groups have been pragmatically managed in the long term without significant loss of growth momentum, despite the ongoing presence of inter-ethnic tensions which have occasionally manifested in violence, notably in 1969 (see below).
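The 7 percent target quoted above is a compound rate, and its cumulative effect is easy to understate. A minimal back-of-envelope sketch of the arithmetic (the 30-year horizon, 1990 to 2020, is an assumption for illustration, not a figure from the source):

```python
# Compound growth at the Vision 2020 target rate of 7 percent a year.
# The 30-year horizon (1990-2020) is assumed here for illustration only.
rate = 0.07
years = 30
multiple = (1 + rate) ** years
# Sustained 7 percent growth would leave real GDP at roughly
# 7.6 times its starting level after three decades.
print(f"{multiple:.1f}x")  # -> 7.6x
```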

The Premodern Economy

Malaysia has a long history of internationally valued exports, being known from the early centuries A.D. as a source of gold, tin and exotics such as birds’ feathers, edible birds’ nests, aromatic woods, tree resins, etc. The commercial importance of the area was enhanced by its strategic position athwart the seaborne trade routes from the Indian Ocean to East Asia. Merchants from both these regions (Arabs, Indians and Chinese) regularly visited. Some became domiciled in ports such as Melaka [formerly Malacca], the location of one of the earliest local sultanates (c.1402 A.D.) and a focal point for both local and international trade.

From the early sixteenth century the area was increasingly penetrated by European trading interests, first the Portuguese (from 1511), then the Dutch East India Company [VOC] (1602) in competition with the English East India Company [EIC] (1600) for the trade in pepper and various spices. By the late eighteenth century the VOC was dominant in the Indonesian region, while the EIC acquired bases in Malaysia, beginning with Penang (1786), Singapore (1819) and Melaka (1824). These were major staging posts in the growing trade with China and also served as footholds from which to expand British control into the Malay Peninsula (from 1870), and northwest Borneo (Sarawak from 1841 and North Borneo from 1882). Over these centuries there was an increasing inflow of migrants from China attracted by the opportunities in trade and as a wage labor force for the burgeoning production of export commodities such as gold and tin. The indigenous people also engaged in commercial production (rice, tin), but remained basically within a subsistence economy and were reluctant to offer themselves as permanent wage labor. Overall, production in the premodern economy was relatively small in volume and technologically undeveloped. The capitalist sector, already foreign dominated, was still in its infancy (Drabble, 2000).

The Transition to Capitalist Production

The nineteenth century witnessed an enormous expansion in world trade which, between 1815 and 1914, grew on average at 4-5 percent a year compared to 1 percent in the preceding hundred years. The driving force came from the Industrial Revolution in the West, which saw the innovation of large-scale factory production of manufactured goods made possible by technological advances, accompanied by more efficient communications (e.g., railways, cars, trucks, steamships, international canals [Suez 1869, Panama 1914], telegraphs) which speeded up and greatly lowered the cost of long-distance trade. Industrializing countries required ever-larger supplies of raw materials as well as foodstuffs for their growing populations. Regions such as Malaysia, with ample supplies of virgin land and relative proximity to trade routes, were well placed to respond to this demand. What was lacking was an adequate supply of capital and wage labor. In both respects, the deficiency was supplied largely from foreign sources.
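To see why the jump from 1 percent to 4-5 percent annual growth was transformative, compound the two rates over the century in question. A hedged back-of-envelope calculation (the 99-year span and the 4.5 percent midpoint are illustrative choices, not source data):

```python
# Cumulative expansion of world trade, 1815-1914 (99 years),
# at the two average growth rates cited in the text.
years = 99
slow = 1.01 ** years     # ~1 percent a year (pre-1815 pace): roughly 2.7x
fast = 1.045 ** years    # midpoint of the 4-5 percent range: roughly 78x
print(f"{slow:.1f}x vs {fast:.0f}x")  # -> 2.7x vs 78x
```

The same century that merely tripled trade at the old pace multiplied it nearly eighty-fold at the new one, which is why the period could absorb vast new supplies of raw materials from regions like Malaysia.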

As expanding British power brought stability to the region, Chinese migrants started to arrive in large numbers with Singapore quickly becoming the major point of entry. Most arrived with few funds but those able to amass profits from trade (including opium) used these to finance ventures in agriculture and mining, especially in the neighboring Malay Peninsula. Crops such as pepper, gambier, tapioca, sugar and coffee were produced for export to markets in Asia (e.g. China), and later to the West after 1850 when Britain moved toward a policy of free trade. These crops were labor, not capital, intensive and in some cases quickly exhausted soil fertility and required periodic movement to virgin land (Jackson, 1968).

Tin

Besides ample land, the Malay Peninsula also contained substantial deposits of tin. International demand for tin rose progressively in the nineteenth century due to the discovery of a more efficient method for producing tinplate (for canned food). At the same time deposits in major suppliers such as Cornwall (England) had been largely worked out, thus opening an opportunity for new producers. Traditionally tin had been mined by Malays from ore deposits close to the surface. Difficulties with flooding limited the depth of mining; furthermore, their activity was seasonal. From the 1840s the discovery of large deposits in the Peninsula states of Perak and Selangor attracted large numbers of Chinese migrants, who dominated the industry in the nineteenth century, bringing new technology which improved ore recovery and water control and facilitated mining to greater depths. By the end of the century Malayan tin exports (at approximately 52,000 metric tons) supplied just over half the world output. Singapore was a major center for smelting (refining) the ore into ingots. Tin mining also attracted attention from European, mainly British, investors who again introduced new technology – such as high-pressure hoses to wash out the ore, the steam pump and, from 1912, the bucket dredge floating in its own pond, which could operate to even deeper levels. These innovations required substantial capital, for which the chosen vehicle was the public joint stock company, usually registered in Britain. Since no major new ore deposits were found, the emphasis was on increased efficiency in production. European operators, again employing mostly Chinese wage labor, enjoyed a technical advantage here and by 1929 accounted for 61 percent of Malayan output (Wong Lin Ken, 1965; Yip Yat Hoong, 1969).

Rubber

While tin mining brought considerable prosperity, it was a non-renewable resource. In the early twentieth century it was the agricultural sector which came to the forefront. The crops mentioned previously had boomed briefly but were hard pressed to survive severe price swings and the pests and diseases that were endemic in tropical agriculture. The cultivation of rubber-yielding trees became commercially attractive as a raw material for new industries in the West, notably tires for the booming automobile industry, especially in the U.S. Previously rubber had come from scattered trees growing wild in the jungles of South America, with production only expandable at rising marginal costs. Cultivation on estates generated economies of scale. In the 1870s the British government organized the transport of specimens of the tree Hevea brasiliensis from Brazil to colonies in the East, notably Ceylon and Singapore. There the trees flourished and, after initial hesitancy over the five years needed for the trees to reach productive age, planters, Chinese and European alike, rushed to invest. The boom reached vast proportions as the rubber price hit record heights in 1910 (see Fig.1). Average values fell thereafter but investors were heavily committed and planting continued (also in the neighboring Netherlands Indies [Indonesia]). By 1921 the rubber acreage in Malaysia (mostly in the Peninsula) had reached 935,000 hectares (about 1.34 million acres), or some 55 percent of the total in South and Southeast Asia, while output stood at 50 percent of world production.

Fig.1. Average London Rubber Prices, 1905-41 (current values)

As a result of this boom, rubber quickly surpassed tin as Malaysia’s main export product, a position that it was to hold until 1980. A distinctive feature of the industry was that the technology of extracting the rubber latex from the trees (called tapping) by an incision with a special knife, and its manufacture into various grades of sheet known as raw or plantation rubber, was easily adopted by a wide range of producers. The larger estates, mainly British-owned, were financed (as in the case of tin mining) through British-registered public joint stock companies. For example, between 1903 and 1912 some 260 companies were registered to operate in Malaya. Chinese planters for the most part preferred to form private partnerships to operate estates which were on average smaller. Finally, there were the smallholdings (under 40 hectares or 100 acres) of which those at the lower end of the range (2 hectares/5 acres or less) were predominantly owned by indigenous Malays who found growing and selling rubber more profitable than subsistence (rice) farming. These smallholders did not need much capital since their equipment was rudimentary and labor came either from within their family or in the form of share-tappers who received a proportion (say 50 percent) of the output. In Malaya in 1921 roughly 60 percent of the planted area was estates (75 percent European-owned) and 40 percent smallholdings (Drabble, 1991, 1).

The workforce for the estates consisted of migrants. British estates depended mainly on migrants from India, brought in under government auspices with fares paid and accommodation provided. Chinese business looked to the “coolie trade” from South China, with expenses advanced that migrants had subsequently to pay off. The flow of immigration was directly related to economic conditions in Malaysia. For example, arrivals of Indians averaged 61,000 a year between 1900 and 1920. Substantial numbers also came from the Netherlands Indies.

Thus far, most capitalist enterprise was located in Malaya. Sarawak and British North Borneo had a similar range of mining and agricultural industries in the nineteenth century. However, their geographical location slightly off the main trade route (see map), together with rugged internal terrain that made transport costly, rendered them less attractive to foreign investment. The discovery of oil by a subsidiary of Royal Dutch-Shell, with production starting in 1907, nevertheless put Sarawak more prominently into the export business. As in Malaya, the labor force came largely from immigrants, from China and to a lesser extent Java.

The growth in production for export in Malaysia was facilitated by the development of an infrastructure of roads, railways, ports (e.g. Penang, Singapore) and telecommunications under the auspices of the colonial governments, though again this was considerably more advanced in Malaya (Amarjit Kaur, 1985, 1998).

The Creation of a Plural Society

By the 1920s the large inflows of migrants had created a multi-ethnic population of the type which the British scholar, J.S. Furnivall (1948) described as a plural society in which the different racial groups live side by side under a single political administration but, apart from economic transactions, do not interact with each other either socially or culturally. Though the original intention of many migrants was to come for only a limited period (say 3-5 years), save money and then return home, a growing number were staying longer, having children and becoming permanently domiciled in Malaysia. The economic developments described in the previous section were unevenly located, for example, in Malaya the bulk of the tin mines and rubber estates were located along the west coast of the Peninsula. In the boom-times, such was the size of the immigrant inflows that in certain areas they far outnumbered the indigenous Malays. In social and cultural terms Indians and Chinese recreated the institutions, hierarchies and linguistic usage of their countries of origin. This was particularly so in the case of the Chinese. Not only did they predominate in major commercial centers such as Penang, Singapore, and Kuching, but they controlled local trade in the smaller towns and villages through a network of small shops (kedai) and dealerships that served as a pipeline along which export goods like rubber went out and in return imported manufactured goods were brought in for sale. In addition Chinese owned considerable mining and agricultural land. This created a distribution of wealth and division of labor in which economic power and function were directly related to race. In this situation lay the seeds of growing discontent among bumiputera that they were losing their ancestral inheritance (land) and becoming economically marginalized. 
As long as British colonial rule continued the various ethnic groups looked primarily to government to protect their interests and maintain peaceable relations. An example of colonial paternalism was the designation from 1913 of certain lands in Malaya as Malay Reservations in which only indigenous people could own and deal in property (Lim Teck Ghee, 1977).

Benefits and Drawbacks of an Export Economy

Prior to World War II the international economy was divided very broadly into the northern and southern hemispheres. The former contained most of the industrialized manufacturing countries and the latter the principal sources of foodstuffs and raw materials. The commodity exchange between the spheres was known as the Old International Division of Labor (OIDL). Malaysia’s place in this system was as a leading exporter of raw materials (tin, rubber, timber, oil, etc.) and an importer of manufactures. Since relatively little processing was done on the former prior to export, most of the value-added component in the final product accrued to foreign manufacturers, e.g. rubber tire manufacturers in the U.S.

It is clear from this situation that Malaysia depended heavily on earnings from exports of primary commodities to maintain the standard of living. Rice had to be imported (mainly from Burma and Thailand) because domestic production supplied on average only 40 percent of total needs. As long as export prices were high (for example during the rubber boom previously mentioned), the volume of imports remained ample. Profits to capital and good smallholder incomes supported an expanding economy. There are no official data for Malaysian national income prior to World War II, but some comparative estimates are given in Table 1 which indicate that Malayan Gross Domestic Product (GDP) per person was easily the leader in the Southeast and East Asian region by the late 1920s.

Table 1
GDP per Capita: Selected Asian Countries, 1900-1990
(in 1985 international dollars)

                     1900     1929     1950     1973     1990
Malaya/Malaysia*      600**   1910     1828     3088     5775
Singapore               -        -     2276***  5372    14441
Burma                 523      651      304      446      562
Thailand              594      623      652     1559     3694
Indonesia             617     1009      727     1253     2118
Philippines           735     1106      943     1629     1934
South Korea           568      945      565     1782     6012
Japan                 724     1192     1208     7133    13197

Notes: * Malaya to 1973; ** Guesstimate; *** 1960

Source: van der Eng (1994).
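Table 1 can also be read in growth-rate terms. As a hypothetical back-of-envelope exercise (a calculation on the table's figures, not one made in the source), the Malayan/Malaysian column implies the following compound annual growth rate over the full period:

```python
# Implied compound annual growth rate (CAGR) of Malayan/Malaysian
# GDP per capita from Table 1: $600 in 1900 (itself a guesstimate)
# to $5,775 in 1990, both in 1985 international dollars.
start, end, years = 600, 5775, 90
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 2.5%
```

Roughly 2.5 percent a year sustained over nine decades, which is consistent with the text's characterization of Malaysia as an early regional leader in income per person.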

However, the international economy was subject to strong fluctuations. The levels of activity in the industrialized countries, especially the U.S., were the determining factors here. Almost immediately following World War I there was a depression from 1919-22. Strong growth in the mid and late-1920s was followed by the Great Depression (1929-32). As industrial output slumped, primary product prices fell even more heavily. For example, in 1932 rubber sold on the London market for about one one-hundredth of the peak price in 1910 (Fig.1). The effects on export earnings were very severe; in Malaysia’s case between 1929 and 1932 these dropped by 73 percent (Malaya), 60 percent (Sarawak) and 50 percent (North Borneo). The aggregate value of imports fell on average by 60 percent. Estates dismissed labor and since there was no social security, many workers had to return to their country of origin. Smallholder incomes dropped heavily and many who had taken out high-interest secured loans in more prosperous times were unable to service these and faced the loss of their land.

The colonial government attempted to counteract this vulnerability to economic swings by instituting schemes to restore commodity prices to profitable levels. For the rubber industry this involved two periods of mandatory restriction of exports to reduce world stocks and thus exert upward pressure on market prices. The first of these (named the Stevenson scheme after its originator) lasted from 1 October 1922 to 1 November 1928, and the second (the International Rubber Regulation Agreement) from 1 June 1934 to 1941. Tin exports were similarly restricted from 1931 to 1941. While these measures did succeed in raising world prices, the inequitable treatment of Asian as against European producers in both industries has been debated. The protective policy has also been blamed for “freezing” the structure of the Malaysian economy and hindering further development, for instance into manufacturing industry (Lim Teck Ghee, 1977; Drabble, 1991).

Why No Industrialization?

Malaysia had very few secondary industries before World War II. The little that did appear was connected mainly with the processing of the primary exports, rubber and tin, together with limited production of manufactured goods for the domestic market (e.g. bread, biscuits, beverages, cigarettes and various building materials). Much of this activity was Chinese-owned and located in Singapore (Huff, 1994). Among the reasons advanced are: the small size of the domestic market, the relatively high wage levels in Singapore which made products uncompetitive as exports, and a culture dominated by British trading firms which favored commerce over industry. Overshadowing all these was the dominance of primary production. When commodity prices were high, there was little incentive for investors, European or Asian, to move into other sectors. Conversely, when these prices fell, capital and credit dried up while incomes contracted, thus lessening effective demand for manufactures. W.G. Huff (2002) has argued that, prior to World War II, “there was, in fact, never a good time to embark on industrialization in Malaya.”

War Time 1942-45: The Japanese Occupation

During the Japanese occupation years of World War II, the export of primary products was limited to the relatively small amounts required for the Japanese economy. This led to the abandonment of large areas of rubber and the closure of many mines, the latter progressively affected by a shortage of spare parts for machinery. Businesses, especially those Chinese-owned, were taken over and reassigned to Japanese interests. Rice imports fell heavily and thus the population devoted a large part of their efforts to producing enough food to stay alive. Large numbers of laborers (many of whom died) were conscripted to work on military projects such as construction of the Thai-Burma railroad. Overall the war period saw the dislocation of the export economy, widespread destruction of the infrastructure (roads, bridges etc.) and a decline in standards of public health. It also saw a rise in inter-ethnic tensions due to the harsh treatment meted out by the Japanese to some groups, notably the Chinese, compared to a more favorable attitude towards the indigenous peoples among whom (Malays particularly) there was a growing sense of ethnic nationalism (Drabble, 2000).

Postwar Reconstruction and Independence

The returning British colonial rulers had two priorities after 1945: to rebuild the export economy as it had been under the OIDL (see above), and to rationalize the fragmented administrative structure (see General Background). The first was accomplished by the late 1940s with estates and mines refurbished, production restarted once the labor force had been brought back and adequate rice imports regained. The second was a complex and delicate political process which resulted in the formation of the Federation of Malaya (1948) from which Singapore, with its predominantly Chinese population (about 75%), was kept separate. In Borneo in 1946 the state of Sarawak, which had been a private kingdom of the English Brooke family (so-called “White Rajas”) since 1841, and North Borneo, administered by the British North Borneo Company from 1881, were both transferred to direct rule from Britain. However, independence was clearly on the horizon and in Malaya tensions continued with the guerrilla campaign (called the “Emergency”) waged by the Malayan Communist Party (membership largely Chinese) from 1948-60 to force out the British and set up a Malayan Peoples’ Republic. This failed and in 1957 the Malayan Federation gained independence (Merdeka) under a “bargain” by which the Malays would hold political paramountcy while others, notably Chinese and Indians, were given citizenship and the freedom to pursue their economic interests. The bargain was institutionalized as the Alliance, later renamed the National Front (Barisan Nasional), which remains the dominant political grouping. In 1963 the Federation of Malaysia was formed in which the bumiputera population was sufficient in total to offset the high proportion of Chinese arising from the short-lived inclusion of Singapore (Andaya and Andaya, 2001).

Towards the Formation of a National Economy

Postwar, two long-term problems came to the forefront. These were (a) the political fragmentation (see above), which had long prevented a centralized approach to economic development, coupled with control from Britain which gave primacy to imperial as opposed to local interests, and (b) excessive dependence on a small range of primary products (notably rubber and tin) which prewar experience had shown to be an unstable basis for the economy.

The first of these was addressed partly through the political rearrangements outlined in the previous section, with the economic aspects buttressed by a report from a mission to Malaya from the International Bank for Reconstruction and Development (IBRD) in 1954. The report argued that Malaya “is now a distinct national economy.” A further mission in 1963 urged “closer economic cooperation between the prospective Malaysia[n] territories” (cited in Drabble, 2000, 161, 176). The rationale for the Federation was that Singapore would serve as the initial center of industrialization, with Malaya, Sabah and Sarawak following at a pace determined by local conditions.

The second problem centered on economic diversification. The IBRD reports just noted advocated building up a range of secondary industries to meet a larger portion of the domestic demand for manufactures, i.e. import-substitution industrialization (ISI). In the interim dependence on primary products would perforce continue.

The Adoption of Planning

In the postwar world the development plan (usually a Five-Year Plan) was widely adopted by Less-Developed Countries (LDCs) to set directions, targets and estimated costs. Each of the Malaysian territories had plans during the 1950s. Malaya was the first to get industrialization of the ISI type under way. The Pioneer Industries Ordinance (1958) offered inducements such as five-year tax holidays, guarantees (to foreign investors) of freedom to repatriate profits and capital etc. A modest degree of tariff protection was granted. The main types of goods produced were consumer items such as batteries, paints, tires, and pharmaceuticals. Just over half the capital invested came from abroad, with neighboring Singapore in the lead. When Singapore exited the federation in 1965, Malaysia’s fledgling industrialization plans assumed greater significance although foreign investors complained of stifling bureaucracy retarding their projects.

Primary production, however, was still the major economic activity and here the problem was rejuvenation of the leading industries, rubber in particular. New capital investment in rubber had slowed since the 1920s, and the bulk of the existing trees were nearing the end of their economic life. The best prospect for rejuvenation lay in cutting down the old trees and replanting the land with new varieties capable of raising output per acre/hectare by a factor of three or four. However, the new trees required seven years to mature. Corporately owned estates could replant progressively, but smallholders could not face such a prolonged loss of income without support. To encourage replanting, the government offered grants to owners, financed by a special duty on rubber exports. The process was a lengthy one and it was the 1980s before replanting was substantially complete. Moreover, many estates elected to switch over to a new crop, oil palms (a product used primarily in foodstuffs), which offered quicker returns. Progress was swift and by the 1960s Malaysia was supplying 20 percent of world demand for this commodity.

Another priority at this time consisted of programs to improve the standard of living of the indigenous peoples, most of whom lived in the rural areas. The main instrument was land development, with schemes to open up large areas (say 100,000 acres or 40,000 hectares) which were then subdivided into 10 acre/4 hectare blocks for distribution to small farmers from overcrowded regions who were either short of land or had none at all. Financial assistance (repayable) was provided to cover housing and living costs until the holdings became productive. Rubber and oil palms were the main commercial crops planted. Steps were also taken to increase the domestic production of rice to lessen the historical dependence on imports.

In the primary sector Malaysia’s range of products was increased from the 1960s by a rapid increase in the export of hardwood timber, mostly in the form of (unprocessed) saw-logs. The markets were mainly in East Asia and Australasia. Here the largely untapped resources of Sabah and Sarawak came to the fore, but the rapid rate of exploitation led by the late twentieth century to damaging effects on both the environment (extensive deforestation, soil-loss, silting, changed weather patterns), and the traditional hunter-gatherer way of life of forest-dwellers (decrease in wild-life, fish, etc.). Other development projects such as the building of dams for hydroelectric power also had adverse consequences in all these respects (Amarjit Kaur, 1998; Drabble, 2000; Hong, 1987).

A further major addition to primary exports came from the discovery of large deposits of oil and natural gas in East Malaysia, and off the east coast of the Peninsula from the 1970s. Gas was exported in liquified form (LNG), and was also used domestically as a substitute for oil. At peak values in 1982, petroleum and LNG provided around 29 percent of Malaysian export earnings but had declined to 18 percent by 1988.

Industrialization and the New Economic Policy 1970-90

The program of industrialization aimed primarily at the domestic market (ISI) lost impetus in the late 1960s as foreign investors, particularly from Britain, switched attention elsewhere. An important factor here was the outbreak of civil disturbances in May 1969, following a federal election in which political parties in the Peninsula (largely non-bumiputera in membership) opposed to the Alliance did unexpectedly well. This brought to a head tensions which had been rising during the 1960s over issues such as the use of the national language, Malay (Bahasa Malaysia), as the main instructional medium in education. There was also discontent among Peninsular Malays that the economic fruits since independence had gone mostly to non-Malays, notably the Chinese. The outcome was severe inter-ethnic rioting centered in the federal capital, Kuala Lumpur, which led to the suspension of parliamentary government for two years and the implementation of the New Economic Policy (NEP).

The main aim of the NEP was a restructuring of the Malaysian economy over two decades, 1970-90, with the following objectives:

  1. to redistribute corporate equity so that the bumiputera share would rise from around 2 percent to 30 percent. The share of other Malaysians would increase marginally from 35 to 40 percent, while that of foreigners would fall from 63 percent to 30 percent.
  2. to eliminate the close link between race and economic function (a legacy of the colonial era) and restructure employment so that the bumiputera share in each sector would reflect more accurately their proportion of the total population (roughly 55 percent). In 1970 this group had about two-thirds of jobs in the primary sector where incomes were generally lowest, but only 30 percent in the secondary sector. In high-income middle class occupations (e.g. professions, management) the share was only 13 percent.
  3. To eradicate poverty irrespective of race. In 1970 just under half of all households in Peninsular Malaysia had incomes below the official poverty line. Malays accounted for about 75 percent of these.

The principle underlying these aims was that the redistribution would not result in any one group losing in absolute terms. Rather it would be achieved through the process of economic growth, i.e. the economy would get bigger (more investment, more jobs, etc.). While the primary sector would continue to receive developmental aid under the successive Five Year Plans, the main emphasis was a switch to export-oriented industrialization (EOI) with Malaysia seeking a share in global markets for manufactured goods. Free Trade Zones (FTZs) were set up in places such as Penang where production was carried on with the undertaking that the output would be exported. Firms locating there received concessions such as duty-free imports of raw materials and capital goods, and tax concessions, aimed primarily at foreign investors who were also attracted by Malaysia’s good facilities, relatively low wages and docile trade unions. A range of industries grew up: textiles, rubber and food products, chemicals, telecommunications equipment, electrical and electronic machinery/appliances, car assembly and some heavy industries, iron and steel. As with ISI, much of the capital and technology was foreign; for example, the Japanese firm Mitsubishi was a partner in a venture to set up a plant to assemble a Malaysian national car, the Proton, from mostly imported components (Drabble, 2000).

Results of the NEP

Table 2 below shows the outcome of the NEP in the categories outlined above.

Table 2
Restructuring under the NEP, 1970-90

(a) Wealth ownership (%)
                                           1970    1990
Bumiputera                                  2.0    20.3
Other Malaysians                           34.6    54.6
Foreigners                                 63.4    25.1

(b) Employment (% of total workers in each sector)
                                           1970           1990
Primary sector (agriculture, mineral
extraction, forest products and fishing)
  Bumiputera                               67.6 [61.0]*   71.2 [36.7]*
  Others                                   32.4           28.8
Secondary sector
(manufacturing and construction)
  Bumiputera                               30.8 [14.6]*   48.0 [26.3]*
  Others                                   69.2           52.0
Tertiary sector (services)
  Bumiputera                               37.9 [24.4]*   51.0 [36.9]*
  Others                                   62.1           49.0

Note: [ ]* is the proportion of the ethnic group thus employed. The “others” category has not been disaggregated by race to avoid undue complexity.
Source: Drabble, 2000, Table 10.9.

Section (a) shows that, overall, foreign ownership fell substantially more than planned, while that of “Other Malaysians” rose well above the target. Bumiputera ownership appears to have stopped well short of the 30 percent mark. However, other evidence suggests that in certain sectors such as agriculture/mining (35.7%) and banking/insurance (49.7%) bumiputera ownership of shares in publicly listed companies had already attained a level well beyond the target. Section (b) indicates that while bumiputera employment share in primary production increased slightly (due mainly to the land schemes), as a proportion of that ethnic group it declined sharply, while rising markedly in both the secondary and tertiary sectors. In middle class employment the share rose to 27 percent.

As regards the proportion of households below the poverty line, in broad terms the incidence in Malaysia fell from approximately 49 percent in 1970 to 17 percent in 1990, but with large regional variations between the Peninsula (15%), Sarawak (21%) and Sabah (34%) (Drabble, 2000, Table 13.5). All ethnic groups registered big falls, but on average the non-bumiputera still enjoyed the lowest incidence of poverty. By 2002 the overall level had fallen to only 4 percent.

The restructuring of the Malaysian economy under the NEP is very clear when we look at the changes in composition of the Gross Domestic Product (GDP) in Table 3 below.

Table 3
Structural Change in GDP 1970-90 (% shares)

Year Primary Secondary Tertiary
1970 44.3 18.3 37.4
1990 28.1 30.2 41.7

Source: Malaysian Government, 1991, Table 3-2.

Over these two decades Malaysia accomplished a transition from a primary product-dependent economy to one in which manufacturing industry had emerged as the leading growth sector. Rubber and tin, which accounted for 54.3 percent of Malaysian export value in 1970, declined sharply in relative terms to a mere 4.9 percent in 1990 (Crouch, 1996, 222).

Factors in the structural shift

The post-independence state played a leading role in the transformation. The transition from British rule was smooth. Apart from the disturbances in 1969, the government maintained firm control over the administrative machinery. Malaysia’s Five Year Development plans were a model for the developing world. Foreign capital was accorded a central role, though subject to the requirements of the NEP. At the same time these requirements discouraged domestic investors, the Chinese especially, to some extent (Jesudason, 1989).

Development was helped by major improvements in education and health. Enrolments at the primary school level reached approximately 90 percent by the 1970s, and at the secondary level 59 percent of potential by 1987. Increased female enrolments, up from 39 percent to 58 percent of potential from 1975 to 1991, were a notable feature, as was the participation of women in the workforce which rose to just over 45 percent of total employment by 1986/7. In the tertiary sector the number of universities increased from one to seven between 1969 and 1990 and numerous technical and vocational colleges opened. Bumiputera enrolments soared as a result of the NEP policy of redistribution (which included ethnic quotas and government scholarships). However, tertiary enrolments totaled only 7 percent of the age group by 1987. There was an “educational-occupation mismatch,” with graduates (bumiputera especially) preferring jobs in government, and consequent shortfalls against strong demand for engineers, research scientists, technicians and the like. Better living conditions (more homes with piped water and more rural clinics, for example) led to substantial falls in infant mortality, improved public health and longer life-expectancy, especially in Peninsular Malaysia (Drabble, 2000, 248, 284-6).

The quality of national leadership was a crucial factor. This was particularly so during the NEP. The leading figure here was Dr Mahathir Mohamad, Malaysian Prime Minister from 1981 to 2003. While supporting the NEP aim through positive discrimination to give bumiputera an economic stake in the country commensurate with their indigenous status and share in the population, he nevertheless emphasized that this should ultimately lead them to a more modern outlook and an ability to compete with the other races in the country, the Chinese especially (see Khoo Boo Teik, 1995). There were, however, some paradoxes here. Mahathir was a meritocrat in principle, but in practice this period saw the spread of “money politics” (another expression for patronage) in Malaysia. In common with many other countries, Malaysia embarked on a policy of privatization of public assets, notably in transportation (e.g. Malaysian Airlines), utilities (e.g. electricity supply) and communications (e.g. television). This was done not through an open process of competitive tendering but rather by a “nebulous ‘first come, first served’ principle” (Jomo, 1995, 8) which saw ownership pass directly to politically well-connected businessmen, mainly bumiputera, at relatively low valuations.

The New Development Policy

Positive action to promote bumiputera interests did not end with the NEP in 1990; it was followed in 1991 by the New Development Policy (NDP), which emphasized assistance only to “Bumiputera with potential, commitment and good track records” (Malaysian Government, 1991, 17) rather than the previous blanket measures to redistribute wealth and employment. In turn the NDP was part of a longer-term program known as Vision 2020. The aim here is to turn Malaysia into a fully industrialized country and to quadruple per capita income by the year 2020. This will require the country to continue ascending the technological “ladder” from low- to high-tech types of industrial production, with a corresponding increase in the intensity of capital investment and greater retention of value-added (i.e. the value added to raw materials in the production process) by Malaysian producers.
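The "quadruple per capita income" target implies a specific compound annual growth requirement. As a rough, purely illustrative check (the 1991 base year and 29-year horizon are assumptions drawn from the dates in the text):

```python
# Back-of-the-envelope check on the Vision 2020 target: the annual
# growth rate needed to quadruple per capita income between 1991 and
# 2020 (29 years). The multiple and horizon are taken from the text;
# the compounding arithmetic itself is standard.

def required_growth_rate(multiple: float, years: int) -> float:
    """Compound annual growth rate needed to reach `multiple` in `years`."""
    return multiple ** (1 / years) - 1

rate = required_growth_rate(4.0, 2020 - 1991)
print(f"Required growth: {rate:.1%} per year")  # roughly 4.9% per year
```

At the 8-9 percent rates achieved in the early 1990s the target would be comfortably exceeded; the calculation simply shows the minimum sustained pace the plan presupposes.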

The Malaysian economy continued to boom at historically unprecedented rates of 8-9 percent a year for much of the 1990s (see next section). There was heavy expenditure on infrastructure, for example extensive building in Kuala Lumpur such as the Petronas Twin Towers (currently the tallest buildings in the world). The volume of manufactured exports, notably electronic goods and electronic components, increased rapidly.

Asian Financial Crisis, 1997-98

The Asian financial crisis originated in heavy international currency speculation leading to major slumps in exchange rates, beginning with the Thai baht in May 1997 and spreading rapidly throughout East and Southeast Asia, severely affecting the banking and finance sectors. The Malaysian ringgit exchange rate fell from RM 2.42 to RM 4.88 to the U.S. dollar by January 1998. There was a heavy outflow of foreign capital. To counter the crisis the International Monetary Fund (IMF) recommended austerity changes to fiscal and monetary policies. Some countries (Thailand, South Korea, and Indonesia) reluctantly adopted these. The Malaysian government refused and implemented independent measures: the ringgit became non-convertible externally and was pegged at RM 3.80 to the U.S. dollar, while foreign capital repatriated within twelve months of entry was subject to substantial levies. Despite international criticism these actions stabilized the domestic situation quite effectively, restoring net growth (see next section), especially compared to neighboring Indonesia.
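The quoted exchange rates imply that the ringgit lost roughly half its dollar value during the crisis. A quick illustrative calculation (figures from the text):

```python
# Illustrative arithmetic on the quoted ringgit/dollar rates: how much
# dollar value one ringgit lost between the pre-crisis rate and the
# January 1998 trough. Figures are from the text.

before = 2.42  # RM per USD before the crisis
after = 4.88   # RM per USD in January 1998

usd_before = 1 / before  # dollars one ringgit bought before
usd_after = 1 / after    # dollars one ringgit bought after

fall = (usd_before - usd_after) / usd_before
print(f"Ringgit lost {fall:.1%} of its dollar value")  # about 50%
```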

Rates of Economic Growth

Malaysia’s economic growth in comparative perspective from 1960-90 is set out in Table 4 below.

Table 4
Asia-Pacific Region: Growth of Real GDP (annual average percent)

                 1960-69   1971-80   1981-89
Japan               10.9       5.0       4.0
Asian “Tigers”
  Hong Kong         10.0       9.5       7.2
  South Korea        8.5       8.7       9.3
  Singapore          8.9       9.0       6.9
  Taiwan            11.6       9.7       8.1
ASEAN-4
  Indonesia          3.5       7.9       5.2
  Malaysia           6.5       8.0       5.4
  Philippines        4.9       6.2       1.7
  Thailand           8.3       9.9       7.1

Source: Drabble, 2000, Table 10.2; figures for Japan are for 1960-70, 1971-80, and 1981-90.

The data show that Japan, the dominant Asian economy for much of this period, progressively slowed by the 1990s (see below). The four leading Newly Industrialized Countries (the Asian “Tigers,” as they were called) followed EOI strategies and achieved very high rates of growth. Among the four ASEAN (Association of Southeast Asian Nations, formed 1967) members, again all adopting EOI policies, Thailand stood out, followed closely by Malaysia. Reference to Table 1 above shows that by 1990 Malaysia, while still among the leaders in GDP per head, had slipped relative to the “Tigers.”

These economies, joined by China, continued growth into the 1990s at such high rates (Malaysia averaged around 8 percent a year) that the term “Asian miracle” became a common method of description. The exception was Japan which encountered major problems with structural change and an over-extended banking system. Post-crisis the countries of the region have started recovery but at differing rates. The Malaysian economy contracted by nearly 7 percent in 1998, recovered to 8 percent growth in 2000, slipped again to under 1 percent in 2001 and has since stabilized at between 4 and 5 percent growth in 2002-04.

The new Malaysian Prime Minister (since October 2003), Abdullah Ahmad Badawi, plans to shift the emphasis in development to smaller, less-costly infrastructure projects and to break the previous dominance of “money politics.” Foreign direct investment will still be sought but priority will be given to nurturing the domestic manufacturing sector.

Further improvements in education will remain a key factor (Far Eastern Economic Review, Nov. 6, 2003).

Overview

Malaysia owes its successful historical economic record to a number of factors. Geographically it lies close to major world trade routes bringing early exposure to the international economy. The sparse indigenous population and labor force has been supplemented by immigrants, mainly from neighboring Asian countries with many becoming permanently domiciled. The economy has always been exceptionally open to external influences such as globalization. Foreign capital has played a major role throughout. Governments, colonial and national, have aimed at managing the structure of the economy while maintaining inter-ethnic stability. Since about 1960 the economy has benefited from extensive restructuring with sustained growth of exports from both the primary and secondary sectors, thus gaining a double impetus.

However, on a less positive assessment, the country has so far exchanged dependence on a limited range of primary products (e.g. tin and rubber) for dependence on an equally limited range of manufactured goods, notably electronics and electronic components (59 percent of exports in 2002). These industries are facing increasing competition from lower-wage countries, especially India and China. Within Malaysia the distribution of secondary industry is unbalanced, currently heavily favoring the Peninsula. Sabah and Sarawak are still heavily dependent on primary products (timber, oil, LNG). There is an urgent need to continue the search for new industries in which Malaysia can enjoy a comparative advantage in world markets, not least because inter-ethnic harmony depends heavily on the continuance of economic prosperity.

Select Bibliography

General Studies

Amarjit Kaur. Economic Change in East Malaysia: Sabah and Sarawak since 1850. London: Macmillan, 1998.

Andaya, L.Y. and Andaya, B.W. A History of Malaysia, second edition. Basingstoke: Palgrave, 2001.

Crouch, Harold. Government and Society in Malaysia. Sydney: Allen and Unwin, 1996.

Drabble, J.H. An Economic History of Malaysia, c.1800-1990: The Transition to Modern Economic Growth. Basingstoke: Macmillan and New York: St. Martin’s Press, 2000.

Furnivall, J.S. Colonial Policy and Practice: A Comparative Study of Burma and Netherlands India. Cambridge (UK), 1948.

Huff, W.G. The Economic Growth of Singapore: Trade and Development in the Twentieth Century. Cambridge: Cambridge University Press, 1994.

Jomo, K.S. Growth and Structural Change in the Malaysian Economy. London: Macmillan, 1990.

Industries/Transport

Alavi, Rokiah. Industrialization in Malaysia: Import Substitution and Infant Industry Performance. London: Routledge, 1996.

Amarjit Kaur. Bridge and Barrier: Transport and Communications in Colonial Malaya 1870-1957. Kuala Lumpur: Oxford University Press, 1985.

Drabble, J.H. Rubber in Malaya 1876-1922: The Genesis of the Industry. Kuala Lumpur: Oxford University Press, 1973.

Drabble, J.H. Malayan Rubber: The Interwar Years. London: Macmillan, 1991.

Huff, W.G. “Boom or Bust Commodities and Industrialization in Pre-World War II Malaya.” Journal of Economic History 62, no. 4 (2002): 1074-1115.

Jackson, J.C. Planters and Speculators: European and Chinese Agricultural Enterprise in Malaya 1786-1921. Kuala Lumpur: University of Malaya Press, 1968.

Lim Teck Ghee. Peasants and Their Agricultural Economy in Colonial Malaya, 1874-1941. Kuala Lumpur: Oxford University Press, 1977.

Wong Lin Ken. The Malayan Tin Industry to 1914. Tucson: University of Arizona Press, 1965.

Yip Yat Hoong. The Development of the Tin Mining Industry of Malaya. Kuala Lumpur: University of Malaya Press, 1969.

New Economic Policy

Jesudason, J.V. Ethnicity and the Economy: The State, Chinese Business and Multinationals in Malaysia. Kuala Lumpur: Oxford University Press, 1989.

Jomo, K.S., editor. Privatizing Malaysia: Rents, Rhetoric, Realities. Boulder, CO: Westview Press, 1995.

Khoo Boo Teik. Paradoxes of Mahathirism: An Intellectual Biography of Mahathir Mohamad. Kuala Lumpur: Oxford University Press, 1995.

Vincent, J.R., R.M. Ali and Associates. Environment and Development in a Resource-Rich Economy: Malaysia under the New Economic Policy. Cambridge, MA: Harvard University Press, 1997.

Ethnic Communities

Chew, Daniel. Chinese Pioneers on the Sarawak Frontier, 1841-1941. Kuala Lumpur: Oxford University Press, 1990.

Gullick, J.M. Malay Society in the Late Nineteenth Century. Kuala Lumpur: Oxford University Press, 1989.

Hong, Evelyne. Natives of Sarawak: Survival in Borneo’s Vanishing Forests. Penang: Institut Masyarakat Malaysia, 1987.

Shamsul, A.B. From British to Bumiputera Rule. Singapore: Institute of Southeast Asian Studies, 1986.

Economic Growth

Far Eastern Economic Review. Hong Kong. An excellent weekly overview of current regional affairs.

Malaysian Government. The Second Outline Perspective Plan, 1991-2000. Kuala Lumpur: Government Printer, 1991.

Van der Eng, Pierre. “Assessing Economic Growth and the Standard of Living in Asia 1870-1990.” Milan, Eleventh International Economic History Congress, 1994.

Citation: Drabble, John. “The Economic History of Malaysia”. EH.Net Encyclopedia, edited by Robert Whaples. July 31, 2004. URL http://eh.net/encyclopedia/economic-history-of-malaysia/

The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s marks the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate because the 1920s are a period of vigorous, vital economic growth. It marks the first truly modern decade and dramatic economic developments are found in those years. There is a rapid adoption of the automobile to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century their growth had been tied to rail or trolley access and this was limited to the largest cities. The flexibility of car access changed this and the growth of suburbs began to accelerate. The demands of trucks and cars led to a rapid growth in the construction of all-weather surfaced roads to facilitate their movement. The rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers and the United States moved to a dominant position in international trade and global business. These things make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of the overall production in the economy, GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929 according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth and twentieth century standards these were relatively rapid rates of real economic growth and they would be considered rapid even today.
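Annual growth rates compound, so the decade's cumulative expansion is larger than a simple sum of yearly rates would suggest. An illustrative sketch using the figures quoted above:

```python
# Annual growth rates compound, so the cumulative expansion over
# 1920-1929 exceeds a simple sum of yearly rates. The growth figures
# are the ones quoted in the text; the arithmetic is illustrative.

def cumulative_growth(annual_rate: float, years: int) -> float:
    """Total proportional increase from compounding annual_rate for years."""
    return (1 + annual_rate) ** years - 1

print(f"Real GNP, 1920-29: +{cumulative_growth(0.042, 9):.0%}")             # about +45%
print(f"Real GNP per capita, 1920-29: +{cumulative_growth(0.027, 9):.0%}")  # about +27%
```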

There were several interruptions to this growth. In mid-1920 the American economy began to contract and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shut-down of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild but the contraction accelerated after the crash of the stock market at the end of October. Real total GNP fell 10.2 percent from 1929 to 1930, while real GNP per capita fell 11.5 percent.


Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employ. In the scramble to beat price increases during 1919, firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce: the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.
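Successive percentage declines compound multiplicatively rather than adding, so the 11.3 and 6.6 percent consumer price falls imply a cumulative decline of about 17 percent, not 17.9. A short illustrative check:

```python
# Successive percentage declines compound multiplicatively: the 11.3
# and 6.6 percent CPI falls quoted in the text imply a cumulative drop
# of about 17 percent, not 17.9. Illustrative arithmetic only.

declines = [-0.113, -0.066]  # CPI changes, 1920-21 and 1921-22

level = 1.0
for change in declines:
    level *= 1 + change

print(f"Cumulative CPI change, 1920-22: {level - 1:.1%}")  # about -17.2%
```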

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing spread widely through the population. New products and new processes for producing them drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined. (Whaples 2001) New products and services created new markets, such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s. (Figure 4) There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration. Urban families have tended to have fewer children than rural families because urban children do not augment family incomes through their work as unpaid workers as rural children do. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares, New England and the Plains. For New England this was a continuation of a long-term trend. The population share for the Plains region had been rising through the nineteenth century. In the interwar period its agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the far West. California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rate, or fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent, as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. For these industries male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties. Unskilled males received on average 35 percent more than females during the twenties. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent over the same period. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties, and the real daily wage rate for farmworkers, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-21 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to the workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government sponsored unemployment insurance, minimum wage proposals, maximum hours proposals and social security programs. As Lloyd Ulman (1961) points out, the AFL, under Gompers’ direction, differentiated on the basis of whether the statute would or would not aid collective bargaining. After Gompers’ death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. There were only a few unions that were closer to today’s industrial unions where the required skills were much less (or nonexistent) making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell by over 72.6 percent between 1920 and 1921 and, though rising in the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages, rather than on first mortgages as was the case in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-21 and 1929-33 depressions from those that arose because agriculture was declining relative to the other sectors. Demand for basic agricultural products grew relatively slowly, while significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid growth in the nonagricultural sectors of the economy, required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.
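The mechanism can be stated compactly. As a sketch, using the standard textbook definition of income elasticity of demand (the notation is illustrative, not from the source):

```latex
\eta_I \;=\; \frac{\%\,\Delta Q_d}{\%\,\Delta I}
       \;=\; \frac{\partial Q_d}{\partial I}\cdot\frac{I}{Q_d}
```

For staples such as cereal grains, pork, and cotton, \( \eta_I < 1 \) (Engel's law), so a given percentage rise in incomes raises the quantity demanded by a smaller percentage; with supplies simultaneously expanding, real prices had to fall.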

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements In Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers' complaints. In later years farmers dubbed the 1910-14 period as agriculture's "golden years" and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, this was not done until Roosevelt took office. Rather, there was a reliance upon the traditional methods of aiding injured groups: tariffs and the "sanctioning and promotion of cooperative marketing associations." In 1921 Congress attempted to control the grain exchanges and compel merchants and stockyards to charge "reasonable rates" with the Packers and Stockyards Act and the Grain Futures Act. In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed "McNary-Haugen" bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish "fair" exchange values (based on the 1910-14 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration passed the Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The Agricultural Marketing Act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, in manufacturing there was a rapid rate of growth of productivity during the twenties. The rise of real wages due to immigration restrictions and the slower growth of the resident population spurred this. Transportation improvements and communications advances were also responsible. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the northeast was the first area to develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to create manufacturing bases in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions' shares of manufacturing employment fell while all of the other regions, excluding the West North Central region, gained. There was considerable variation in the growth of the industries and shifts in their ranking during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the "high throughput" of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II "the process by which the United States became a unified 'economy' in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the 'factor endowment' of individual countries." (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first introduced by Frederick W. Taylor, were introduced on a wider scale.

One of the important forces contributing to mass production and increased productivity was the transition to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and the output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade preceding the 1920s, but it increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

Table 3: Average Annual Rates of Labor Productivity and Capital Productivity Growth

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was that it served as an indirect "lever to increase production." There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an "inestimable boon" to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction. These replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, the continuing advances were the "cumulative process resulting from a vast number of successive small changes." Because of these continuing advances in the quality of the tires and in the manufacturing of tires, between 1910 and 1930 "tire costs per thousand miles of driving fell from $9.39 to $0.65."

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizman fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms even when becoming vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Because of these changes in the size and structure of the firm during the First World War, E. I. du Pont de Nemours and Company was led to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized, single-division structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop, between 1919 and 1921, a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a matter of public policy, concern with monopolies diminished in the 1920s even though firms were growing larger. But the growing size of businesses would later become one of the convenient scapegoats upon which the Great Depression was blamed.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed, and where firms "feared that raw materials might become controlled by competitors or independent suppliers." (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network was inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases "the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors." Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)

The steel industry was already highly concentrated by 1920, as U.S. Steel had around 50 percent of the market. But U.S. Steel's market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U.S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with "the assumption that competition was a dominant strategy for steel manufacturers" until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series included many of the smaller mergers. The series constructed by Carl Eis (1969) only includes the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called "merger for oligopoly" rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but after the onset of the Great Depression the New Dealers moved to exempt much of big business from the antitrust laws and to cartelize industries under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The laws' two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable and higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms who are fixing prices. Vertical price-fixing involves firms at different stages of production setting the prices of intermediate products. It also tends to eliminate substitutes and makes the demand less elastic.
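The revenue logic behind price-fixing can be illustrated numerically. Below is a minimal sketch assuming a constant-elasticity demand curve; the function names, scale, and price levels are hypothetical, chosen only for illustration:

```python
def quantity_demanded(price, scale=100.0, elasticity=0.5):
    """Constant-elasticity demand curve: Q = scale * P**(-elasticity)."""
    return scale * price ** (-elasticity)

def revenue(price, scale=100.0, elasticity=0.5):
    """Total revenue P * Q at a given price."""
    return price * quantity_demanded(price, scale, elasticity)

# With inelastic demand (elasticity 0.5 < 1), raising the cartel price
# from 10 to 12 increases total revenue:
inelastic_change = revenue(12.0) - revenue(10.0)  # positive

# With elastic demand (elasticity 2 > 1), the same price increase
# reduces total revenue:
elastic_change = revenue(12.0, elasticity=2.0) - revenue(10.0, elasticity=2.0)  # negative
```

The contrast shows why the text stresses eliminating substitution: only when the cartel covers most close substitutes is demand inelastic enough for a price increase to raise revenue.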

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found that they were guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than "raw" energy, such as that available from coal and water. The evolution of energy sources for lighting continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and declined as well, while natural gas and LP (or liquefied petroleum) gas remained relatively unimportant. These changes, especially the decline of the coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply due to productivity advances in coal production and in the decreasing demand for coal. The demand fell as industries began turning from coal to electricity and because of productivity advances in the use of coal to create energy in steel, railroads, and electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by the steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally just reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many miners to their home region. The local alternatives were few, and ignorance of alternatives outside the rural Appalachian areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California field in 1921. New discoveries in Powell, Texas and Smackover, Arkansas further increased the supply of petroleum in 1921. New supply increases occurred from 1926 to 1928 with petroleum strikes in Seminole, Oklahoma and Hendricks, Texas. The supply of oil increased sharply in 1930 and 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down real oil prices and the prices of petroleum derivatives, and the growing production capacity led to a general declining trend in petroleum prices. McMillin and Parker (1994) argue that the supply shocks generated by these new discoveries were a factor in the business cycles of the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.
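
The yield arithmetic in this paragraph can be made concrete with a short sketch. The 15 and 45 percent yields are the figures cited above; the 42-gallon barrel is the standard U.S. measure and is my added assumption for the illustration.

```python
# Gallons of gasoline refined from one barrel of crude at a given yield.
# Yields (15% by distillation, 45% with cracking) come from the text;
# the 42-gallon barrel is a standard figure assumed for illustration.
BARREL_GALLONS = 42

def gasoline_per_barrel(yield_fraction):
    """Gallons of gasoline obtained from a 42-gallon barrel of crude."""
    return BARREL_GALLONS * yield_fraction

distilled = gasoline_per_barrel(0.15)  # pre-cracking distillation
cracked = gasoline_per_barrel(0.45)    # with pressurized cracking
print(distilled, cracked)  # roughly 6.3 and 18.9 gallons per barrel
```

On these assumptions, cracking roughly tripled the gasoline obtainable from each barrel of crude.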

The petroleum companies also developed new ways to distribute gasoline to motorists that made it more convenient to purchase gasoline. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans, and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or to contract with independent stations to exclusively distribute their gasoline. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws and were quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although generally such laws were passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist this.

Electricity

By the mid 1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and greater distance over which electricity could be transmitted more than offset the necessity for transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and the development of an efficient, lower cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing the electrical appliances for homes and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and the calculation of the rate base led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost of making these calculations, rates tended to be in the hands of the electric utilities, which, it has been suggested, did not lower rates adequately to reflect the rising productivity and lowered costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not this increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.
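
The segmentation just described matches the standard textbook markup rule for a seller able to separate customer classes (the formula is a gloss added here, not something stated in the original): for each class $i$ with price elasticity of demand $\varepsilon_i$,

```latex
\frac{P_i - MC}{P_i} = \frac{1}{\lvert \varepsilon_i \rvert}
```

so the less elastic a class’s demand (the smaller $\lvert \varepsilon_i \rvert$), the higher the markup over marginal cost per kilowatt-hour that class pays.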

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but doing so fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 represented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates that were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for the railroad when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
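
The recapture mechanism can be sketched numerically. The 6 percent threshold and the half-and-half division are from the act as described above; the dollar figures are hypothetical.

```python
def recapture(fair_value, net_earnings, threshold=0.06):
    """Split earnings above the 6% threshold per the 1920 act's recapture
    clause: half to the railroad's own contingency fund, half to an ICC
    fund for loans to other railroads. Figures here are hypothetical."""
    excess = max(0.0, net_earnings - threshold * fair_value)
    return excess / 2, excess / 2  # (road's own fund, ICC loan fund)

# Hypothetical road: $10 million fair value earning $800,000 (an 8% return).
own_fund, icc_fund = recapture(10_000_000, 800_000)
print(own_fund, icc_fund)  # $100,000 to each fund
```

A road earning 6 percent or less on its fair value retains all of its earnings; only the excess above the threshold is recaptured.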

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act was directed to encourage railroad consolidation, but little came of this in the 1920s. In order to facilitate its control of the railroads, the ICC was given two additional powers. The first was the control over the issuance or purchase of securities by railroads, and the second was the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic exited from the railroads much more quickly. As the network of all weather surfaced roads increased, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were incorporated into the Greyhound Buslines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal Road-Aid Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these came primarily from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) However, the use of gasoline taxes moved closer to the goal of users paying the costs of the highways. Nor did trucks have to pay for all of the highway construction, because automobiles jointly used the highways. Highways had to be constructed in more costly ways in order to accommodate the larger and heavier trucks, so ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of the truck traffic. Gasoline taxes tended to do this.

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. But, by 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone, as long distance telephone calls between the east and west coasts, made possible by new electronic amplifiers, began in 1915. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor in this was the dramatic decline in farm incomes in the early twenties. The second reason was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation and the telephone was no longer as crucial.

Ottmar Mergenthaler’s development of the linotype machine in the late nineteenth century had irrevocably altered printing and publishing. This machine, which quickly created a line of soft, lead-based metal type that could be printed, melted down, and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual cast letter matrices picked out from compartments in drawers to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual letter matrix placed back into its compartment in its drawer for use in the next printing job. Under this laborious process, newspapers often were not published every day and did not contain many pages, with the result that most cities had many newspapers.

In contrast, the linotype used a keyboard upon which the operator typed the words in one of the lines in a news column. Matrices for each letter dropped down from a magazine of matrices as the operator typed each letter and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed, the machine mechanically cast the line of matrices into a line of lead type. The line of lead type was ejected into a tray, and the letter matrices mechanically returned to the magazine while the operator continued typing the next line in the news story. The first Mergenthaler linotype machine was installed at the New York Tribune in 1886. By dramatically lowering the costs of printing newspapers (as well as books and magazines), the linotype allowed newspapers to grow in size, and they began to be published more regularly. A process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler linotype machine the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse’s KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations on the radio dial and deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912 Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared, and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act, except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote this off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened up a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system where individuals could purchase time to broadcast a message transmitted to other stations in the toll network using AT&T’s long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee anytime ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station’s power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T’s creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience and, in return, to receive a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became “department stores of finance.” Banks opened installment (or personal) loan departments, expanded their mortgage lending, opened trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers’ control and reduced lending during the 1920-21 depression, began relying more on retained earnings and stock and bond issues to raise investment funds and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties, only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. A high bank failure rate in the 1920s has usually been explained by “overbanking,” or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there had been overbanking, on average each bank would have been underutilized, resulting in intense competition for deposits, higher costs, and lower earnings; the usual explanation was the free entry of banks as long as they met the minimum requirements then in force. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable had those changes not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the decade. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis of the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s, commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning-asset portfolios and gained expertise in the securities markets, larger ones established investment departments, and by the late twenties banks were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities markets exhibited perhaps the most dramatic growth among the noncommercial-bank financial intermediaries during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. The mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts’ interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties, especially common and preferred stock, and in the trading of existing shares of those securities. (Figure 24) The late-twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities, the automobile manufacturers produced over four and a half million new cars in 1929, and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. “Playing the market” seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3 and then slid to 320 on October 21. In the following week the stock market “crashed,” with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198, 183 points less than the September 3 peak.

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of rising stock prices. Stock pools, which were not made illegal until the Securities Exchange Act of 1934, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool, a group of speculators would pool large amounts of their funds and then begin purchasing large blocks of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would “churn” the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would buy in as well. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock’s price usually fell quickly, bringing large losses for the unsuspecting outside investors while reaping large gains for the pool insiders.
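The arithmetic of the pool scheme can be put in a toy model. Everything below is a hypothetical illustration: the lot sizes, starting price, and the assumption that each purchase bids the price up by a fixed percentage are mine, not historical data.

```python
def run_pool(start_price=20.0):
    """Toy sketch of a 1920s stock pool: accumulate, attract outsiders, dump."""
    price = start_price
    pool_cost = pool_shares = 0
    # Phase 1: pool insiders buy; their demand bids the price up.
    for _ in range(10):
        pool_cost += price * 1000
        pool_shares += 1000
        price *= 1.02
    # Phase 2: outsiders see the rise and buy, lifting the price further.
    outsider_cost = outsider_shares = 0
    for _ in range(10):
        outsider_cost += price * 1000
        outsider_shares += 1000
        price *= 1.01
    # Phase 3: the pool sells everything at the inflated price...
    pool_gain = pool_shares * price - pool_cost
    # ...and with the pool's demand gone, the price falls back.
    price = start_price
    outsider_gain = outsider_shares * price - outsider_cost
    return pool_gain, outsider_gain

pool_gain, outsider_gain = run_pool()
print(round(pool_gain), round(outsider_gain))  # insiders gain, outsiders lose
```

Under these assumptions the insiders exit with a profit while the outsiders, left holding shares at the collapsed price, bear a roughly offsetting loss.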

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades: the usual requirement was 10 to 15 percent of the purchase price, and apparently more often around 10 percent. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in late 1928, well before the crash and at the urging of a special New York Clearinghouse committee, and by the fall of 1929 margin requirements were, on average, the highest in the history of the New York Stock Exchange. One brokerage house required the following of its clients: securities with a selling price below $10 could only be purchased for cash; securities with a selling price of $10 to $20 required a 50 percent margin; securities of $20 to $30, a 40 percent margin; and securities with a price above $30, a margin of 30 percent of the purchase price. In the first half of 1929 margin requirements on customers’ accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw the brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
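The tiered schedule quoted for that one brokerage house can be expressed as a small function. This is a sketch of the schedule as described above; the function name and the treatment of prices falling exactly on a tier boundary are my assumptions.

```python
def required_margin(price_per_share, shares):
    """Cash a customer must put up under the tiered 1928-29 schedule
    described above (boundary handling is an assumption)."""
    cost = price_per_share * shares
    if price_per_share < 10:      # below $10: cash only
        rate = 1.00
    elif price_per_share <= 20:   # $10 to $20: 50 percent margin
        rate = 0.50
    elif price_per_share <= 30:   # $20 to $30: 40 percent margin
        rate = 0.40
    else:                         # above $30: 30 percent margin
        rate = 0.30
    return round(cost * rate, 2)

# 100 shares at $25: 40 percent of $2,500, so the customer puts up
# $1,000 and the broker lends the remaining $1,500.
print(required_margin(25, 100))  # 1000.0
```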

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin that week. On Black Thursday, October 24, prices initially fell sharply but rallied somewhat in the afternoon, so that the net loss was only 7 points; the volume of thirteen million shares, however, set an NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow Jones index fell 38 points on a volume of nine million shares, three million of them in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow Jones index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to rise slowly, and by April of 1930 the index had increased 96 points from the low of November 13, leaving it “only” 87 points below the peak of September 3, 1929. From that point, stock prices resumed their depressing decline until the low point was reached in the summer of 1932.

 

There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, “The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before.” But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929 stock prices were where they should have been, and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash: the market broke each time news arrived of advances in congressional consideration of the Hawley-Smoot tariff. However, the virtually perfect foresight that Wanniski’s explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found a relative constancy that did not suggest stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person’s subjective expectations of each firm’s future earnings and dividends and of the future prices of shares of each firm’s stock. Because of this element of subjectivity, not only can we never accurately know those values, but we can also never know how they varied among individuals. The market price we observe is the end result of all of the actions of the market participants, and the observed price may be different from the price almost all of the participants expected.

In fact, there are some indications that there were differences in 1928 and 1929. Yields on common stocks were somewhat lower in 1928 and 1929. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers’ loans through 1927, the rates on brokers’ loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow-Jones index and related this to the DJI. Through 1927 the two track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andrei Shleifer (1991). They examined closed-end mutual funds, a type of fund in which investors wishing to liquidate must sell their shares to other individual investors; because such a fund’s portfolio is known, its fundamental value is exactly measurable. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929 the Standard and Poor’s composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929: there was a sharp divergence between the growth of stock prices and dividends; there were increasing premiums on call and time brokers’ loans in 1928 and 1929; margin requirements rose; and stock market volatility rose in the wake of the 1929 stock market crash.

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that “While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends.” As a result, investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated, and they were more likely to be caught up in the euphoria of the boom and bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks, and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors in the experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals govern the overall movements. And the end of the long bull market was almost certainly governed by this. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed as better information on the contraction was received. There were repeated statements by leading figures that stocks were “overpriced,” and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased, and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell further. The late October crash made the decline occur much more rapidly, and the margin purchases and consequent forced selling of many of those stocks contributed to a more severe price fall. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The stock market crash did make the downturn become more severe beginning in November 1929. It reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty helping to bring on the contraction (Flacco and Parker, 1992). Though stock market prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either decide to buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.
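The profit logic of the one-price policy can be put in simple arithmetic. The numbers below are purely hypothetical; the point is only that a smaller markup combined with faster inventory turnover can yield a larger annual gross profit.

```python
def annual_gross_profit(avg_inventory_cost, markup, turns_per_year):
    # Each time the inventory turns over, the store earns the markup on it.
    return avg_inventory_cost * markup * turns_per_year

# Hypothetical haggling store: 50% markup, inventory turns twice a year.
haggling = annual_gross_profit(10_000, 0.50, 2)    # 10000.0
# Hypothetical one-price store: 25% markup, inventory turns eight times.
one_price = annual_gross_profit(10_000, 0.25, 8)   # 20000.0
print(haggling, one_price)
```

On these assumed figures the low-markup store earns twice the gross profit of the high-markup store, which is the mechanism the passage describes.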

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century, and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both were located in Chicago due to its central location in the nation’s rail network, and both had benefited from the advent of Rural Free Delivery in 1896 and low-cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these stores in the central business district (CBD), Wood located many on major streets closer to the residential areas. These moves of Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

Shopping centers, another innovation in retailing that began in the twenties, were not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as the ownership and use of the car began expanding, population began to move out of the crowded central cities toward the more open suburbs. When General Robert Wood set Sears off on its development of urban stores, he located these not in the central business district (CBD) but as free-standing stores on major arteries away from the CBD, with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
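The textbook adjustment mechanism described above can be sketched as a toy simulation. This is the stylized two-country price-specie-flow model, not a calibration: the adjustment coefficients and the assumption that price levels move one-for-one with gold-backed money stocks are illustrative.

```python
def adjust(gold_a, gold_b, years=100):
    """Stylized price-specie flow: the deficit country ships gold,
    its money stock and prices fall, and the deficit shrinks."""
    for _ in range(years):
        # Quantity theory (stylized): price level proportional to gold stock.
        p_a, p_b = gold_a, gold_b
        # Country A's trade deficit grows with its relative price level.
        deficit_a = 0.5 * (p_a - p_b)
        # Gold settles the imbalance: the deficit country exports gold.
        flow = 0.1 * deficit_a
        gold_a -= flow
        gold_b += flow
    return gold_a, gold_b

a, b = adjust(120.0, 80.0)
print(round(a, 1), round(b, 1))  # 100.0 100.0 -- the imbalance is eliminated
```

The equilibrating force is the feedback loop: the gold outflow itself removes the price gap that caused it. It is exactly this self-correcting mechanism that, as the next paragraphs explain, the interwar arrangements disabled.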

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped domestic circulation of gold. Second, the “gold exchange” system was created. Most countries held their international reserves in the form of U.S. dollars or British pounds and international transactions used dollars or pounds, as long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed exchange rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to the importation of gold that they did not allow to expand the money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal actions. First, the United States had to run an import surplus or, on net, export capital out of the United States to provide a pool of dollars overseas. Germany then had either to have an export surplus or else import American capital so as to build up dollar reserves—that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries that then shipped them back to the United States as payment on their U.S. debts. If these conditions did not occur, (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated) disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act, which raised tariffs, particularly on manufactured goods. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff of 1921, and its protection on many items was extremely high, ranging from 60 to 100 percent ad valorem (that is, as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those of the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930, and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929, the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on, finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain, and hence allowed those countries to pay their debts to the United States, came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938), and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

By 1920 the United States held the largest single share of the world’s monetary gold, about 40 percent. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which they lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. Gold was also entering the United States so that foreigners could obtain dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government: first, that federal spending on public works could be an important force in reducing unemployment, and second, that public policy could be used to stimulate private investment. (Smiley and Keehn, 1995) Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With the onset of the First World War, the rates were dramatically increased. To obtain additional revenue in 1918, marginal rates were again increased. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that more than 30 percent of the nation’s income recipients were subject to income taxes by 1918. However, through the purchase of tax-exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high-income taxpayers and their share of total taxes paid declined as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919 but the surtax rates, which made the income tax highly progressive, were retained. (Smiley and Keehn, 1995)

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how the rates should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the tax rates. They believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent.

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)
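The cuts described above operated through marginal brackets: each rate applies only to the slice of income falling inside its bracket. A minimal sketch of that mechanics, using hypothetical brackets (not the actual 1921-1925 schedules) to show why lowering the top rate from 73 to 25 percent mattered most for the highest incomes:

```python
def tax_owed(income: float, brackets: list) -> float:
    """Tax under a marginal-rate schedule.

    brackets: list of (threshold, rate) in ascending order; each rate applies
    to income above its threshold and below the next threshold.
    """
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax

# Hypothetical three-bracket schedules with top rates of 73% and 25%.
old_schedule = [(0, 0.04), (50_000, 0.30), (200_000, 0.73)]
new_schedule = [(0, 0.02), (50_000, 0.15), (200_000, 0.25)]

# On a $1,000,000 income: about $631,000 owed under the old schedule
# versus about $223,500 under the new one.
print(tax_owed(1_000_000, old_schedule), tax_owed(1_000_000, new_schedule))
```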

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to retire outstanding federal debt, which declined by 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at or close to full employment throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out as currency and held in a vault somewhere.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create 12 district central banks when it established the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the 12 district banks; it was composed of five presidential appointees plus the current secretary of the treasury and comptroller of the currency. All national banks had to become members of the Federal Reserve System, the Fed, and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be on deposit in the district bank. Commercial banks were allowed to rediscount commercial paper and were given Federal Reserve currency in return. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations: the purchase and sale of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary roles were to be a lender of last resort to prevent banking panics and to serve as a check-clearing mechanism for the nation’s banks. Both the Federal Reserve Board and the governors of the district banks were to jointly exercise these functions. The division of functions was not clear, and a struggle for power ensued, mainly between the New York Federal Reserve Bank, which was led through 1928 by J. P. Morgan’s protégé, Benjamin Strong, and the Federal Reserve Board. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the Gold Standard and the Real Bills Doctrine. The Gold Standard was supposed to be quasi-automatic, with an effective limit to the quantity of money. However, the Real Bills Doctrine (which required that all loans be made on short-term, self-liquidating commercial paper) had no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to lead to the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. Actually the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and granted discounts on banks’ customer loans used to purchase Victory bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918; in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and discount the Victory bonds necessary to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action was past and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were rapidly falling, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered yet again to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and increased the discount rate from 4 percent because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, as well as the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter this. Between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three separate steps. In addition to moderating the mild business slump, the expansionary policy was intended to reduce American interest rates relative to British interest rates. This reversed the gold flow back toward Great Britain, allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States and especially in Southeastern Florida. Land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market boom. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed but the stock market boom continued.

The American economy entered into another mild business recession in the fall of 1926 that lasted until the fall of 1927. One of the factors in this was Henry Ford’s shutdown of all of his factories to change over from the Model T to the Model A. His employees were left without a job and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began the Fed had already taken steps to counteract the business slump and reduce the gold inflow. In early 1927 the Fed reduced discount rates and made large securities purchases. One result of this was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the gold exports went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to maintain the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring that to an end. To do this they sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district Bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York City bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it, and the other district banks, were unwilling to do. They insisted that discount rates had to be increased. The Federal Reserve Board countered that this general policy change would slow down economic activity in general rather than be specifically targeted at stock market speculation. The result was that little was done for a year: rates were not raised, and no open market purchases were undertaken. Rates were finally raised to 6 percent in August of 1929. By that time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced the discount rate to 4.5 percent. In January it cut the rate again and continued a series of reductions until the rate reached 2.5 percent at the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the southeast in November and December of 1930, and in its wake the public’s holding of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.
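The link between those last two ratios and the money stock can be made concrete with the standard textbook money-multiplier identity, a sketch not taken from this article and using purely illustrative numbers: with a currency-to-deposit ratio c and a bank reserve ratio r, a monetary base B supports a money stock M = B(1 + c)/(c + r), so rises in both c and r shrink M even when the base is unchanged.

```python
def money_stock(base: float, c: float, r: float) -> float:
    """Money stock implied by base B, currency/deposit ratio c, reserve ratio r."""
    return base * (1 + c) / (c + r)

# Illustrative values only: the public holding more currency (c up) and banks
# holding more reserves (r up) both lower the multiplier, so M falls.
before = money_stock(7.0, c=0.10, r=0.10)
after = money_stock(7.0, c=0.20, r=0.15)
print(f"before: {before:.1f}, after: {after:.1f}")  # M falls as c and r rise
```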

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not forecast the contraction from 1929 to 1933. Rather it was the depression of the 1930s and the Second World War that interrupted the economic growth begun in the 1920s and resumed after the Second World War. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress resume. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, are echoed in the 1990s in the effects of the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Sons, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L, et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Erik. A Tower in Babel: A History of Broadcasting in the United States: Vol. I—to 1933. New York: Oxford University Press, 1966.

Barnouw, Erik. The Golden Web: A History of Broadcasting in the United States: Vol. II—1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, November 17, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Kenneth Elzinga. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allen Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company,” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, Edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: the Belknap Press Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century-Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (1991): 675-700.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (October 1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Metheun, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics XII (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University of Kansas Press, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History 11 (1987): 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Sigfried. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (July 1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelley, reprint of the 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (June 1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Libecap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, Ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: NBER, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1967): 243-47.

MacDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises Edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 48 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr., U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and W. Douglas McMillin. “An Empirical Analysis of Oil Price Shocks During the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et. al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876,” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 91 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History. 2d series. 17 (1988): 129-42.

Smiley, Gene and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” the Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneuers: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review, 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review. 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression. New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions,” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Cast Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples, August 15 2001 URL— http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 87 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives. 4 (Spring 1990): 67-83.

White, Eugene N., Ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples, August 15 2001 URL http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s”. EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

The Economic History of Major League Baseball

Michael J. Haupert, University of Wisconsin — La Crosse

“The reason baseball calls itself a game is because it’s too screwed up to be a business” — Jim Bouton, author and former MLB player

Origins

The origin of modern baseball is usually dated to the formal organization of the New York Knickerbocker Base Ball Club in 1842; the rules the Knickerbockers played by evolved into the rules of the organized leagues surviving today. In 1845 the club organized into a dues-paying body in order to rent the Elysian Fields in Hoboken, New Jersey, and play its games on a regular basis. Teams of this era were typically amateur in name but almost always featured a few players who were covertly paid. The National Association of Base Ball Players, organized in 1858 in recognition of the profit potential of baseball, formalized playing rules and created an administrative structure. The first admission fee (50 cents) was charged that year for an all-star game between the Brooklyn and New York clubs. The original association had 22 teams and was decidedly amateur in theory, if not in practice, banning direct financial compensation for players. In reality, of course, the ban was freely and wantonly ignored: teams paid players under the table, and players regularly jumped from one club to another for better pay.

The Demand for Baseball

Before there were professional players, it was already recognized that people were willing to pay to see grown men play baseball. The demand for baseball extends beyond attendance at live games to television, radio, and print. As with most other forms of entertainment, the demand ranges from casual interest to a fanatical following. Many tertiary industries have grown up around the demand for baseball, and for sports in general, including the sports magazine trade, dedicated sports television and radio stations, tour companies specializing in sports trips, and an active memorabilia industry. While not all of this is devoted exclusively to baseball, it is indicative of the passion for sports, including baseball.

A live baseball game is consumed at the same time as the last stage of its production. Like an airline seat or a hotel room, it is a highly perishable good that cannot be inventoried, which makes price discrimination possible. Since the earliest days of paid attendance, teams have discriminated based on seat location and on the sex and age of the patron. The first “ladies day,” which offered free admission to any woman accompanied by a man, was offered by the Gotham club in 1883, and the tradition would last for nearly a century. Only recently have teams begun to exploit the full potential of price discrimination by varying ticket prices according to the expected quality, date, and time of the game.
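The revenue logic behind price discrimination can be sketched with a toy calculation; the seating segments and willingness-to-pay figures below are invented for illustration, not drawn from any team's actual pricing.

```python
# Toy illustration of price discrimination at the ballpark.
# Segments and willingness-to-pay figures are hypothetical.
segments = {
    "box seats":  (40.0, 5_000),    # (willingness to pay in $, seats demanded)
    "grandstand": (18.0, 15_000),
    "bleachers":  (8.0, 10_000),
}

# A discriminating seller charges each segment its own price.
discriminated = sum(price * qty for price, qty in segments.values())

# A uniform pricer must pick one price; any price above a group's
# willingness to pay loses that group entirely.
def uniform_revenue(price):
    return price * sum(qty for p, qty in segments.values() if p >= price)

best_uniform = max(uniform_revenue(p) for p, _ in segments.values())

print(discriminated, best_uniform)  # discrimination never does worse
```

With these made-up numbers, segment pricing collects $550,000 against $360,000 under the best single price, which is why teams discriminate by seat location and patron type.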

Baseball and the Media

Telegraph Rights

Baseball and the media have enjoyed a symbiotic relationship since newspapers began regularly covering games in the 1860s. Games in progress were broadcast by telegraph to saloons as early as the 1890s. In 1897 the first sale of broadcast rights took place. Each team received $300 in free telegrams as part of a league-wide contract to transmit game play-by-play over the telegraph wire. In 1913 Western Union paid each team $17,000 per year over five years for the rights to broadcast the games. The movie industry purchased the rights to film and show the highlights of the 1910 World Series for $500. In 1911 the owners managed to increase that rights fee to $3500.

Radio

It is hard to imagine that Major League Baseball (MLB) teams once saw the media as a threat to the value of their franchises. But originally, they resisted putting their games on the radio for fear that customers would stay home and listen to the game for free rather than come to the park. They soon discovered that radio (and eventually television) was a source of income and free advertising, helping to attract even more fans as well as serving as an additional source of revenue. By 2002, media revenue exceeded gate revenue for the average MLB team.

Originally, local radio broadcasts were the only source of media revenue. National radio broadcasts of regular-season games were added in 1950 by the Liberty Broadcasting System, but the contract lasted only one year before radio reverted to local broadcasting. The World Series, however, has been nationally broadcast since 1922. For national broadcasts, the league negotiates a contract with a provider and splits the proceeds equally among all the teams; national radio and television contracts thus enrich the pot for all teams on an equal basis.

In the early days of radio, teams saw the broadcasting of their games as free publicity, and charged little or nothing for the rights. The Chicago Cubs were the first team to regularly broadcast their home games, giving them away to local radio in 1925. It would be another fourteen years, however, before every team began regular radio broadcasts of their games.

Television

The year 1939 also saw the first televised game, broadcast on an experimental basis. In 1946 the New York Yankees became the first team with a local television contract when they sold the rights to their games for $75,000; by the end of the century they sold those same rights for $52 million per season. By 1951 the World Series was a television staple, and by 1955 every team sold at least some of its games to local television. In 1966 MLB followed the lead of the NFL and sold its first national television package, netting $300,000 per team. The latest national television contract paid $24 million to each team in 2002.

Table 1:

MLB Television Revenue, Ticket Prices and Average Player Salary 1964-2002

(real (inflation-adjusted) values are in 2002 dollars)

Year | TV revenue, nominal ($ millions) | TV revenue, real ($ millions) | Ticket price, nominal ($) | Ticket price, real ($) | Player salary, nominal ($) | Player salary, real ($)
1964 | 21.28 | 123 | 2.25 | 13.01 | 14,863 | 85,909
1965 | 25.67 | 146 | 2.29 | 13.02 | 14,341 | 81,565
1966 | 27.04 | 149 | 2.35 | 12.95 | 17,664 | 97,335
1967 | 28.93 | 156 | 2.37 | 12.78 | 19,000 | 102,454
1968 | 31.04 | 160 | 2.44 | 12.58 | 20,632 | 106,351
1969 | 38.04 | 186 | 2.61 | 12.76 | 24,909 | 121,795
1970 | 38.09 | 176 | 2.72 | 12.57 | 29,303 | 135,398
1971 | 40.70 | 180 | 2.91 | 12.87 | 31,543 | 139,502
1972 | 41.09 | 176 | 2.95 | 12.64 | 34,092 | 146,026
1973 | 42.39 | 171 | 2.98 | 12.02 | 36,566 | 147,506
1974 | 43.25 | 157 | 3.10 | 11.25 | 40,839 | 148,248
1975 | 44.21 | 147 | 3.30 | 10.97 | 44,676 | 148,549
1976 | 50.01 | 158 | 3.45 | 10.90 | 52,300 | 165,235
1977 | 52.21 | 154 | 3.69 | 10.88 | 74,000 | 218,272
1978 | 52.31 | 144 | 3.98 | 10.96 | 97,800 | 269,226
1979 | 54.50 | 135 | 4.12 | 10.21 | 121,900 | 301,954
1980 | 80.00 | 174 | 4.45 | 9.68 | 146,500 | 318,638
1981 | 89.10 | 176 | 4.93 | 9.74 | 196,500 | 388,148
1982 | 117.60 | 219 | 5.17 | 9.63 | 245,000 | 456,250
1983 | 153.70 | 277 | 5.69 | 10.25 | 289,000 | 520,839
1984 | 268.40 | 464 | 5.81 | 10.04 | 325,900 | 563,404
1985 | 280.50 | 468 | 6.08 | 10.14 | 368,998 | 615,654
1986 | 321.60 | 527 | 6.21 | 10.18 | 410,517 | 672,707
1987 | 349.80 | 553 | 6.21 | 9.82 | 402,579 | 636,438
1988 | 364.10 | 526 | 6.21 | 8.97 | 430,688 | 622,197
1989 | 246.50 | 357 | — | — | 489,539 | 708,988
1990 | 659.30 | 907 | — | — | 589,483 | 810,953
1991 | 664.30 | 877 | 8.84 | 11.67 | 845,383 | 1,116,063
1992 | 363.00 | 465 | 9.41 | 12.05 | 1,012,424 | 1,296,907
1993 | 618.25 | 769 | 9.73 | 12.10 | 1,062,780 | 1,321,921
1994 | 716.05 | 868 | 10.62 | 12.87 | 1,154,486 | 1,399,475
1995 | 516.40 | 609 | 10.76 | 12.69 | 1,094,440 | 1,290,693
1996 | 706.30 | 810 | 11.32 | 12.98 | 1,101,455 | 1,263,172
1997 | — | — | 12.06 | 13.51 | 1,314,420 | 1,472,150
1998 | — | — | 13.58 | 14.94 | 1,378,506 | 1,516,357
1999 | — | — | 14.45 | 15.61 | 1,726,282.68 | 1,864,385
2000 | — | — | 16.22 | 16.87 | 1,987,543.03 | 2,067,045
2001 | 1,291.06 | 1,310 | 17.20 | 17.45 | 2,343,710 | 2,378,093
2002 | — | — | 17.85 | 17.85 | 2,385,903.07 | 2,385,903

Notes: The 1989 and 1992 figures include national TV revenue only, with no local TV included. Real values are calculated using the Consumer Price Index.
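The nominal-to-real conversion behind the table can be sketched as follows. The CPI figures used here are stand-in assumptions (approximate annual CPI-U averages), not necessarily the exact series the table was built from, so the result only roughly matches the tabulated value.

```python
# Convert a nominal dollar amount to real (2002) dollars using the CPI.
# CPI levels below are approximate annual averages, assumed for illustration.
def to_real_2002(nominal: float, cpi_year: float, cpi_2002: float = 179.9) -> float:
    """Scale a nominal amount by the ratio of the 2002 CPI to that year's CPI."""
    return nominal * cpi_2002 / cpi_year

# Example: a $2.25 ticket in 1964, assuming a 1964 CPI of 31.0,
# comes to about $13.06 in 2002 dollars -- close to the table's $13.01.
print(round(to_real_2002(2.25, 31.0), 2))
```

The small discrepancy from the tabulated value reflects the assumed CPI levels; any deflator series would follow the same ratio formula.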

As the importance of local media contracts grew, so did the problems associated with them. As cable and pay-per-view television became more popular, teams found them attractive sources of revenue. A fledgling cable channel could make its reputation by carrying the local ball team, and in a large enough market this could mean substantial payments to the local team. These local contracts paid only the home team. The problem from MLB’s point of view was not the income itself but the variance in that income. That variance has increased over time, and it is the primary source of the gap in payrolls, which drives the gap in team quality cited as the “competitive balance problem.” In 1962 the MLB average for local media income was $640,000, ranging from a low of $300,000 (Washington) to a high of $1.2 million (New York Yankees). In 2001 the average team garnered $19 million from local radio and television contracts, but the gap between bottom and top had widened to an incredible $51.5 million: the Montreal Expos received $536,000 for their local broadcast rights, while the New York Yankees received more than $52 million for theirs. Revenue sharing has redistributed some of these funds from the wealthiest to the poorest teams, but its impact on the competitive balance problem remains to be seen.
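The dispersion described above can be checked directly from the quoted figures; the Yankees' 2001 figure is entered as an approximation ($52 million, reported only as "more than $52 million").

```python
# Local media revenue dispersion, using the figures quoted in the text.
# The 2001 high is an approximation of the reported "more than $52 million".
low_1962, high_1962 = 300_000, 1_200_000        # Washington vs. Yankees
low_2001, high_2001 = 536_000, 52_000_000       # Expos vs. Yankees (approx.)

gap_1962 = high_1962 - low_1962                 # absolute spread in 1962
gap_2001 = high_2001 - low_2001                 # roughly the $51.5M cited

print(gap_1962, gap_2001, high_2001 / low_2001)
```

The top-to-bottom ratio grew from 4:1 in 1962 to nearly 100:1 in 2001, which is the variance problem the paragraph describes.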


Franchise Values

Baseball has been about profits since the first admission fee was charged. The first professional league, the National Association, founded in 1871, charged a $10 franchise fee. The latest teams to join MLB paid $130 million apiece for the privilege in 1998.

Early Ownership Patterns

The value of franchises has mushroomed over time. In the early part of the twentieth century, owning a baseball team was a career choice for a wealthy sportsman. In some instances, it was a natural choice for someone with a financial interest in a related business, such as a brewery, that provided complementary goods. More commonly, the operation of a baseball team was the full-time occupation of the owner, who was usually one individual, occasionally a partnership, but never a corporation.

Corporate Ownership

This model of ownership has since changed. The typical owner of a baseball team is now either a conglomerate, such as Disney, AOL Time Warner, or the Chicago Tribune Company, or a wealthy individual who owns a (sometimes) related business and operates the baseball team on the side, perhaps as a hobby or as a complementary business. The transition began when the tax benefits of owning a baseball team became significant enough to be worth more to a wealthy conglomerate than to a family owner. A team that shows a negative bottom line while delivering a positive cash flow can generate significant tax benefits by offsetting income from another business; an owner with no other source of income gets no such break from showing a loss on the team.

Another advantage of corporate ownership is the ability to cross-market products. The Tribune Company, for example, owns the Chicago Cubs and uses the team as part of its television programming. If it is more profitable for the company to show income on the Tribune ledger than on the Cubs ledger, it simply decreases the payment made to the team for the broadcast rights to its games.

One important source of the tax advantage of owning a franchise is the ability to depreciate player contracts. In 1935 the IRS ruled that baseball teams could depreciate the value of their player contracts. This is an anomaly, since labor is not normally a depreciating asset.
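The tax arithmetic works roughly as follows; every dollar figure in this sketch is hypothetical, invented only to show how a non-cash depreciation charge turns a positive cash flow into a paper loss that shelters other income.

```python
# Hypothetical illustration of the player-contract depreciation shelter.
# All dollar amounts and the tax rate are invented for the example.
revenue           = 10_000_000   # cash coming in
cash_expenses     =  8_000_000   # salaries, travel, stadium costs actually paid
contract_depr     =  3_000_000   # non-cash depreciation of player contracts
marginal_tax_rate = 0.50         # owner's assumed rate on other income

cash_flow   = revenue - cash_expenses                  # positive: +2,000,000
book_income = revenue - cash_expenses - contract_depr  # paper "loss": -1,000,000

# The paper loss offsets income from the owner's other businesses.
tax_saved = -book_income * marginal_tax_rate

print(cash_flow, book_income, tax_saved)
```

The team pockets $2 million in cash yet reports a $1 million loss, and that loss is worth $500,000 in avoided taxes to an owner with other income, but nothing to an owner without one.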

Table 2: Comparative Prices for MLB Salaries, Tickets and Franchise Values for Selected Years

Nominal values

Year | Salary minimum ($000) | Salary mean ($000) | Salary maximum ($000) | Average ticket price ($) | Average franchise value ($ millions)
1920 | — | 5 | 20 | 1.00 | 0.794
1946 | — | 11.3 | 18.5 | 1.40 | 2.5
1950 | — | 13.3 | 45 | 1.54 | 2.54
1960 | 3 | 16 | 85 | 1.96 | 5.58
1970 | 12 | 29.3 | 78 | 2.72 | 10.13
1980 | 30 | 143.8 | 1,300 | 4.45 | 32.1
1985 | 60 | 371.2 | 2,130 | 6.08 | 40
1991 | 100 | 851.5 | 3,200 | 8.84 | 110
1994 | 109 | 1,153 | 5,975 | 10.62 | 111
1997 | 150 | 1,370 | 10,800 | 12.06 | 194
2001 | 200 | 2,261 | 22,000 | 18.42 | 286

Real values (2002 dollars)

Year | Salary minimum ($000) | Salary mean ($000) | Salary maximum ($000) | Average ticket price ($) | Average franchise value ($ millions)
1920 | — | 44.85 | 179.4 | 8.97 | 7.12
1946 | — | 104.299 | 170.755 | 12.922 | 23.075
1950 | — | 99.351 | 336.15 | 11.5038 | 18.9738
1960 | 18.24 | 97.28 | 516.8 | 11.9168 | 33.9264
1970 | 55.44 | 135.366 | 360.36 | 12.5664 | 46.8006
1980 | 65.4 | 313.484 | 2,834 | 9.701 | 69.978
1985 | 100.2 | 619.904 | 3,557.1 | 10.1536 | 66.8
1991 | 132 | 1,123.98 | 4,224 | 11.6688 | 145.2
1994 | 131.89 | 1,395.13 | 7,229.75 | 12.8502 | 134.31
1997 | 168 | 1,534.4 | 12,096 | 13.5072 | 217.28
2001 | 202 | 2,283.61 | 22,220 | 18.6042 | 288.86

The most significant change in the value of franchises has occurred in the last decade as a result of new stadium construction. A new stadium creates additional sources of revenue for a team owner, which in turn raises the value of the franchise, and it is this appreciation in franchise value that is the most profitable part of ownership. Eight new stadiums were constructed between 1991 and 1999 for existing MLB teams, and the average franchise value for the teams in those stadiums increased twenty percent in the year the new stadium opened.


The Market Structure of MLB and Players’ Organizations

Major League Baseball is a highly successful oligopoly of professional baseball teams. The teams have successfully protected themselves against competition from other leagues for more than 125 years. The closest call came in 1903, when the established National League merged with a rival league, the Western League, a former minor league that had renamed itself the American League in 1900; the merger created the structure that exists to this day. The league lost some of its power in 1976, when it lost its monopsonistic control over the player labor market, but it retains its monopolistic hold on the number and location of franchises. Franchise owners must now share a greater percentage of their revenue with the hired help, whereas prior to 1976 they controlled how much of the revenue to divert to the players.

The owners of professional baseball teams have acted in unison since the very beginning. They conspired to hold down the salaries of players with a secret reserve agreement in 1878. This created a monopsony whereby a player could only bargain with the team that originally signed him. This stranglehold on the labor market would last a century.

The baseball labor market is one of extremes. Baseball players began their labor history as amateurs whose skills quickly became highly demanded. For some, this translated into a career. Ultimately, all players became victims of a well-organized and obstinate cartel. Players lost their ability to bargain and offer their services competitively for a century. Despite several attempts to organize and a few attempts to create additional demand for their services from outside sources, they failed to win the right to sell their labor to the employer of their choice.

Beginning of Professionalization

The first team of baseball players to be openly paid was the 1869 Redstockings of Cincinnati. Prior to that, teams were organized as amateur squads who played for the pride of their hometown, club or college. The stakes in these games were bragging rights, often a trophy or loving cup, and occasionally a cash prize put up by a benefactor, or as a wager between the teams. It was inevitable that professional players would soon follow.

The first known professional players were paid under the table. The desire to win had eclipsed the desire to observe good sportsmanship, and the first step down the slope toward full professionalization of the sport had been taken. Just a few years later, in 1869, the first professional team was established. The Redstockings are as famous for being the first professional team as they are for their record and barnstorming accomplishments. The team was openly professional, and thus served as a worthy goal for other teams, amateur, semi-professional, and professional alike. The Cincinnati squad spent the next year barnstorming across America, taking on, and defeating, all challengers. In the process they drew attention to the game of baseball, and played a key part in its growing popularity. Just two years later, the first entirely professional baseball league would be established.

National Association of Professional Baseball Players

The formation of the National Association of Professional Base Ball Players in 1871 created a different level of competition for baseball players. The professional organization, which originally included nine teams, broke away from the National Association of Base Ball Players, which used amateur players; the amateur league folded three years after the split. The professional league was reorganized and renamed the National League in 1876. Originally, professional teams competed to sign players, and the best were rewarded handsomely, earning as much as $4,500 per season. This was good money, given that a skilled laborer might earn $1,200-$1,500 per year for a 60-hour work week.

This system, however, proved to be problematic. Teams competed so fiercely for players that they regularly raided each other’s rosters. It was not uncommon for players to jump from one team to another during the season for a pay increase. This not only cost team owners money, but also created havoc with the integrity of the game, as players moved among teams, causing dramatic mid-season swings in the quality of teams.

Beginning of the Reserve Clause, 1878-79

During the winter of 1878-79, team owners gathered to discuss the problem of player roster jumping. They made a secret agreement among themselves not to raid one another’s rosters during the season. Furthermore, they agreed to restrain themselves during the off-season as well. Each owner would circulate to the other owners a list of five players he intended to keep on his roster the following season. By agreement, none of the owners would offer a contract to any of these “reserved” players. Hence, the reserve clause was born. It would take nearly a century before this was struck down. In the meantime, it went from five players (about half the team) to the entire team (1883) and to a formal contract clause (1887) agreed to by the players. Owners would ultimately make such a convincing case for the necessity of the reserve clause, that players themselves testified to its necessity in the Celler Anti-monopoly Hearings in 1951.

In 1892 the minor league teams agreed to a system that allowed National League teams to draft players from their rosters. This agreement was a response to their failure to get the NL to honor their reserve clauses. In other words, what was good for the goose was not good for the gander: while NL owners agreed to honor their reserve lists among one another, they paid no such honor to the reserve lists of teams in other organized professional leagues. They believed they were at the top of the pyramid, where all the best players should be, and therefore that they would get those players when they wanted them. As part of the draft agreement, the minor league teams allowed the NL teams to select players from their rosters for fixed payments. The NL sacrificed some money, but restored a bit of order to the process, not to mention eliminated expensive bidding wars among teams for the services of minor league players.

The Players League

The first revolt by the players came in 1890, when they formed their own league, called the Players League, to compete with the National League and its rival, the American Association (AA), founded in 1882. The Players League was the first and only example of a cooperative league. The league featured profit sharing with players, an abolition of unilateral contract transfers, and no reserve clause. The competing league caused a bidding war for talent, leading to salary increases for the best players. The “war” ended after just one season, when the National League and American Association agreed to allow owners of some Players League teams to buy existing franchises. The following year, the NL and AA merged by buying out four AA franchises for $130,000 and merging the other four into the National League, to form a single twelve-team circuit.

Syndicates

This proved to be an unwieldy league arrangement, however, and some of the franchises were financially unstable. In order to preserve the structure of the league and avoid the bankruptcy of some teams, syndicate ownership evolved, in which owners purchased a controlling interest in two teams. This did not help the stability of the league. Instead, syndicates used one team to train young players and feed the best of them to the other team. This period in league history exhibits some of the greatest disparities between the best and worst teams in the league. In 1899 the Cleveland Spiders, the poor stepsister in the Cleveland-St. Louis syndicate, lost a record 134 of 154 games, a level of futility that has never been equaled. In 1900 the NL was reduced to eight teams, buying out four of the existing franchises (three of them original AA franchises) for $60,000.

Western League Competes with National League

Syndicate ownership was ended in 1900 as the final part of the reorganization of the NL. The reorganization also sparked the minor Western League to declare major league status and move some teams into NL markets for direct competition (Chicago, Boston, St. Louis, Philadelphia and Manhattan). All-out competition followed in 1901, complete with roster raiding, salary increases, and team jumping, much to the benefit of the players. Syndicate ownership appeared again in 1902, when the owners of the Pittsburgh franchise purchased an interest in the Philadelphia club. Owners briefly entertained the idea of turning the entire league into a syndicate, transferring players to the markets where they might be most valuable. The idea was dropped, however, for fear that the game would lose credibility and attendance would fall. In 1910 syndicate ownership was formally banned, though it occurred again in 2002, when the Montreal franchise was purchased by the other 29 MLB franchises as part of a three-way franchise swap involving Boston, Miami and Montreal. MLB is currently looking to sell the franchise and move it to a more profitable market.

National and American Leagues End Competition

Team owners quickly saw the light, and in 1903 they made an agreement to honor one another’s rosters. Once more the labor wars ended, this time in an agreement that would establish the major leagues as an organization of two cooperating leagues: the National League and the American League, each with eight teams, located in the largest cities east of the Mississippi (with the exception of St. Louis), and each league honoring the reserved rosters of teams in the other. This structure would prove remarkably stable, with no changes until 1953, when the Boston Braves moved to Milwaukee, becoming the first team to relocate in half a century.

Franchise Numbers and Movements

The location and number of franchises has been a tightly controlled issue for teams since leagues were first organized. Though franchise movements were not rare in the early days of the league, they have always been under the control of the league, not the individual franchise owners. An owner is accepted into the league, but may not change the location of his or her franchise without the approval of the other members of the league. In addition, moving the location of a franchise within the vicinity of another franchise requires the permission of the affected franchise. As a result, MLB franchises have been very stable over time in regard to location. The size of the league has also been stable. From the merger of the AL and NL in 1903 until 1961, the league retained the same sixteen teams. Since that time, expansion has occurred fairly regularly, increasing to its present size of 30 teams with the latest round of expansion in 1998. In 2001, the league proposed going in the other direction, suggesting that it would contract by two teams in response to an alleged fiscal crisis and breakdown in competitive balance. Those plans were postponed at least four years by the labor agreement signed in 2002.

Table 3: MLB Franchise Sales Data by Decade

Decade | Average purchase price in millions (2002 dollars) | Average annual increase in franchise sale price (%) | Average annual return on DJIA, incl. capital appreciation and dividends (%) | Average tenure of ownership (years) | Number of franchise sales
1910s | 0.585 (10.35) | n/a | n/a | 6 | 6
1920s | 1.02 (10.4) | 5.7 | 14.8 | 12 | 9
1930s | 0.673 (8.82) | -4.1 | -0.3 | 19.5 | 4
1940s | 1.56 (15.6) | 8.8 | 10.8 | 15.5 | 11
1950s | 3.52 (23.65) | 8.5 | 16.7 | 13.5 | 10
1960s | 7.64 (43.45) | 8.1 | 7.4 | 16 | 10
1970s | 12.62 (41.96) | 5.1 | 7.7 | 10 | 9
1980s | 40.7 (67.96) | 12.4 | 14.0 | 11 | 12
1990s | 172.71 (203.68) | 15.6 | 12.6 | 15.8 | 14

Note: 2002 values calculated using the Consumer Price Index for decade midpoint

Negro Leagues

Because African Americans were excluded from MLB until Jackie Robinson broke the color barrier in 1947, separate professional leagues existed for black players. The first was formed in 1920, and the last survived until 1960, though their demise was sealed by the integration of the major and minor leagues.

Relocations

As revenues dried up or new markets beckoned due to shifts in population and the decreasing cost of transcontinental transportation, franchises began relocating in the second half of the twentieth century. The period from 1953 to 1972 saw a spate of franchise relocation: teams moved to Kansas City, Minneapolis, Baltimore, Los Angeles, Oakland, Dallas and San Francisco in pursuit of new markets. Most of these moves involved one team moving out of a market it shared with another team. The last team to relocate was the Washington D.C. franchise, which moved to suburban Dallas in 1972. It was the second time in just over a decade that a franchise had moved from the nation’s capital; the original franchise, a charter member of the American League, had moved to Minneapolis in 1961. While there have been no relocations since then, there have been plenty of threats to relocate, frequently made by teams trying to get a new stadium built with public financing.

There were still a couple of challenges to the reserve clause. Until the 1960s, these came in the form of rival leagues creating competition for players, not challenges to the legality of the reserve clause itself.

Federal League and the 1922 Supreme Court Antitrust Exemption

In 1914 the Federal League debuted. The new league did not recognize the reserve clause of the existing leagues, and raided their rosters, successfully luring some of the best players to the rival league with huge salary increases. Other players benefited from the new competition, and were able to win handsome raises from their NL and AL employers in return for not jumping leagues. The Federal League folded after two seasons when some of the franchise owners were granted access to the major leagues. No new teams were added, but a few owners were allowed to purchase existing NL and AL teams.

The first attack on the organizational structure of the major leagues to reach the U.S. Supreme Court came when the shunned owner of the Baltimore club of the Federal League sued major league baseball for violation of antitrust law. Federal Baseball Club of Baltimore v. National League eventually reached the Supreme Court, which in 1922 rendered its famous decision that baseball was not interstate commerce and therefore was exempt from antitrust law.

Early Strike and Labor Relations Problems

The first player strike actually occurred in 1912. The Detroit Tigers, in a show of unity with their embattled star Ty Cobb, refused to take the field in protest of what they regarded as an unfair suspension of Cobb unless the suspension was lifted. When warned that the team faced the prospect of a forfeit and a $5,000 fine if it did not field a team, owner Frank Navin recruited local amateur players to suit up for the Tigers. The results were not surprising: a 24-2 victory for the visiting Philadelphia Athletics.

This was not an organized strike against the system per se, but it was indicative of the problems existent in the labor relations between players and owners. Cobb’s suspension was determined by the owner of the team, with no chance for a hearing for Cobb, and with no guidance from any existing labor agreement regarding suspensions. The owner was in total control, and could mete out whatever punishment for whatever length he deemed appropriate.

Mexican League

The next competing league appeared in 1946 from an unusual source: Mexico. Again, as in previous league wars, the competition benefited the players. In this case the players who benefited most were those players who were able to use Mexican League offers as leverage to gain better contracts from their major league teams. Those players who accepted offers from Mexican League teams would ultimately regret it. The league was under-financed, the playing and travel conditions far below major league standards, and the wrath of the major leagues deep. When the first paychecks were missed, the players began to head back to the U.S. However, they found no jobs waiting for them. Major League Baseball Commissioner Happy Chandler blacklisted them from the league. This led to a lawsuit, Gardella v MLB. The case was eventually settled out of court after a Federal Appeals court sided with Danny Gardella in 1949. Gardella was one of the blacklisted players who sued MLB for restraint of trade after being prevented from returning to the league after accepting a Mexican League offer for the 1946 season. While many of the players ultimately returned to the major leagues, they lost several years of their careers in the process.

Player Organizations

The first organization of baseball players came in 1885, in part a response to the reserve clause enacted by owners. The National Brotherhood of Professional Base Ball Players was not particularly successful, however. In fact, just two years later, the players agreed to the reserve clause, and it became a part of the standard player’s contract for the next 90 years.

In 1900 another player organization was founded, the Players Protective Association. Competition broke out the next year, when the Western League declared itself a major league, and became the American League. It would merge with the National League for the 1903 season, and the brief period of roster raiding and increasing player salaries ended, as both leagues agreed to recognize one another’s rosters and reserve clauses. The Players Protective Association faded into obscurity amid the brief period of increased competition and player salaries.

Failure and Consequences of the American Baseball Guild

In 1946 the foundation was laid for the current Major League Baseball Players Association (MLBPA). Labor lawyer Robert Murphy created the American Baseball Guild, a players’ organization, after holding secret talks with players. Ultimately, the players voted not to form a union; instead, following the encouragement of the owners, they formed their own committee of player representatives to bargain directly with the owners. The outcome of the negotiations was a set of changes to the standard labor contract. Up to this point, the contract had been largely dictated by the owners. It contained such features as the right to waive a player with only ten days’ notice, the right to unilaterally decrease a salary from one year to the next by any amount, and of course the reserve clause.

The players did not make major headway with the owners, but they did garner some concessions. Among them were a maximum pay cut of 25%, a minimum salary of $5000, a promise by the owners to create a pension plan, and $25 per week in living expenses for spring training camp. Until 1947, players received only expense money for spring training, no salary. The players today, despite their multimillion-dollar contracts, still receive “Murphy money” for spring training as well as a meal allowance for each day they are on the road traveling with the club.

Facing eight antitrust lawsuits in 1950, MLB asked Congress to pass a general immunity bill for all professional sports leagues. The request ultimately led to MLB’s inclusion in the Celler Anti-monopoly hearings in 1951. However, no legislative action was recommended. In fact, by this time the owners had so thoroughly convinced the players of the necessity of the reserve clause to the very survival of MLB that several players testified in favor of the monopsonistic structure of the league, citing it as necessary to maintain the competitive balance among teams that made the league viable. In 1957 the House Antitrust Subcommittee revisited the issue, once again recommending no change in the status quo.

Impacts of the Reserve Clause

Simon Rottenberg was the first economist to seriously look into professional baseball with the publication of his classic 1956 article “The Baseball Players’ Labor Market.” His conclusion, not surprisingly, was that the reserve clause transferred wealth from the players to owners, but had only a marginal impact on where the best players ended up. They would end up playing for the teams in the market in the best position to exploit their talents for the benefit of paying customers – in other words, the biggest markets: primarily New York. Given the quality of the New York teams (one in Manhattan, one in the Bronx and one in Brooklyn) during the era of Rottenberg’s study, his conclusion seems rather obvious. During the decade preceding his study, the three New York teams consistently performed better than their rivals. The New York Yankees won eight of ten American League pennants, and the two National League New York entries won eight of ten NL pennants (six for the Brooklyn Dodgers, two for the New York Giants).

Foundation of the Major League Baseball Players Association

The current players organization, the Major League Baseball Players Association, was formed in 1954. It remained in the background, however, until the players hired Marvin Miller in 1966 to head the organization. Hiring Miller, a former negotiator for the United Steelworkers, would turn out to be a stroke of genius. Miller began with a series of small gains for players, including increases in the minimum salary, pension contributions by owners, and limits on the maximum salary reduction owners could impose. The first test of the big item – the reserve clause – reached the Supreme Court in 1972.

Free Agency, Arbitration and the Reserve Clause

Curt Flood

Curt Flood, a star player for the St. Louis Cardinals, had been traded to the Philadelphia Phillies in 1970. Flood did not want to move from St. Louis, and informed both teams and the commissioner’s office that he did not intend to leave; he would play out his contract in St. Louis. Commissioner Bowie Kuhn ruled that Flood had no right to act in this way, and ordered him to play for Philadelphia or not play at all. Flood chose the latter and sued MLB for violation of antitrust laws. The case reached the Supreme Court in 1972, and the court sided with MLB in Flood v. Kuhn. The court acknowledged that the 1922 ruling that MLB was exempt from antitrust law was an anomaly and should be overturned, but it refused to overturn the decision itself, arguing instead that if Congress wanted to rectify the anomaly, it should do so. Therefore the court stood pat, and the owners felt the case was settled permanently: the reserve clause had once again withstood legal challenge. They could not, however, have been more badly mistaken. While the reserve clause has never been overturned in a court of law, it would soon be drastically altered at the bargaining table, ultimately leading to a revolution in the way baseball talent is dispersed and revenues are shared in the professional sports industry.

Curt Flood lost the legal battle, but the players ultimately won the war, and are no longer restrained by the reserve clause beyond the first two years of their major league contract. In a series of labor market victories beginning in the wake of the Flood decision in 1972 and continuing through the rest of the century, the players won the right to free agency (i.e. to bargain with any team for their services) after six years of service, escalating pension contributions, salary arbitration (after two to three seasons, depending on their service time), individual contract negotiations with agent representatives, hearing committees for disciplinary actions, reductions in maximum salary cuts, increases in travel money and improved travel conditions, the right to have disputes between players and owners settled by an independent arbitrator, and a limit to the number of times their contract could be assigned to a minor league team. Of course the biggest victory was free agency.

Impact of Free Agency – Salary Gains

The right to bargain with other teams for their services changed the landscape of the industry dramatically. No longer were players shackled to one team forever, subject to the whims of the owner for their salary and status. Now they were free to bargain with any and all teams. The impact on salaries was incredible. The average salary skyrocketed from $45,000 in 1975 to $289,000 in 1983.

Table 4: Maximum and Average MLB Player Salaries by Decade

(real values in 2002 dollars)

Period | Highest salary (nominal) | Highest salary (real) | Year | Player | Team | Average salary (nominal) | Average salary (real) | Notes
1800s | $12,500 | $246,250 | 1892 | King Kelly | Boston NL | $3,054 | $60,163.80 | 22 observations
1900s | $10,000 | $190,000 | 1907 | Honus Wagner | Pittsburgh Pirates | $6,523 | $123,937.00 | 13 observations
1910s | $20,000 | $360,000 | 1913 | Frank Chance | New York Yankees | $2,307 | $41,526.00 | 339 observations
1920s | $80,000 | $717,600 | 1927 | Ty Cobb | Philadelphia Athletics | $6,992 | $72,017.60 | 340 observations
1930s | $84,098.33 | $899,852 | 1930 | Babe Ruth | New York Yankees | $7,748 | $82,903.60 | 210 observations
1940s | $100,000 | $755,000 | 1949 | Joe DiMaggio | New York Yankees | $11,197 | $84,537.35 | Average calculated from 1949 and 1943 seasons plus 139 additional observations
1950s | $125,000 | $772,500 | 1959 | Ted Williams | Boston Red Sox | $12,340 | $76,261.20 | Average estimated from 1949 and 1964 salaries
1960s | $111,000 | $572,164.95 | 1968 | Curt Flood | St. Louis Cardinals | $18,568 | $95,711.34 | 624 observations
1970s | $561,500 | $1,656,215.28 | 1977 | Mike Schmidt | Philadelphia Phillies | $55,802 | $164,595.06 | 2,208 observations
1980s | $2,766,666 | $4,006,895.59 | 1989 | Orel Hershiser, Frank Viola | Dodgers, Twins | $333,686 | $483,269.38 | approx. 6,500 observations
1990s | $11,949,794 | $12,905,777.52 | 1999 | Albert Belle | Baltimore Orioles | $1,160,548 | $1,253,391.84 | approx. 7,000 observations
2000s | $22,000,000 | $22,322,742.55 | 2001 | Alex Rodriguez | Texas Rangers | $2,165,627 | $2,197,397.00 | 2,250 observations

Real values based on 2002 Consumer Price Index.
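The real values in the table follow the standard CPI deflation formula: real = nominal × (CPI in the base year / CPI in the salary year). A minimal sketch, with illustrative index values rather than official BLS figures:

```python
def to_2002_dollars(nominal, cpi_year, cpi_2002=180.0):
    """Convert a nominal dollar amount to 2002 dollars via CPI deflation.

    cpi_2002 here is an illustrative placeholder, not the official BLS
    index value; substitute actual CPI-U figures for real use.
    """
    return nominal * cpi_2002 / cpi_year

# A salary earned when the CPI was half its 2002 level doubles in real terms
print(to_2002_dollars(10_000, 90.0))  # 20000.0
```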

Over the long haul, the changes have been even more dramatic. The average salary increased from $45,000 in 1975 to $2.4 million in 2002, while the minimum salary increased from $6000 to $200,000 and the highest paid player increased from $240,000 to $22 million. This is a 5200% increase in the average salary. Of course, not all of that increase is due to free agency. Revenues increased during this period by nearly 1800% from an average of $6.4 million to $119 million, primarily due to the 2800% increase in television revenue over the same period. Ticket prices increased by 439% while attendance doubled (the number of MLB teams increased from 24 to 30).
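The growth figures above are simple percentage increases; a quick check of the arithmetic, using the dollar figures quoted in the text:

```python
def pct_increase(start, end):
    """Percentage increase from a starting value to an ending value."""
    return (end - start) / start * 100

# Average salary, 1975 vs. 2002: $45,000 -> $2.4 million
salary_growth = pct_increase(45_000, 2_400_000)        # ~5233%, i.e. roughly 5200%

# Average team revenue, 1975 vs. 2002: $6.4 million -> $119 million
revenue_growth = pct_increase(6_400_000, 119_000_000)  # ~1759%, "nearly 1800%"

print(round(salary_growth), round(revenue_growth))  # 5233 1759
```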

Strikes and Lockouts

Miller organized the players and unified them as no one had done before. The first test of their resolve came in 1972, when the owners refused to bargain on pension and salary issues. The players responded by going out on the first league-wide strike in American professional sports history. The strike began during spring training and carried on into the season. The owners finally conceded in early April, after nearly 100 games were lost to the strike. The labor stoppage became the favorite weapon of the players, who would employ it again in 1981, 1985, and 1994. The 1994 strike cancelled the World Series for the first time since 1904 and carried over into the 1995 season. The owners preempted strikes in two other labor disputes, locking out the players in 1976 and 1989. After each work stoppage, the players won the concessions they demanded and fended off attempts by owners to reverse previous player gains, particularly in the areas of free agency and arbitration. From the first strike in 1972 through 1994, every time the labor agreement between the two sides expired, a work stoppage ensued. In August of 2002 that pattern was broken when the two sides agreed to a new labor contract for the first time without a work stoppage.

Catfish Hunter

The first player to become a free agent did so due to a technicality. In 1974 Catfish Hunter, a pitcher for the Oakland Athletics, negotiated a contract with the owner, Charles Finley, which required Finley to make a payment into a trust fund for Hunter on a certain date. When Finley missed the date and then tried to pay Hunter directly instead of honoring the clause, Hunter and Miller filed a complaint charging that the contract should be null and void because Finley had broken it. The case went to an arbitrator, who sided with Hunter and voided the contract, making Hunter a free agent. In a bidding frenzy, Hunter ultimately signed what was then a record contract with the New York Yankees. It set precedents for both its length – five years guaranteed – and its annual salary of $750,000. Prior to the dawning of free agency, it was a rare circumstance for a player to get anything more than a one-year contract, and a guaranteed contract was virtually unheard of. If a player was injured or fell off in performance, an owner would slash his salary or release him and vacate the remainder of his contract.

The End of the Reserve Clause – Messersmith and McNally

The first real test of the reserve clause came in 1975, when, on the advice of Miller, Andy Messersmith played the season without signing a contract. Dave McNally also refused to sign a contract, though he had unofficially retired by that time. Up to this point, the reserve clause meant that a team could renew a player’s contract at its discretion; the only change in the clause since 1879 had been to the maximum amount by which an owner could reduce a player’s salary. In order to test the clause, which allowed teams to maintain contractual rights to players in perpetuity, Messersmith and McNally refused to sign contracts, and their teams automatically renewed their contracts from the previous season, per the reserve clause. The players’ argument was that if no contract was signed, then there was no reserve clause, and Messersmith and McNally would be free to negotiate with any team at the end of the season. Arbitrator Peter Seitz struck down the reserve clause on Dec. 23, 1975, clearing the way for players to become free agents and sell their services to the highest bidder. Messersmith and McNally became the first players to challenge and successfully escape the reserve clause. The baseball labor market changed permanently and dramatically in favor of the players, and has never turned back.

Current Labor Arrangements

The baseball labor market as it exists today is a result of bargaining between owners and players. Owners ultimately conceded the reserve clause and negotiated a short period of exclusivity for a team with a player. The argument they put forward was that the cost of developing players was so high, they needed a window of time when they could recoup those investments. The existing situation allows them six years. A player is bound to his original team for the first six years of his MLB contract, after which he can become a free agent – though some players bargain away that right by signing long-term contracts before the end of their sixth year.

During that six-year period however, players are not bound to the salary whims of the owners. The minimum salary will rise to $300,000 in 2003, there is a 10% maximum salary cut from one year to the next, and after two seasons players are eligible to have their contract decided by an independent arbitrator if they cannot come to an agreement with the team.

Arbitration

After their successful strike in 1972, the players had increased their bargaining position substantially. The next year they claimed a major victory when the owners agreed to a system of salary arbitration for players who did not yet qualify for free agency. Arbitration, won by the players in 1973, has since proved to be one of the costliest concessions the owners ever made. Arbitration requires each side to submit a final offer to an arbitrator, who must then choose one offer or the other. The arbitrator may not compromise between the offers, but must choose one, and once a choice is made, both sides are obligated to accept that contract.

Once eligible for arbitration, a player, while not a free agent, does stand to reap a financial windfall. If a player and owner (realistically, a player’s agent and the owner’s agent – the general manager) cannot agree on a contract, either side may file for arbitration. If the other does not agree to go to arbitration, then the player becomes a free agent, and may bargain with any team. If arbitration is accepted, then both sides are bound to accept the contract awarded by the arbitrator. In practice, most of the contracts are settled before they reach the arbitrator. A player will file for arbitration, both sides will submit their final contract offers to the arbitrator, and then will usually settle somewhere in between the final offers. If they do not settle, then the arbitrator must hear the case and make a decision. Both sides will argue their point, which essentially boils down to comparing the player to other players in the league and their salaries. The arbitrator then decides which of the two final offers is closer to the market value for that player, and picks that one.
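The arbitrator’s decision rule described above (pick whichever final offer is closer to the player’s market value) can be sketched in a few lines; the market-value input is, of course, the arbitrator’s own judgment rather than an observable number:

```python
def final_offer_arbitration(player_offer, team_offer, market_value):
    """Final-offer ("baseball") arbitration: the arbitrator must select one
    of the two submitted offers, whichever is closer to the assessed market
    value. No compromise between the offers is allowed."""
    if abs(player_offer - market_value) <= abs(team_offer - market_value):
        return player_offer
    return team_offer

# Player asks $5.0M, team offers $3.0M, arbitrator assesses value at $4.2M:
print(final_offer_arbitration(5.0, 3.0, 4.2))  # 5.0 (the player's offer wins)
```

Because an extreme offer is likely to lose outright under this rule, both sides have an incentive to submit reasonable figures, which helps explain why most cases settle between the final offers before a hearing.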

Collusion under Ueberroth

The owners, used to nearly a century of one-sided labor negotiations, quickly grew tired of the new economics of the player labor market. They went through a series of labor negotiators, each one faring as poorly as the next, until they hit upon a different solution. Beginning in 1986, under the guidance of commissioner Peter Ueberroth, they tried collusion to stem the increase in player salaries: teams agreed not to bid on one another’s free agents. The strategy worked, for a while. During the next two seasons, player salaries grew at lower rates, and high-profile free agents routinely had difficulty finding anybody interested in their services. The players filed a complaint, charging the owners with violating the labor agreement signed by owners and players in 1981, which prohibited collusive action. They filed separate collusion charges for each of the three seasons from 1985 to 1987, and won each time. The rulings voided the final years of some players’ contracts, awarding those players “second look” free agency status, and levied fines in excess of $280 million on the owners. The result was a return to unfettered free agency for the players, a massive financial windfall for the affected players, a black eye for the owners, and the end of the line for Commissioner Ueberroth.

Table 5:

Average MLB Payroll as a Percentage of Total Team Revenues for Selected Years

Year Percentage
1929 35.3
1933 35.9
1939 32.4
1943 24.8
1946 22.1
1950 17.6
1974 20.5
1977 25.1
1980 39.1
1985 39.7
1988 34.2
1989 31.6
1990 33.4
1991 42.9
1992 50.7
1994 60.5
1997 53.6
2001 54.1

Exploitation Patterns

Economist Andrew Zimbalist calculated the degree of market exploitation for baseball players for the years 1986-89, a decade after free agency began, and during the years of collusion, using a measure of the marginal revenue product of players. The marginal revenue product of a player is a measure of the additional revenue a team receives due to the addition of that player to the team. This is done by calculating the impact of the player on the performance of the team, and the subsequent impact of team performance on total revenue. He found that on average, the degree of exploitation, as measured by the ratio of marginal revenue product to salary, declined each year, from 1.32 in 1986 to 1.01 in 1989. The degree of exploitation, however, was not uniform across players. Not surprisingly, it decreased as players obtained the leverage to bargain. The degree of exploitation was highest for players in their first two years, before they were arbitration eligible, fell for players in the 2-5 year category, between arbitration and free agency, and disappeared altogether for players with six or more years of experience. In fact, for all four years, Zimbalist found that this group of players was overpaid with an average MRP of less than 75% of salary in 1989. No similar study has been done for players before free agency, in part due to the paucity of salary data before this time.
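Zimbalist’s exploitation measure is simply the ratio of a player’s marginal revenue product to his salary; a minimal sketch (estimating MRP itself, via the player’s effect on team performance and performance’s effect on revenue, is the hard part and is not reproduced here):

```python
def exploitation_ratio(mrp, salary):
    """Ratio of a player's marginal revenue product to his salary.
    > 1: the player generates more revenue than he is paid (exploited);
    < 1: the player is paid more than his estimated MRP (overpaid)."""
    return mrp / salary

# Illustrative numbers matching the league-wide averages Zimbalist reports:
# an MRP of $1.32M against a $1.0M salary gives the 1986 figure of 1.32.
print(exploitation_ratio(1.32e6, 1.0e6))  # 1.32
```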

Negotiations under the Reserve Clause

Player contracts have changed dramatically since free agency. Players used to be subject to whatever salary the owner offered. The only recourse for a player was to hold out for a better salary. This strategy seldom worked, because the owner had great influence over the media and usually was able to turn the public against the player, adding another source of pressure on the player to sign for the terms offered by the team. The prospect of no payday at all (a payday that, while less than the player’s MRP, still exceeded his opportunity cost by a fair amount) was usually sufficient to keep most holdouts short. The owner influenced the media because sports reporters were actually paid by the teams in cash or in kind, traveled with them, and enjoyed a relatively luxurious lifestyle for their chosen occupation, one that could be halted by edict of the team at any time. The team controlled media passes and access and therefore had nearly total control over who covered the team. It was a comfortable lifestyle for a reporter, and spreading owner propaganda on occasion was seldom seen as an unacceptable price to pay.

Recent Concerns

The major labor issue in the game has shifted from player exploitation, the cry until free agency was granted, to competitive imbalance. Today, critics of the salary structure point to its impact on the competitive balance of the league as a way of criticizing rising payrolls. Many fans of the game openly pine for a return to “the good old days,” when players played for the love of the game. It should be recognized, however, that the game has always been a business. All that has changed is the amount of money at stake and how it is divided between the employers and their employees.

Suggested Readings

A Club Owner. “The Baseball Trust.” Literary Digest, December 7, 1912.

Burk, Robert F. Much More Than a Game: Players, Owners, and American Baseball since 1921. Chapel Hill: University of North Carolina Press, 2001.

Burk, Robert F. Never Just a Game: Players, Owners, and American Baseball to 1920. Chapel Hill: University of North Carolina Press, 1994.

Dworkin, James B. Owners versus Players: Baseball and Collective Bargaining. Dover, MA: Auburn House, 1981.

Haupert, Michael. Baseball financial database.

Haupert, Michael and Ken Winter. “Pay Ball: Estimating the Profitability of the New York Yankees 1915-37.” Essays in Economic and Business History 21 (2002).

Helyar, John. Lords of the Realm: The Real History of Baseball. New York: Villard Books, 1994.

Korr, Charles. The End of Baseball as We Knew It: The Players Union, 1960-1981. Champaign: University of Illinois Press, 2002.

Kuhn, Bowie. Hardball: The Education of a Baseball Commissioner. New York: Times Books, 1987.

Lehn, Ken. “Property Rights, Risk Sharing, and Player Disability in Major League Baseball.” Journal of Law and Economics 25, no. 2 (October 1982): 273-79.

Lowe, Stephen. The Kid on the Sandlot: Congress and Professional Sports, 1910-1992. Bowling Green: Bowling Green University Press, 1995.

Lowenfish, Lee. “A Tale of Many Cities: The Westward Expansion of Major League Baseball in the 1950s.” Journal of the West 17 (July 1978).

Lowenfish, Lee. “What Were They Really Worth?” The Baseball Research Journal 20 (1991): 81-2.

Lowenfish, Lee. The Imperfect Diamond: A History of Baseball’s Labor Wars. New York: Da Capo Press, 1980.

Miller, Marvin. A Whole Different Ball Game: The Sport and Business of Baseball. New York: Birch Lane Press, 1991.

Noll, Roger G. and Andrew S. Zimbalist, editors. Sports Jobs and Taxes: Economic Impact of Sports Teams and Facilities. Washington, D.C.: Brookings Institution, 1997.

Noll, Roger, editor. Government and the Sports Business. Washington, D.C.: Brookings Institution, 1974.

Okkonen, Mark. The Federal League of 1914-1915: Baseball’s Third Major League. Cleveland: Society of American Baseball Research, 1989.

Orenstein, Joshua B. “The Union Association of 1884: A Glorious Failure.” The Baseball Research Journal 19 (1990): 3-6.

Pearson, Daniel M. Baseball in 1889: Players v Owners. Bowling Green, OH: Bowling Green State University Popular Press, 1993.

Quirk, James. “An Economic Analysis of Team Movements in Professional Sports.” Law and Contemporary Problems 38 (Winter-Spring 1973): 42-66.

Rottenberg, Simon. “The Baseball Players’ Labor Market.” Journal of Political Economy 64, no. 3 (December 1956): 242-60.

Scully, Gerald. The Business of Major League Baseball. Chicago: University of Chicago Press, 1989.

Sherony, Keith, Michael Haupert and Glenn Knowles. “Competitive Balance in Major League Baseball: Back to the Future.” Nine: A Journal of Baseball History & Culture 9, no. 2 (Spring 2001): 225-36.

Sommers, Paul M., editor. Diamonds Are Forever: The Business of Baseball. Washington, D.C.: Brookings Institution, 1992.

Sullivan, Neil J. The Diamond in the Bronx: Yankee Stadium and the Politics of New York. New York: Oxford University Press, 2001.

Sullivan, Neil J. The Diamond Revolution. New York: St. Martin’s Press, 1992.

Sullivan, Neil J. The Dodgers Move West. New York: Oxford University Press, 1987.

Thorn, John and Peter Palmer, editors. Total Baseball. New York: HarperPerennial, 1993.

Voigt, David Q. The League That Failed. Lanham, MD: Scarecrow Press, 1998.

White, G. Edward. Creating the National Pastime: Baseball Transforms Itself, 1903-1953. Princeton: Princeton University Press, 1996.

Wood, Allan. 1918: Babe Ruth and the World Champion Boston Red Sox. New York: Writers Club Press, 2000.

Zimbalist, Andrew. Baseball and Billions. New York: Basic Books, 1992.

Zingg, Paul. “Bitter Victory: The World Series of 1918: A Case Study in Major League Labor-Management Relations.” Nine: A Journal of Baseball History and Social Policy Perspectives 1, no. 2 (Spring 1993): 121-41.

Zweig, Jason. “Wild Pitch: How American Investors Financed the Growth of Baseball.” Friends of Financial History 43 (Summer 1991).

Citation: Haupert, Michael. “The Economic History of Major League Baseball.” EH.Net Encyclopedia, edited by Robert Whaples. December 3, 2007. URL http://eh.net/encyclopedia/the-economic-history-of-major-league-baseball/

Business and Industry in Nazi Germany

Author(s):Kobrak, Christopher
Hansen, Per H.
Nicosia, Francis R.
Huener, Jonathan
Reviewer(s):Ferguson, Thomas

Published by EH.NET (April 2006)


Christopher Kobrak and Per H. Hansen, editors, European Business, Dictatorship, and Political Risk, 1920-1945. New York: Berghahn Books, 2004. xiv + 261 pp. $60 (hardback), ISBN: 1-57181-629-1

and

Francis R. Nicosia and Jonathan Huener, editors, Business and Industry in Nazi Germany. New York: Berghahn Books, 2004. viii + 211 pp. $25 (paperback), ISBN: 1-57181-654-2.

Reviewed for EH.NET by Thomas Ferguson, Department of Political Science, University of Massachusetts, Boston.

There is a mix of both good news and bad news to report about these two works, which are natural complements and can hardly fail to interest many readers.

First the good news. Business and Industry in Nazi Germany contains six essays by some of the best known historians working in the area. Gerald Feldman’s opening essay, “Financial Institutions in Nazi Germany: Reluctant or Willing Collaborators?” draws on his earlier studies of German banks and insurance companies. As before, he is highly critical of analysts whom he believes over-emphasize alignments between the Nazis and big business. This time, however, near the end of his essay, he strikes an oddly discordant note: “When one digs deeply enough, one discovers that financial institutions were part of the network of governmental and private institutions engaged in Germany’s imperial and racial goals” (p.33). This is eerily similar to Fritz Fischer’s basic point.

Harold James departs from roughly the same premises as Feldman. But he does not perceive much “higher unity” between the Nazis and bankers. Instead, his elegantly written study portrays the Third Reich’s money men as glumly anticipating an eventual triumph of the Nazi Party’s “populist wing”: “In conscious or unconscious calculations of how adaptation to the ‘New Germany’ would affect the financial and social standing of bankers, most financiers could only come to the conclusion that whatever happened, they were bound to lose as representatives of a world and a style of business that the new regime had declared to be obsolete and discredited” (p. 61).

Peter Hayes’s brief but fluently written essay makes similar points. He compares I.G. Farben and Degussa, on which he has written separate full length studies. While he is clear that neither scrupled at profiting from the Holocaust and acknowledges personal “inadequacies” of the two concerns’ leaders, he asserts that “well before … 1937″ Nazi dominance left businesses “almost incapable of asserting their own interests against those of the state” (p. 70).

Michael Thad Allen contributes a striking essay on the SS’s organization of concentration camps. In contrast to analysts who have seen these ventures as driven by a search for profit, he disparages purely “business” motivations in favor of more ideological considerations. But he is explicit that the explosive growth in the use of slave laborers by German corporations came at the instigation of the companies, not the SS, as employers felt increasingly squeezed by wartime full-employment pressures. His conclusion is provocative indeed and merits a longer look by comparative historians of slavery and serfdom, who have for decades debated the economic efficiency of violence and brutality. Mass murder, he argues, was essential to the emerging system of wartime German labor control: “Slavery was directly tied to the Holocaust and could not have functioned without the constant influx and ‘liquidation’ of prisoners” (p. 99).

The final case study is by Simon Reich. He considers the relation between the Nazi regime and foreign-owned corporations, in particular, the Ford Motor Co. He relates his experiences as an advisor to the American auto concern, when in response to several lawsuits filed in the 1990s, it sought to pin down its responsibility for actions of its German subsidiary after the Nazis came to power and, especially, during World War II. In the spirit of work he published before he went to work for Ford, Reich argues that the concern was no more than a secondary player in the German auto industry and was not perceived by the Nazis as a “German” company. As a consequence, before the war the regime actively discriminated against the company. Once the U.S. entered the war, he claims, Ford’s German executives ran the company in the interests of Germany, without direction from Detroit, so that Ford USA could not reasonably be held responsible for the sometimes appalling practices of its subsidiary.

The viewpoint of European Business, Dictatorship, and Political Risk is more panoramic. Consciously borrowing from the rhetoric of contemporary finance, it aims to explore how firms in the inter-war period dealt with “political risk.” The introduction, by the editors and Christopher Kopper, explains that the essays take as “their starting point the perceptions of business people about their political circumstances and the scope of business reaction to its changing and often hostile political environment” (p. x).

The extent to which the various essays actually do this varies widely. Mira Wilkins’s survey of the problems that faced firms in the interwar period says little about actual perceptions, but is of great interest and illustrated with a wealth of factual detail. Feldman contributes a well-documented essay on collaboration between German and Italian insurers, while Hayes discusses Degussa’s successful attempt to develop homegrown, echt Deutsch carbon black, a priority for the Reich. Jana Wüstenhagen outlines the travails and stratagems of Schering in Argentina, while Martin Dean examines how various “multinational” Jewish firms succeeded and failed at transferring capital abroad in the face of German exchange controls.

Wilfried Feldenkirchen’s essay on Siemens in Eastern Europe touches hardly at all on what that concern’s management thought it was doing or tried to do; instead it concentrates on the external details of the German giant’s business involvements in that tumultuous region. Lars Heide’s discussion of IBM is more ambitious. While it sniffs at previous studies of IBM, including Edwin Black’s IBM and the Holocaust, in fact it offers few specific criticisms of any of them. Once again, however, Heide is more impressed by what he claims was the predominant position of the Reich vis-à-vis the American giant. IBM, he argues, “was obliged to leave its business well managed by Germans and its machines that proved crucial in the management of German warfare under the sovereign control of the German government” (p. 173).

Kurt Jacobsen’s essay on how the Great Northern Telegraph Company negotiated with the Soviet Union and Japan and, more broadly, strove to stay ahead of governments intent on controlling strategic communications throws much light on an unheralded but important corner of international business history. Because of its close connection to the much discussed topic of “appeasement,” Neil Forbes’s analysis of why most British businesses sought to work with the Nazi government is one of the book’s most interesting essays. It benefits extensively from its author’s previous work in the area. Edward Kubu, Jiri Novotny, and Jiri Sousa collaborate on a chilling, well documented account of how German businesses and the Reich worked together to swallow most major multinational businesses in Czechoslovakia after 1938, while Luciano Segreto outlines the regulation of business under Fascism in Italy.

Now for the bad news. Business and Industry in Nazi Germany contains a final essay by Volker Berghahn on “Writing the History of Business in the Third Reich: Past Achievements and Future Directions.” His well-crafted piece revisits the much-debated question of “the political responsibility of German businessmen under Nazism” (p. 129). His picture of the Nazis’ relation to big business differs radically from those in the other essays in these books: he suggests that a substantial number of big business leaders, in fact, enthusiastically backed the Nazis. While he continues to credit Henry Turner’s claims to have disproved what might be termed the old “Nuremberg” theory of the Nazi seizure of power (p. 136), Berghahn bluntly declares that Hitler “obtained the support of both the officer corps and business soon after his seizure of power” (p. 142). More intriguingly, he goes on to propose that by “1935-1936” not only were the Jewish business leaders gone, but “an older generation of non-Jewish entrepreneurs and senior managers,” who might have been hostile to the regime, were headed into “retirement” or “inner immigration.” Their replacements were, in many cases, he suggests, men who “envisioned the future of the world in terms of blocs or empires” and whose “dynamism and energy, as Lutz Schwerin von Krosigk has put it, ‘degenerated into brutality and who would not be impressed by anything'” (p. 143). While he does wonder whether some wartime remarks by Göring might have raised the blood pressure of this new generation of big business leaders, his discussion points to a major qualification of conventional wisdom.

A book review can hardly pursue the far-reaching implications of Berghahn’s proposals. About all one can say is that reopening the question of the “mentalities” (the debt to Annales is acknowledged) of German big business is sure to lead in interesting directions. Given the plain evidence of support from the whole German right, including big business circles, for the German government’s disastrous scheme for a customs union with Austria in 1931, it cannot be very long before someone notices that musings about “blocs and empires” long antedated the crisis Berghahn perceives in 1935-36.[1] It may be that Friedrich Meinecke’s old suggestion (revived by Dirk Stegmann) that “Pan-Germanism” constituted a critical historical link between pre-World War I and post-World War I German expansionism was right all along.[2] At the dawn of the twenty-first century, however, it is important to note that studies of perceptions of “political risk” or of how the Nazis related to big business can now incorporate less elusive forms of evidence. It is now widely recognized that stock prices incorporate a great deal of information about economics and politics. While the thin, restricted German stock markets of the later thirties may throw up barriers, it may be possible to check some of the claims advanced in these books with the methodology of event analysis. I am skeptical, for example, that most German bankers really felt themselves to be trudging through the Valley of the Shadow of Death under the Nazis. The Night of the Long Knives, after all, eliminated or cowed the noisiest parts of the Party’s “populist” wing. In addition, the regime duly reprivatized the banks that in 1931 had fallen like rotten apples into state hands, while prospects of profits from “Aryanization” (documented not least by James himself) were beguiling financiers.
But James might be right, and if he is, it should probably show in the stock prices of banks relative to other sectors of big business, since his argument implies a large effect unique to finance, not business as a whole. Until the work is done, we will not know for sure, but readers of Berghahn’s essay will not, perhaps, be surprised to learn that the behavior of the German stock market during the Nazis’ seizure of power does not appear to be consistent with the Turner thesis that underpins key claims advanced by many essays in these two books.[3]

Notes:

1. On the pressure for expansion in the 1931 crisis, see especially Thomas Ferguson and Peter Temin, “Made in Germany: The German Currency Crisis of July 1931,” Research in Economic History 21 (2003): 1-53.

2. See the discussion in Dirk Stegmann, “Zum Verhaeltnis Von Grossindustrie und Nationalsozialismus, 1930-1933,” Archiv fuer Sozialgeschichte 13 (1973): 402-03.

3. Thomas Ferguson and Joachim Voth, “Betting on Hitler: The Value of Political Connections in Nazi Germany,” London: Centre for Economic Policy Research, 2005, Discussion Paper 5021. A revised version is nearing completion.

Thomas Ferguson is Professor of Political Science at the University of Massachusetts, Boston. His publications include Golden Rule: The Investment Theory of Party Competition and the Logic of Money-Driven Political Systems (University of Chicago Press, 1995).

Subject(s):Economic Planning and Policy
Geographic Area(s):Europe
Time Period(s):20th Century: Pre WWII

International Financial History in the Twentieth Century: System and Anarchy

Author(s):Flandreau, Marc
Holtfrerich, Carl-Ludwig
James, Harold
Reviewer(s):Mason, Joseph R.

Published by EH.NET (March 2004)


Marc Flandreau, Carl-Ludwig Holtfrerich, and Harold James, editors, International Financial History in the Twentieth Century: System and Anarchy. Cambridge: Cambridge University Press, 2003. x + 278 pp. $60 (cloth), ISBN 0-521-81995-4.

Reviewed for EH.NET by Joseph R. Mason, Department of Finance, Drexel University.

This book is not always fun to read, but it is fascinating. In ten chapters, the essays lay out some of the important principles underlying path dependence between nineteenth- and twentieth-century international financial institutions. The essays are arranged in an order that demonstrates first the sophistication of late nineteenth-century institutions and then the relative backwardness of what many consider to be sophisticated modern-day institutions. The essays weave in and out of different academic disciplines, combining history, history of economic thought, and political economy in a way that offers a unique perspective on path dependence.

Flandreau and James sum up the book’s argument succinctly in the introduction. There, they assert:

… modern advocates of monetary reform are just the latest offspring of a long and venerable tradition dating back to the nineteenth century…. The past has lessons that are relatively cheap to learn, and, as we shall see, they are telling and compelling.


Put simply, these lessons are: (1) attempts at international coordination or control rarely work; (2) such attempts are most unstable when they are politicized as a result of unstable international politics; (3) the markets are themselves possible only on the basis of powerful institutional, political, and social forces (pp. 2-3).

Those are important and powerful lessons. I feel, however, that Flandreau and James reach a bit too far when they attempt to draw a globalization analogy out of the history of international monetary order. The authors characterize the interwar period as the bottom of a U-shaped trend of globalization. The present set of essays, however, seems to show that the extremes of the U-shape were lower than previously believed, i.e., that there was less coordination in the nineteenth century and today than was previously believed. Hence, it may be more appropriate to characterize the interwar period as one of transition toward a new hegemony or a new coordination mechanism rather than a low point in globalization per se.

Nonetheless the set of essays contained in the book is extraordinary. Chapter One, written by Marc Flandreau, describes how, under the previously perceived order of the classical gold standard, there existed significant sovereign risk that could detract from monetary order. Global financial centers grew immensely after 1840. This growth was due to better capabilities of screening borrowers and pricing risk, along with faster information flows arising from the growth of the telegraph, which was capped by the completion of cables between London and the Continent (1852) and Europe and America (1866).

Evidence for substantial sovereign risk in the era comes from records of the Service des Etudes Financières (SEF), established as part of Crédit Lyonnais in 1871. The SEF was initially established to organize the vast amounts of information and data into a reference unit accessible to those making credit decisions for the bank. While from its inception in 1871 through about 1879 the SEF was underfunded and understaffed, and as a result only marginally effective at this task, by 1889 the SEF had achieved notoriety as the premier think tank for research into credit risk and international affairs (which at the time were highly correlated).

During its heyday, the SEF produced country ratings that relied critically upon adjusting sovereigns’ own reports of fiscal health for various known accounting irregularities and fudges, and for probabilities of sovereign risk of default. Hence, Flandreau effectively demonstrates that gold standard discipline was not absolute, nor did contemporary investors believe that was the case. Furthermore, disciplined investors routinely estimated default risk during the gold standard era in ways that are strikingly similar to modern rating mechanisms.

Still, there was ample room for investment outside the government sector during the gold standard era. That is the subject of Chapter Two, written by Mira Wilkins, regarding foreign direct investment between 1880 and 1914. Wilkins has poured a phenomenal amount of work into first appropriately defining, and then setting about to estimate a concept akin to what we now call foreign direct investment. A great deal of difficulty stems from not only utilizing vastly different corporate forms in the gold standard era, but also, of course, in the paucity of data from that era. At the end of the chapter Wilkins offers seven conclusions, the seventh of which I found the most intriguing for the purpose of the book: although “… the gold standard reduced the risks of losses based on currency fluctuations; it did not reduce commercial or political risks.” Hence, without nationalized industry business faced substantial risk even in the face of gold standard discipline.

Transitional essays begin with Stephen Schuker’s chapter on the Gold-Exchange Standard. Schuker’s essay offers intriguing insights into the politics of the interwar era and illustrates how resistance to the costs of moving the monetary center from Britain to America resulted in trade blocs that would later define the boundaries of World War II. This essay also points out the many problems inherent in building monetary order on the basis of “conference diplomacy.” Hence the primary difference in stability across the gold and gold-exchange standard eras was not one of inherent discipline, as countries routinely broke the “rules of the game” in both eras, but that the pressures left by WWI had already changed the rules in ways that economists at the time may not have recognized.

Kenneth Mouré continues the interwar theme in his chapter, which extends his book, Managing the Franc Poincaré (1991), back in time prior to 1928. Mouré describes how, during the period 1914-1928, French politicians locked themselves into a policy of restoring and maintaining the franc’s link to gold.

Beginning as early as 1915, France urged its citizens to exchange gold for paper bank notes “without losing any part of their savings, without running any risk, without having to pay more for anything they wish to buy” (p. 97). The government even mobilized the Catholic Church, appealing to Christian principles, to encourage the exchange. By the end of WWI and lasting until 1928, however, France was in no position to exchange the paper back into gold as promised. For almost fourteen years, then, French citizens were told that France would reestablish her link to gold. Hence, France had little choice but to remain tied to gold once conversion was complete, even in the face of the Great Depression that gripped the world a short while later.

Robert Skidelsky’s chapter begins two essays devoted to the Bretton Woods era. Skidelsky skillfully describes how many of Keynes’ most important contributions owed to his experiences as a British citizen during and after the Great Depression. Hence, Keynes usually wrote from a vantage point of reestablishing Britain’s hegemony. Because the U.S. did not seem to want the role, Keynes was often of the opinion that the U.S. should contribute significant sums to attain this goal. A reluctant U.S., however, was not forthcoming until a shared threat, in the form of the Cold War, motivated it to take a leading role in the New World Order.

Jakob Tanner, on the other hand, describes how difficult it was for the neutral countries, including Sweden, Switzerland, Portugal, Spain, and Turkey, to play any part in helping shape that New World Order. Since the neutrals did not help win the war, and in fact may have in some ways interfered with victory, they were not looked kindly upon in the immediate post-war era. Hence, these countries were not invited to Bretton Woods, and could only join the arrangements in 1946. Those countries, however, being small and relying substantially upon foreign exports for economic growth, were acutely affected by the outcome at Bretton Woods.

The next two chapters deal with issues related to German reconstruction. The chapter by Charles Kindleberger and Taylor Ostrander analyzes the roots of the 1948 German monetary reform. This essay is fascinating on a number of different levels. First it shows just how costly and difficult managing a defeated nation’s postwar economy can be. Myriad resources were devoted to keeping order, planning succession governments, and establishing new monetary arrangements, all the while fighting inflationary pressures and, toward the end of the period, the Cold War. Second, and central to the essay provided here, the essay shows just how difficult it is to reestablish a currency rate that balances inflation, trade, and growth in a volatile country under military occupation. The main point is that the occupying forces often had to take steps that were impossible for an infant government to impose without losing credibility to an extent that insurrection or even war might ensue.

The chapter by Werner Abelshauser builds upon Kindleberger and Ostrander, pointing out that world-class wars lead to world-class financial commitments long after the battlefields are quiet. Abelshauser demonstrates how military expenditures play a key role in international financial relations. These expenditures began with costs of the German occupation of around one billion dollars per year lasting into the 1960s (chiefly borne by the U.S. and U.K.). By 1960, the costs of the Cold War, Korea, and NATO nuclear armaments forced the U.S. and U.K. out of Germany and into exorbitant spending programs devoted to high-tech weaponry. Those weapons programs led to diplomatic agreements (and disagreements) about how to spread the financial burden of defense and to broader monetary coordination.

Eric Helleiner’s chapter on global money poses the question of whether the world is moving toward or away from a global currency. Some have suggested that the gold standard represented a means by which regional currencies were aggregated to national currencies. National currencies based on the gold standard were thought to be uniform, leading toward a global monetary standard. However, earlier essays revealed that the “rules of the game” in the gold standard era were often violated. Furthermore, once nations took control of their moneys they quickly used coins and currencies as nationalist tools, reflecting national icons, traditions, and pride. Although such movements may be taken as barriers to a global money standard, today a number of nations operate on the basis of substantial dollar-denominated trade. Hence, the market may in fact be driving the world to a de facto monetary standard without central coordination. If that is indeed the case, Friedrich Hayek would be pleased.

The last chapter, by Louis Pauly, characterizes twentieth century international relations not by the absence of gold standard “rules of the game,” (which many have argued were often absent in the nineteenth century anyway) but by the presence of a new means of coordinating monetary arrangements: “conference diplomacy.” Beginning with the League of Nations in the early 1920s, diplomats convened large assemblies of “the right sort of people” who could think intelligently about world economic difficulties and settle on arrangements to ameliorate those difficulties.

These early conferences are the basis of the institutions we know today, GATT, the IMF, the World Bank, etc. But while the minds have changed, the important issues of the day are remarkably similar to those considered in the earliest conventions. Pauly attributes this constancy to the philosophical nature of the debate. It is not economic principles that are being debated, but principles of “contestable markets, efficiency, and fairness” (pp. 254-5). In fact, Pauly notes that even the conclusions of the 1997 WTO meeting of the world’s leading trade ministers — we hereby create a working group, “to study issues raised by Members relating to the interactions between trade and competition policy, including anti-competitive practices, in order to identify any areas that may merit further consideration,” — are uncomfortably similar to the final goals of the Geneva Conference of 1927. Hence Pauly ends his essay with the perhaps disturbing insight:

We do not need to rediscover as the League of Nations did that the important question concerning the universal evolution of deep structural standards is not “efficient and fair for what,” but “efficient and fair for whom.” Symmetry in the distribution of the adjustment burdens associated with global economic interdependence was a key principle of the Bretton Woods system, albeit one honored mainly in the breach. In the post-Bretton Woods environment, it remained a normative ideal. It would seem wise to bring that principle back to center stage before accelerating the movement to articulate and enforce international standards of industrial organization and business practice (pp. 262-3).


Of course, much of the detail on monetary arrangements found in this book is described in Barry Eichengreen’s Globalizing Capital (1996), but that type of detail is not the main contribution. In my opinion, the main contribution is in drawing analogies in history and politics that can contribute perspective on the institutions of the past and help guide decisions in the future. In that regard, I think nearly every essay in the collection has succeeded.

Joseph Mason is the author of numerous articles including, “Do Lender of Last Resort Policies Matter? The Effects of Reconstruction Finance Corporation Assistance to Banks during the Great Depression,” Journal of Financial Services Research, August 2001, pp. 77-95. Find out more at eh.net/Clio/index-MasonResearch.html.

Subject(s):Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s):General, International, or Comparative
Time Period(s):20th Century: WWII and post-WWII

To Wire the World: Perry M. Collins and the North Pacific Telegraph Expedition

Author(s):Dwyer, John B.
Reviewer(s):Nonnenmacher, Tomas

Published by EH.NET (January 2002)

John B. Dwyer, To Wire the World: Perry M. Collins and the North Pacific Telegraph Expedition. Westport, CT: Praeger, 2001. xv + 183 pp. $68 (cloth), ISBN: 0-275-96755-7.

Reviewed for EH.NET by Tomas Nonnenmacher, Department of Economics, Allegheny College.

This book is an account of the construction of Western Union's failed Russian-American telegraph (RAT). The RAT was Western Union's attempt to capture the very lucrative transatlantic telegraph business. Beginning in 1857, Cyrus Field made several attempts at connecting North America with Europe using a submarine telegraph, but he was unsuccessful until 1866. Prior to that success, Western Union executives had anticipated the possibility that a transatlantic cable would never succeed, making the daunting and expensive RAT worth an attempt.

Perry Collins, who traveled across Russia in the mid-1850s to gauge the trade possibilities along the Amur River, conceived of the idea of the RAT. Collins was enamored with the potential of the Amur and inspired by the writings of D.I. Romanoff, a Russian telegrapher who was designated to construct a telegraph line along the Amur. Collins' plans for trade grew to encompass an American telegraph connection with Europe through Russia. The proposed line connected Sacramento with New Westminster, went up the Fraser River, crossed to the Yukon River, traveled down its length to St. Michael, crossed the Bering Strait, and ultimately linked the coastal cities of Gizhiga, Okhotsk, and Nicolayevsk. From there, it would connect with a 7,000-mile-long Russian line from St. Petersburg. Although much longer than the transatlantic route, the RAT would use submarine telegraphy only over short distances, a true cost savings.

Dwyer, a military historian, builds his story largely from the diaries, reports, and memoirs of men engaged in building the various sections of the line. It is a story about individuals rather than a study of the RAT's place in the broader history of either the telegraph or the development of a worldwide communication network. Little is said about the technology used, and little analysis of executive decision-making at Western Union is offered. A more succinct account of the RAT that places the venture in its broader historical context is available elsewhere, but Dwyer offers a "who did what and when" narrative of the undertaking that is unobtainable elsewhere. [1] The book's nine chapters are accompanied by six photos, seven drawings, and four maps. More detail in the maps would have been helpful to a reader unfamiliar with the geography of Alaska and eastern Siberia. I recommend having an atlas handy.

The first chapter covers Perry Collins' early forays into Siberia, his courting of the federal government for aid, the agreements made with the Russians and British for rights of way, the support of Western Union, and the early planning of the project. Chapters 2 through 4 are histories of the U.S. military corps, Western Union's telegraph "army," and Western Union's telegraph "navy." Each chapter includes background information on the individuals engaged in the venture; for example, Charles Bulkley, a former superintendent of military telegraphs, was Engineer in Chief of the RAT, and Captain Charles Scammon, a Revenue Marine officer, was the Chief of Marine for the expedition. Dwyer also offers a brief history of some of the ships used, such as the Nightingale, the flagship of the RAT, which had previously transported gold, tea, and slaves.

The next four chapters each cover one section of the telegraph line: British Columbia, the Bering Strait, Russian America, and Siberia. The line was ultimately completed through Quesnel, British Columbia and then north to the Skeena River. The other sections of the line were partially surveyed and partially constructed but were never integrated into the larger telegraph network. Dwyer tells us, for instance, that in Siberia the crew had "surveyed the entire 1,500-mile-line route from the Amur River to the Bering Strait, prepared 15,000 telegraph poles, cut fifteen miles of road, and built fifty station houses" (p. 151).

Western Union spent $3 million on the RAT, of which only a small portion would ever be recouped. What were the ultimate payoffs of the venture? The RAT focused the attention of the federal government, especially Secretary of State Seward, on Russian America. This interest generated perhaps the most lasting effect of the project: the purchase of Alaska for $7.2 million by the United States in October 1867. The project also led to the surveying and opening of portions of Alaska and British Columbia. Ultimately, Western Union was quick to abandon the RAT once it was clear that the Atlantic cable would work. Whether the undertaking could ever have been technologically or financially successful had Field's transatlantic line not worked remains an unanswered question.

[1] See Chapter 29 of Robert Luther Thompson, Wiring a Continent: The History of the Telegraph Industry in the United States, 1832-1866. Princeton: Princeton University Press, 1947.

Tomas Nonnenmacher is Assistant Professor of Economics at Allegheny College, Meadville, PA. He is the author of "State Promotion and Regulation of the Telegraph Industry, 1845-1860" (Journal of Economic History, March 2001).

Subject(s):Transport and Distribution, Energy, and Other Services
Geographic Area(s):North America
Time Period(s):19th Century

The People’s Network: The Political Economy of the Telephone in the Gilded Age

Author(s): MacDougall, Robert
Reviewer(s):Hochfelder, David

Published by EH.Net (July 2014)

Robert MacDougall, The People’s Network: The Political Economy of the Telephone in the Gilded Age. Philadelphia: University of Pennsylvania Press, 2014. v + 332 pp. $55 (hardcover), ISBN: 978-0-8122-4569-1.

Reviewed for EH.Net by David Hochfelder, Department of History, University at Albany, SUNY.

In 1908, G.W.H. Kemper, a prominent resident of Muncie, Indiana, had two telephones. One telephone connected him to the local Bell affiliate, which in turn connected him to a huge network encompassing four million telephones east of the Rocky Mountains. The other telephone, leased from a so-called “Independent” company, connected him to only about 1,500 telephones in and around Muncie. Robert MacDougall wants to understand what this turn-of-the-century competition between the two systems meant for U.S. and Canadian history. Doing so allows MacDougall to explore a key insight knitting together business history and the history of technology — that “the most important fact about electrical communication” in the nineteenth and early twentieth centuries was not “the separation of communication and transportation, but the marriage of communication to capital” (p. 62).

This is an excellent book for several reasons. As the title suggests, it is foremost about the political economy of the telephone. As such, it is a model of how to write the history of a technology, particularly a technology as it matures and coalesces around competing visions and organizational structures. Robert MacDougall’s book weaves together corporate strategy, regulation (from municipal to federal levels of the state), the issue of local versus central control, and the scope and influence of consumers’ choices. The heart of MacDougall’s story is the battle between the Bell System and the Independents in the United States and Canada. This battle took place between the expiration of Alexander Graham Bell’s key telephony patents in the mid-1890s and about 1920 when the Bell System accepted state and federal regulation in the U.S. in exchange for a de facto monopoly of the nation’s telephone network.

At one level, this is a story about industrial competition. At a deeper level, it reveals competing visions of an important technology, the social role that it ought to play. MacDougall shows that the Bell System and the Independents envisioned the telephone in far different ways. Bell, especially under Theodore Vail, president of AT&T between 1907 and 1919, sought to build a unified telecommunications network that spanned the United States. Bell Canada espoused a different vision, that the telephone ought to remain an expensive urban medium primarily used for business purposes. Both Bell systems shared the ideology that the telephone industry ought to be controlled by centralized, national corporations. On the other hand, the Independents described the Bell System as a grasping octopus that wanted a stranglehold over the nation’s communications. The Independents offered instead a vision of the telephone as a people’s network that enhanced local ties and preserved community autonomy. In the United States, MacDougall claims that the Independents’ vision for the telephone “descended from a civic understanding of communication that went back to the American Revolution,” that “free and open communications were a basic ingredient of democracy” (p. 5). On a more mundane level, the Independents encouraged social uses of the telephone — like gossiping and banjo-playing — that the Bell System actively discouraged at the time.

This book is also a comparative history of the U.S. and Canada. MacDougall (Associate Professor of History and Associate Director of the Centre for American Studies at Western University in London, Ontario) is in an excellent position to write such a history. MacDougall focuses on Muncie, Indiana, and Kingston, Ontario — towns that were very similar at the turn of the twentieth century. In Canada, Bell’s fundamental patents were overturned in 1885, about a decade before they expired in the U.S. Thus, the opportunity existed in both countries to set up competing telephone companies. However, national differences were important. In the United States, the Bell System promoted a vision of universal service across the continent. Bell Canada, on the other hand, contented itself with serving primarily urban businessmen. It even neglected Francophone Canada. Another significant difference was the level of government at which regulation occurred. In the U.S., city governments and state public service commissions took the lead in regulating rates, terms of service, and franchise agreements. In Canada, cities had little regulatory oversight over the telephone, leaving it largely up to the federal government.

Finally, this book is a shrewd analysis of how history gets produced. MacDougall notes that as a technology like the telephone becomes commonplace, the history of the choices made by managers and consumers “has a curious way of disappearing from our memories.” As a technology “becomes more familiar, it recedes from our attention” (pp. 3-4). Another factor was at work in obscuring the history of telephony — AT&T’s active shaping of that history. Paraphrasing Winston Churchill, MacDougall stresses that if history “has been kind to AT&T,” it is “because AT&T wrote it” (p. 3). AT&T’s shaping of its own history had two effects. It made AT&T’s victory over the Independents seem inevitable and beneficial. Driven by the logic of economies of scale and engineering efficiency, AT&T’s vision of universal service seemed to be a natural and progressive evolution of modern industry. More deeply, AT&T’s history-making was vital “to legitimize the new nation-spanning corporation” and to convince Americans of “the desirability and inevitability of national integration through commerce” (p. 17).

Thanks to AT&T’s shaping of its own history, “it is difficult for us to imagine alternatives” (p. 258) to the trajectory of the telephone industry in both countries. Yet we must acknowledge that alternatives existed. The ultimate victory of the Bell System was not due to inherent technological attributes of the telephone or abstract economic forces, but was politically and socially constructed. Today, MacDougall concludes, we stand at a similar crossroads. Like our forebears a century ago, we “have the opportunity and responsibility” to shape “the future of our communications networks.” As American telecommunication industry lobbyists and federal regulators grapple with the issue of net neutrality, MacDougall’s book shows that “the ways we communicate with one another are, or should be, at the very center of political debate” (p. 267).

David Hochfelder is the author of The Telegraph in America: 1832–1920 (2012) and is presently working on a book on the history of thrift in America. He is also part of a team building a website to reconstruct the 100-acre neighborhood of Albany, NY demolished to build the Empire State Plaza capitol complex.

Copyright (c) 2014 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (July 2014). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Business History
Geographic Area(s):North America
Time Period(s):19th Century
20th Century: Pre WWII