EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

The Economic History of the International Film Industry

Gerben Bakker, University of Essex

Introduction

Like other major innovations such as the automobile, electricity, chemicals and the airplane, cinema emerged in most Western countries at the same time. As the first form of industrialized mass-entertainment, it was all-pervasive. From the 1910s onwards, each year billions of cinema-tickets were sold and consumers who did not regularly visit the cinema became a minority. In Italy, today hardly significant in international entertainment, the film industry was the fourth-largest export industry before the First World War. In the depression-struck U.S., film was the tenth most profitable industry, and in 1930s France it was the fastest-growing industry, followed by paper and electricity, while in Britain the number of cinema-tickets sold rose to almost one billion a year (Bakker 2001b). Despite this economic significance, despite its rapid emergence and growth, despite its pronounced effect on the everyday life of consumers, and despite its importance as an early case of the industrialization of services, the economic history of the film industry has hardly been examined.

This article limits itself exclusively to the economic development of the industry. It discusses just a few countries, mainly the U.S., Britain and France, and those only to investigate the economic issues it addresses, not to give complete histories of the film industries in those countries. Given the nature of an encyclopedia article, this entry cannot do justice to developments in each and every country. It also limits itself to the evolution of the Western film industry, because this has been and still is the largest film industry in the world in revenue terms, although that may well change in the future.

Before Cinema

In the late eighteenth century most consumers enjoyed their entertainment in an informal, haphazard and often non-commercial way. When making a trip they could suddenly meet a roadside entertainer, and their villages were often visited by traveling showmen, clowns and troubadours. Seasonal fairs attracted a large variety of musicians, magicians, dancers, fortune-tellers and sword-swallowers. Only a few large cities harbored legitimate theaters, strictly regulated by the local and national rulers. This world was torn apart in two stages.

First, most Western countries started to deregulate their entertainment industries, enabling many more entrepreneurs to enter the business and make far larger investments, for example in circuits of fixed stone theaters. The U.S. was the first, liberalizing in the late eighteenth century. Most European countries followed during the nineteenth century: Britain, for example, deregulated in the mid-1840s, and France in the late 1860s. As a result, commercial, formalized and standardized live entertainment emerged that destroyed a fair part of traditional entertainment. The combined effect of liberalization, innovation and changes in business organization made the industry grow rapidly throughout the nineteenth century, and integrated local and regional entertainment markets into national ones. By the end of the nineteenth century, integrated national entertainment industries and markets had maximized the productivity attainable through process innovations. Creative inputs, for example, circulated swiftly along the venues – often in dedicated trains – coordinated by centralized booking offices, maximizing capital and labor utilization.

At the end of the nineteenth century, in the era of the second industrial revolution, falling working hours, rising disposable income, increasing urbanization, rapidly expanding transport networks and strong population growth resulted in a sharp rise in the demand for entertainment. The effect of this boom was further rapid growth of live entertainment through process innovations. At the turn of the century, the production possibilities of the existing industry configuration were fully realized and further innovation within the existing live-entertainment industry could only increase productivity incrementally.

At this moment, in a second stage, cinema emerged and in its turn destroyed this world, by industrializing it into the modern world of automated, standardized, tradable mass-entertainment, integrating the national entertainment markets into an international one.

Technological Origins

In the early 1890s, Thomas Edison introduced the Kinetograph camera and the Kinetoscope, which enabled the shooting of films and their playback in coin-operated machines for individual viewing. In the mid-1890s, the Lumière brothers added projection to the invention and started to show films in theater-like settings. Cinema reconfigured different technologies that all were available from the late 1880s onwards: photography (1830s), taking negative pictures and printing positives (1880s), roll films (1850s), celluloid (1868), high-sensitivity photographic emulsion (late 1880s), projection (1645) and movement dissection/persistence of vision (1872).

After the preconditions for motion pictures had been established, cinema technology itself was invented. Already in 1860/1861 patents were filed for viewing and projecting motion pictures, but not for the taking of pictures. The scientist Étienne-Jules Marey completed the first working model of a film camera in 1888 in Paris. Edison visited Georges Demeney in 1888 and saw his films. In 1891, Edison filed an American patent for a film camera, which had a different moving mechanism than the Marey camera. In 1890, the Englishman Friese-Greene presented a working camera to a group of enthusiasts. In 1893 the Frenchman Demeney filed a patent for a camera. Finally, the Lumière brothers filed a patent for their type of camera and for projection in February 1895. In December of that year they gave the first projection for a paying audience. They were followed in February 1896 by the Englishman Robert W. Paul. Paul also invented the ‘Maltese cross,’ a device still used in film projectors today: it produces the intermittent motion that holds each frame steady in the gate during the interval between exposures (Michaelis 1958; Musser 1990: 65-67; Low and Manvell 1948).

Three characteristics stand out in this innovation process. First, it was an international process of invention, taking place in several countries at the same time, with inventors building upon and improving each other’s inventions. This connects to Joel Mokyr’s notion that in the nineteenth century communication became increasingly important to innovation, and many innovations depended on international communication between inventors (Mokyr 1990: 123-124). Second, it was what Mokyr calls a typical nineteenth-century invention, in that it was a smart combination of many existing technologies; many different innovations in the technologies it combined had been necessary to make the innovation of cinema possible. Third, cinema was a major innovation in the sense that it was quickly and universally adopted throughout the Western world – more quickly than the steam engine, the railroad or the steamship.

The Emergence of Cinema

For about the first ten years of its existence, cinema in the United States and elsewhere was mainly a trick and a gadget. Before 1896 the coin-operated Kinetoscope of Edison was present at fairs and in entertainment venues. Spectators had to drop a coin into the machine and peek through an eyepiece to see the film. The first projections, from 1896 onwards, attracted large audiences. Lumière had a group of operators who traveled around the world with the cinematograph and showed the pictures in theaters. After a few years films became a part of the program in vaudeville and sometimes in theater as well. At the same time traveling cinema emerged: cinemas which traveled around with a tent or mobile theater and set up shop for a short time in towns and villages. These differed from the Lumière operators and others in that they catered for general, popular audiences, while the former were more upscale parts of theater programs, or a special program for the bourgeoisie (Musser 1990: 140, 299, 417-20).

This whole era, which in the U.S. lasted until about 1905, was a time in which cinema seemed just one of many new fashions, and it was not at all certain that it would persist rather than be forgotten or marginalized quickly, as happened to the contemporaneous boom in skating rinks and bowling alleys. This changed when Nickelodeons, fixed cinemas with a few hundred seats, emerged and quickly spread all over the country between 1905 and 1907. From this time onwards cinema changed into an industry in its own right, distinct from other entertainments, since it had its own buildings and its own advertising. The emergence of fixed cinemas coincided with a huge growth phase in the business in general; film production increased greatly, and film distribution developed into a specialized activity, often managed by large film producers. However, until about 1914, besides the cinemas, films also continued to be combined with live entertainment in vaudeville and other theaters (Musser 1990; Allen 1980).

Figure 1 shows the total length of negatives released on the U.S., British and French film markets. In the U.S., the total released negative length increased from 38,000 feet in 1897, to two million feet in 1910, to twenty million feet in 1920. Clearly, the initial U.S. growth between 1893 and 1898 was very strong: the market increased by over three orders of magnitude, but from an infinitesimal initial base. Between 1898 and 1906, far less growth took place, and in this period it may well have looked as if the cinematograph would remain a niche product, a gimmick shown at fairs and interspersed with live entertainment. From 1907, however, a new, sharp, sustained growth phase started: the market increased by a further two orders of magnitude – and from a far higher base this time. At the same time, the average film length increased considerably, from eighty feet in 1897 to seven hundred feet in 1910 to three thousand feet in 1920. One reel of film held about 1,500 feet and had a playing time of about fifteen minutes.

Between the mid-1900s and 1914 the British and French markets were growing at roughly the same rates as the U.S. one. World War I constituted a discontinuity: from 1914 onwards European growth rates were far lower than those in the U.S.

The prices the Nickelodeons charged were between five and ten cents, for which spectators could stay as long as they liked. Around 1910, when larger cinemas emerged in prime city center locations, more closely resembling theaters than the small and shabby Nickelodeons, prices increased. They ranged from one dollar to a dollar and a half for ‘first run’ cinemas down to five cents for sixth-run neighborhood cinemas (see also Sedgwick 1998).

Figure 1

Total Released Length on the U.S., British and French Film Markets (in Meters), 1893-1922

Note: The length refers to the total length of original negatives that were released commercially.

See Bakker 2005, appendix I for the method of estimation and for a discussion of the sources.

Source: Bakker 2001b; American Film Institute Catalogue, 1893-1910; Motion Picture World, 1907-1920.

The Quality Race

Once Nickelodeons and other types of cinemas were established, the industry entered a new stage with the emergence of the feature film. Before 1915, cinemagoers saw a succession of many different films, each between one and fifteen minutes, of varying genres such as cartoons, newsreels, comedies, travelogues, sports films, ‘gymnastics’ pictures and dramas. After the mid-1910s, going to the cinema meant watching a feature film, a heavily promoted dramatic film with a length that came closer to that of a theater play, based on a famous story and featuring famous stars. Shorts remained only as side dishes.

The feature film emerged when cinema owners discovered that films of far higher quality and length enabled them to ask far higher ticket prices and draw far more people into their cinemas, resulting in far higher profits, even if they needed to pay far more for the film rental. The discovery that consumers would turn their backs on packages of shorts (newsreels, sports, cartoons and the like) as the quality of features increased set in motion a quality race between film producers (Bakker 2005). They all started investing heavily in portfolios of feature films, spending large sums on well-known stars, rights to famous novels and theater plays, extravagant sets, star directors, and so on. A contributing factor in the U.S. was the demise of the Motion Picture Patents Company (MPPC), a cartel that tried to monopolize film production and distribution. Between about 1908 and 1912 the Edison-backed MPPC had restricted quality artificially by setting limits on film length and film rental prices. When William Fox and the Department of Justice started legal action in 1912, the power of the MPPC quickly waned and the ‘independents’ came to dominate the industry.

In the U.S., the motion picture industry became the internet of the 1910s: when companies put the words ‘motion pictures’ in their IPO prospectuses, investors flocked to them. Many of these companies went bankrupt, were dissolved or were taken over. A few survived and became the Hollywood studios, most of which we still know today: Paramount, Metro-Goldwyn-Mayer (MGM), Warner Brothers, Universal, Radio-Keith-Orpheum (RKO), Twentieth Century-Fox, Columbia and United Artists.

A necessary condition for the quality race was some form of vertical integration. In the early film industry, films were sold. This meant that the cinema-owner who bought a film, would receive all the marginal revenues the film generated. In the film industry, these revenues were largely marginal profits, as most costs were fixed, so an additional film ticket sold was pure (gross) profit. Because the producer did not get any of these revenues, at the margin there was little incentive to increase quality. When outright sales made way for the rental of films to cinemas for a fixed fee, producers got a higher incentive to increase a film’s quality, because it would generate more rentals (Bakker 2005). This further increased when percentage contracts were introduced for large city center cinemas, and when producers-distributors actually started to buy large cinemas. The changing contractual relationship between cinemas and producers was paralleled between producers and distributors.
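The incentive argument above can be sketched numerically. The three contract forms (outright sale, flat rental, percentage of the box office) are from the text, but all the figures below (sale price, rental fee, percentage share, box-office levels) are invented purely for illustration:

```python
# Illustrative sketch with hypothetical numbers: how the contract form changes
# a producer's revenue from a higher-quality (higher-grossing) film at one cinema.

def producer_revenue(box_office: float, contract: str, sale_price: float = 1000.0,
                     rental_fee: float = 2000.0, pct_share: float = 0.35) -> float:
    """Producer's revenue from one cinema under three contract forms."""
    if contract == "sale":        # outright sale: revenue fixed, whatever the film grosses
        return sale_price
    if contract == "rental":      # flat rental fee for one booking
        return rental_fee
    if contract == "percentage":  # share of the box office
        return pct_share * box_office
    raise ValueError(contract)

low, high = 5000.0, 8000.0   # box office of a low-budget vs. a high-quality film
for contract in ("sale", "rental", "percentage"):
    gain = producer_revenue(high, contract) - producer_revenue(low, contract)
    print(f"{contract:>10}: extra producer revenue from higher quality = ${gain:.0f}")
```

At a single cinema, a sale or a flat rental passes none of the extra box office back to the producer (the flat rental's incentive works across cinemas, through more bookings); only the percentage contract rewards quality at the margin directly, which is what the paragraph above argues.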

The Decline and Fall of the European Film Industry

Because the quality race happened while Europe was at war, European companies could not participate in the escalation of quality (and production costs) discussed above. This does not mean all of them were in crisis. Many made high profits during the war from newsreels, other short films, propaganda films and distribution. They were also able to participate in the shift towards the feature film, substantially increasing output in the new genre during the war (Figure 2). However, it was difficult for them to secure the massive amount of venture capital necessary to participate in the quality race while their countries were at war. Even if they had managed to, it might have been difficult to justify such lavish expenditures when people were dying in the trenches.

Yet a few European companies did participate in the escalation phase. The Danish Nordisk company invested heavily in long feature-type films, and bought cinema chains and distributors in Germany, Austria and Switzerland. Its strategy ended when the German government forced it to sell its German assets to the newly founded UFA company, in return for a 33 percent minority stake. The French Pathé company was one of the largest U.S. film producers. It set up its own U.S. distribution network and invested in heavily advertised serials (films in weekly installments) expecting that this would become the industry standard. As it turned out, Pathé bet on the wrong horse and was overtaken by competitors riding high on the feature film. Yet it eventually switched to features and remained a significant company. In the early 1920s, its U.S. assets were sold to Merrill Lynch and eventually became part of RKO.

Figure 2

Number of Feature Films Produced in Britain, France and the U.S., 1911-1925

(semi-logarithmic scale)

Source: Bakker 2005 [American Film Institute Catalogue; British Film Institute; Screen Digest; Globe, World Film Index, Chirat, Longue métrage.]

Because it could not participate in the quality race, the European film industry started to decline in relative terms. Its market share at home and abroad diminished substantially (Figure 3). In the 1900s European companies supplied at least half of the films shown in the U.S. In the early 1910s this dropped to about twenty percent. In the mid-1910s, when the feature film emerged, the European market share declined to nearly undetectable levels.

By the 1920s, most large European companies had given up film production altogether. Pathé and Gaumont sold their U.S. and international businesses, left film making and focused on distribution in France. Éclair, their major competitor, went bankrupt. Nordisk continued as an insignificant Danish film company, and eventually collapsed into receivership. The eleven largest Italian film producers formed a trust, which failed dismally, and one by one they fell into financial disaster. The famous British producer Cecil Hepworth went bankrupt. By late 1924, hardly any films were being made in Britain. American films were shown everywhere.

Figure 3

Market Shares by National Film Industries, U.S., Britain, France, 1893-1930

Note: EU/US is the share of European companies on the U.S. market, EU/UK is the share of European companies on the British market, and so on. For further details see Bakker 2005.

The Rise of Hollywood

Once they had lost out, it was difficult for European companies to catch up. First of all, since the sharply rising film production costs were fixed and sunk, market size was becoming of essential importance as it affected the amount of money that could be spent on a film. Exactly at this crucial moment, the European film market disintegrated, first because of war, later because of protectionism. The market size was further diminished by heavy taxes on cinema tickets that sharply increased the price of cinema compared to live entertainment.

Second, the emerging Hollywood studios benefited from first-mover advantages in feature film production. They owned international distribution networks. They could offer cinemas large portfolios of films at a discount (block-booking), sometimes before the films were even made (blind-bidding). The quality gap with European features was so large that it would have been difficult to close in one go. Finally, the American origin of the feature films of the 1910s had established U.S. films as a kind of brand, leaving consumers with high switching costs to try out films of other national origins. It would be extremely costly for European companies to re-enter international distribution, produce large portfolios, jump-start film quality, and establish a new brand of films – all at the same time (Bakker 2005).

A third factor was the rise of Hollywood as a production location. The large existing American Northeast coast film industry and the newly emerging film industry in Florida declined as U.S. film companies started to locate in Southern California. First of all, the ‘sharing’ of inputs facilitated knowledge spillovers and allowed higher returns. The studios lowered costs because creative inputs had less down-time, needed to travel less, could participate in many try-outs to achieve optimal casting and could be rented out easily to competitors when not immediately wanted. Hollywood also attracted new creative inputs through non-monetary means: even more than money, creative inputs wanted to maximize fame and professional recognition. For an actress, an offer to work with the world’s best directors, costume designers, lighting specialists and make-up artists was difficult to decline.

Second, a thick market for specialized supply and demand existed. Companies could easily rent out excess studio capacity (for example, B-films were made on the lots at night), and a producer was quite likely to find the highly specific products or services needed somewhere in Hollywood (Christopherson and Storper 1987, 1989). While a European industrial ‘film’ district might have been competitive and even have had a lower overall cost/quality ratio than Hollywood, the first European major would have had a substantially higher cost/quality ratio (lacking external economies) and would therefore not easily enter (see, for example, Krugman and Obstfeld 2003, chapter 6). If entry did happen, the Hollywood studios could and would buy successful creative inputs away, since they could realize higher returns on these inputs, which resulted in American films with an even higher perceived quality, thus perpetuating the situation.

Sunlight, climate and the variety of landscape in California were of course favorable to film production, but were not unique. Locations such as Florida, Italy, Spain and Southern France offered similar conditions.

The Coming of Sound

In 1927, sound films were introduced. The main innovator was Warner Brothers, backed by the bank Goldman, Sachs, which actually parachuted a vice-president into Warner. Although many other sound systems had been tried and marketed from the 1900s onwards, the electrical microphone, invented at Bell Labs in the mid-1920s, sharply increased the quality of sound films and made the transformation of the industry possible. Sound increased the interest in the film industry of large industrial companies such as General Electric, Western Electric and RCA, as well as that of banks eager to finance the new innovation, such as the Bank of America and Goldman, Sachs.

In economic terms, sound represented an exogenous jump in sunk costs (and product quality) which did not affect the basic industry structure very much: the industry was already highly concentrated before sound, and the European, New York/New Jersey and Florida film industries were already shattered. What sound did do was industrialize away most of the musicians and entertainers who had complemented the silent films with sound and entertainment, especially those working in the smaller cinemas. This led to massive unemployment among musicians (see, for example, Gomery 1975; Kraft 1996).

The effect of sound film in Europe was to increase the domestic revenues of European films, because they became more culture-specific once they were in the local language, but at the same time it decreased the foreign revenues European films received (Bakker 2004b). It is difficult to assess the impact of sound film completely, as it coincided with increased protection; shortly before the coming of sound, many European countries set quotas on the number of foreign films that could be shown. In France, for example, where sound became widely adopted from 1930 onwards, the U.S. share of films dropped from eighty to fifty percent between 1926 and 1929, mainly the result of protectionist legislation. During the 1930s, the share temporarily declined to about forty percent, and then hovered between fifty and sixty percent. In short, protectionism decreased the U.S. market share and increased the French market shares of French and other European films, while sound film increased the French market share, mostly at the expense of other European films and less so at the expense of U.S. films.

In Britain, the share of releases of American films declined from eighty percent in 1927 to seventy percent in 1930, while British films increased from five percent to twenty percent, exactly in line with the requirements of the 1927 quota act. After 1930, the American share remained roughly stable. This suggests that sound film did not have a large influence, and that the share of U.S. films was mainly brought down by the introduction of the Cinematograph Films Act in 1927, which set quotas for British films. Nevertheless, revenue data, which are unfortunately lacking, would be needed to give a definitive answer, as little is known about effects on the revenue per film.

The Economics of the Interwar Film Trade

Because film production costs were mainly fixed and sunk, international sales and distribution were important: they generated additional revenue at little additional cost to the producer, since the film itself had already been made. Films had special characteristics that necessitated international sales. Because they essentially were copyrights rather than physical products, the cost of additional sales was theoretically zero. Film production involved high endogenous sunk costs, recouped through renting out the copyright to the film. Marginal foreign revenue equaled marginal net revenue (and marginal profit once the film’s production costs had been fully amortized). All companies, large or small, had to take foreign sales into account when setting film budgets (Bakker 2004b).

Films were intermediate products sold to foreign distributors and cinemas. While the rent paid varied depending on perceived quality and general conditions of supply and demand, the ticket price paid by consumers generally did not vary. It only varied by cinema: highest in first-run city center cinemas and lowest in sixth-run ramshackle neighborhood cinemas. Cinemas used films to produce ‘spectator-hours’: a five-hundred-seat cinema providing one hour of film, produced five hundred spectator-hours of entertainment. If it sold three hundred tickets, the other two hundred spectator-hours produced would have perished.

Because film was an intermediate product and a capital good at that, international competition could not be on price alone, just as sales of machines depend on the price/performance ratio. If we consider a film’s ‘capacity to sell spectator-hours’ (hereafter called selling capacity) as proportional to production costs, a low-budget producer could not simply push down a film’s rental price in line with its quality in order to make a sale; even at a price of zero, some low-budget films could not be sold. The reasons were twofold.

First, because cinemas had mostly fixed costs and few variable costs, a film’s selling capacity needed to be at least as large as fixed cinema costs plus its rental price. A seven-hundred-seat cinema, with a production capacity of 39,200 spectator-hours a week, weekly fixed costs of five hundred dollars, and an average admission price of five cents per spectator-hour, needed a film selling at least ten thousand spectator-hours, and would not be prepared to pay for that (marginal) film, because it only recouped fixed costs. Films needed a minimum selling capacity to cover cinema fixed costs. Producers could only price down low-budget films to just above the threshold level. With a lower expected selling capacity, these films could not be sold at any price.
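The threshold in the example above can be verified directly. The figures (a seven-hundred-seat cinema open 56 hours a week, $500 weekly fixed costs, five cents per spectator-hour) are the article's own; the calculation is done in integer cents to avoid rounding issues:

```python
# The article's hypothetical cinema: 700 seats x 56 hours = 39,200 spectator-hours
# of weekly capacity. A film must sell 10,000 spectator-hours just to cover the
# cinema's fixed costs before any rental can be paid at all.

SEATS, HOURS = 700, 56
CAPACITY = SEATS * HOURS                 # 39,200 spectator-hours a week
PRICE_CENTS, FIXED_CENTS = 5, 50_000     # 5 cents per spectator-hour, $500 fixed

def max_rental_cents(spectator_hours_sold: int) -> int:
    """Highest weekly rental the cinema can pay before running a loss."""
    return spectator_hours_sold * PRICE_CENTS - FIXED_CENTS

threshold = FIXED_CENTS // PRICE_CENTS   # sales that exactly cover fixed costs
print(f"break-even: {threshold:,} spectator-hours")                               # 10,000
print(f"rental headroom at full capacity: ${max_rental_cents(CAPACITY) / 100:,.0f}")  # $1,460
```

Any film expected to sell fewer than 10,000 spectator-hours is worthless to this cinema at any rental price, which is the article's point about low-budget films being unsellable even at a price of zero.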

This reasoning assumes that we know a film’s selling capacity ex ante. A main feature distinguishing foreign markets from domestic ones was that uncertainty was markedly lower: from a film’s domestic launch the audience appeal was known, and each subsequent country added additional information. While a film’s audience appeal across countries was not perfectly correlated, uncertainty was reduced. For various companies, correlations between foreign and domestic revenues for entire film portfolios fluctuated between 0.60 and 0.95 (Bakker 2004b). Given the riskiness of film production, this reduction in uncertainty undoubtedly was important.
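How much the domestic result reduced foreign-market uncertainty can be sketched under a simple linear-prediction assumption (not in the source): conditioning on a correlated signal shrinks the forecast standard deviation by a factor sqrt(1 − ρ²). The 0.60 to 0.95 range is the portfolio correlation reported in the text.

```python
# Sketch: remaining forecast uncertainty after observing domestic revenues,
# assuming a linear-prediction (bivariate normal) setting.
import math

def residual_sd_fraction(rho: float) -> float:
    """Fraction of the foreign-revenue standard deviation left after conditioning."""
    return math.sqrt(1.0 - rho * rho)

for rho in (0.60, 0.80, 0.95):
    print(f"rho = {rho:.2f}: remaining uncertainty = {residual_sd_fraction(rho):.0%}")
```

Even at the low end of the reported range (ρ = 0.60) a fifth of the uncertainty disappears, and at ρ = 0.95 less than a third remains, which is consistent with the text's claim that the reduction mattered in such a risky business.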

The second reason for limited price competition was the opportunity cost, given cinemas’ production capacities. If the hypothetical cinema obtained a high-capacity film for a weekly rental of twelve hundred dollars, which sold all 39,200 spectator-hours, the cinema made a profit of $260 (($0.05 × 39,200) − $1,200 − $500 = $260). If a film with half the budget and, we assume, half the selling capacity rented for half the price, the cinema owner would lose $120 (($0.05 × 19,600) − $600 − $500 = −$120). Thus, the cinema owner would pay no more than $220 for the lower-budget film, given that the high-budget film was available (($0.05 × 19,600) − $220 − $500 = $260). So a film with half the selling capacity of the high-capacity film would need to rent for under a fifth of the latter’s price to even enable the possibility of a transaction.
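The same opportunity-cost comparison, computed from the article's own numbers (in integer cents to keep the arithmetic exact):

```python
# The article's opportunity-cost example: a high-capacity film rents for $1,200
# and fills all 39,200 spectator-hours; a film with half the selling capacity
# must be priced so the cinema is no worse off than with the high-capacity film.

PRICE_CENTS, FIXED_CENTS = 5, 50_000     # 5 cents per spectator-hour, $500 fixed

def cinema_profit_cents(spectator_hours: int, rental_cents: int) -> int:
    return PRICE_CENTS * spectator_hours - rental_cents - FIXED_CENTS

profit_high = cinema_profit_cents(39_200, 120_000)   # $260 profit
loss_half = cinema_profit_cents(19_600, 60_000)      # -$120 loss at half the rental
# Rental at which the half-capacity film just matches the high film's profit:
max_rental_low = PRICE_CENTS * 19_600 - FIXED_CENTS - profit_high   # $220
print(f"profits: ${profit_high / 100:.0f} vs ${loss_half / 100:.0f}")
print(f"rental ratio needed: {max_rental_low / 120_000:.2f}")  # 0.18, under a fifth
```

Half the selling capacity thus commands less than a fifth of the rental ($220 against $1,200), which is the sharply increasing return to selling capacity the next paragraph draws on.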

These sharply increasing returns to selling capacity made the setting of production outlays important, as a right price/capacity ratio was crucial to win foreign markets.

How Films Became Branded Products

To make sure film revenues reached above cinema fixed costs, film companies transformed films into branded products. With the emergence of the feature film, they started to pay large sums to actors, actresses and directors and for rights to famous plays and novels. This is still a major characteristic of the film industry today that fascinates many people. Yet the huge sums paid for stars and stories are not as irrational and haphazard as they sometimes may seem. Actually, they might be just as ‘rational’ and have just as quantifiable a return as direct spending on marketing and promotion (Bakker 2001a).

To secure an audience, film producers borrowed branding techniques from other consumer goods’ industries, but the short product-life-cycle forced them to extend the brand beyond one product – using trademarks or stars – to buy existing ‘brands,’ such as famous plays or novels, and to deepen the product-life-cycle by licensing their brands.

Thus, the main value of stars and stories lay not in their ability to predict successes, but in their services as giant ‘publicity machines’ which optimized advertising effectiveness by rapidly amassing high levels of brand-awareness. After a film’s release, information such as word-of-mouth and reviews would affect its success. The young age at which stars reached their peak, and the disproportionate income distribution even among the superstars, confirm that stars were paid for their ability to generate publicity. Likewise, because ‘stories’ were paid several times as much as original screenplays, they were at least partially bought for their popular appeal (Bakker 2001a).

Stars and stories also signaled a film’s qualities to some extent: whatever else was uncertain, audiences knew the film would at least contain them. Consumer preferences confirm that stars and stories were the main reasons to see a film. Further, the fame of stars was distributed disproportionately, possibly even twice as unequally as income. Film companies, aided by long-term contracts, probably captured part of the rent of their stars’ popularity. Gradually these companies specialized in developing and leasing their ‘instant brands’ to other consumer goods industries in the form of merchandising.

Already from the late 1930s onwards, the Hollywood studios used the new scientific market research techniques of George Gallup to continuously track the brand-awareness of their major stars among the public (Bakker 2003). Figure 4 is based on one such graph used by Hollywood. It shows that Lana Turner was a rising star, Gable was consistently a top star, while Stewart’s popularity was high but volatile. James Stewart was eleven percentage points more popular among the richest consumers than among the poorest, while Lana Turner differed only a few percentage points. Additional segmentation by city size also seemed to matter, since substantial differences were found: Clark Gable was ten percentage points more popular in small cities than in large ones. Among the richest consumers, 51 percent wanted to see a movie starring Gable, yet this group constituted just 14 percent of Gable’s market; among the poorest, 57 percent wanted to see one, and they constituted 34 percent. The increases in Gable’s popularity roughly coincided with his releases, suggesting that while producers used Gable partially for the brand-awareness of his name, each use (film) subsequently increased or maintained that awareness in what seems to have been a self-reinforcing process.

Figure 4

Popularity of Clark Gable, James Stewart and Lana Turner among U.S. respondents

April 1940 – October 1942, in percentage

Source: Audience Research Inc.; Bakker 2003.

The Film Industry’s Contribution to Economic Growth and Welfare

By the late 1930s, cinema had become an important mass entertainment industry. Nearly everyone in the Western world went to the cinema, and many went at least once a week. Cinema had made possible a massive growth in productivity in the entertainment industry and thereby disproved the notion of some economists that productivity growth in certain service industries is inherently impossible. Between 1900 and 1938, output of the entertainment industry, measured in spectator-hours, grew substantially in the U.S., Britain and France, varying from three to eleven percent per year over a period of nearly forty years (Table 1). Output per worker increased from 2,453 spectator-hours in the U.S. in 1900 to 34,879 in 1938. In Britain it increased from 16,404 to 37,537 spectator-hours and in France from 1,575 to 8,175 spectator-hours. This phenomenal growth can be explained partially by adding more capital (such as in the form of film technology and film production outlays) and partially by simply producing more efficiently with the existing amount of capital and labor. The increase in efficiency (‘total factor productivity’) varied from about one percent per year in Britain to over five percent in the U.S., with France somewhere in between. In all three countries, this increase in efficiency was at least one and a half times the increase in efficiency at the level of the entire economy. For the U.S. it was as much as five times, and for France more than three times, the national increase in efficiency (Bakker 2004a).
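As a back-of-the-envelope check, the spectator-hours-per-worker figures above imply compound annual labor-productivity growth of roughly 7 percent for the U.S., 2 percent for Britain and 4-5 percent for France over these 38 years. A minimal sketch of the arithmetic (the levels are taken from the text; the code itself is only illustrative):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two levels."""
    return (end / start) ** (1 / years) - 1

# Spectator-hours per worker in 1900 and 1938 (38 years), as given in the text
levels = {"U.S.": (2453, 34879), "Britain": (16404, 37537), "France": (1575, 8175)}
for country, (y1900, y1938) in levels.items():
    print(f"{country}: {cagr(y1900, y1938, 38):.1%} per year")
# → U.S.: 7.2% per year, Britain: 2.2% per year, France: 4.4% per year
```

Note that these per-worker growth rates exceed the total-factor-productivity growth quoted above because, as the text says, part of the gain came from adding more capital rather than from pure efficiency.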

Another noteworthy feature is that labor productivity in entertainment varied less across countries in the late 1930s than it did in 1900. Part of the reason is that cinema technology made entertainment partially tradable and therefore forced productivity in similar directions in all countries; the tradable part of the entertainment industry would now exert competitive pressure on the non-tradable part (Bakker 2004a). It is therefore not surprising that cinema caused the lowest efficiency increase in Britain, which already had a well-developed and competitive entertainment industry (with the highest labor and capital productivity both in 1900 and in 1938), and higher efficiency increases in the U.S. and, to a lesser extent, in France, which had less well-developed entertainment industries in 1900.

Another way to measure the contribution of film technology to the economy in the late 1930s is by using a social savings methodology. If we assume that cinema did not exist and all demand for entertainment (measured in spectator-hours) would have to be met by live entertainment, we can calculate the extra costs to society and thus the amount saved by film technology. In the U.S., these social savings amounted to as much as 2.2 percent ($2.5 billion) of GDP, in France to just 1.4 percent ($0.16 billion) and in Britain to only 0.3 percent ($0.07 billion) of GDP.
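The social savings calculation itself is simple arithmetic: multiply the spectator-hours actually consumed via cinema by the gap between the (higher) unit cost of supplying them live and the unit cost of cinema. A minimal sketch, using purely hypothetical unit costs and quantities rather than the historical estimates:

```python
def social_savings(spectator_hours, live_cost_per_hour, cinema_cost_per_hour):
    """Extra cost society would have borne had cinema demand been met live."""
    return spectator_hours * (live_cost_per_hour - cinema_cost_per_hour)

# Hypothetical illustration (not the historical data):
hours = 10e9                 # 10 billion spectator-hours consumed via cinema
live, cinema = 0.30, 0.10    # dollars per spectator-hour, live vs. cinema
gdp = 100e9                  # hypothetical GDP
savings = social_savings(hours, live, cinema)
print(f"savings = ${savings / 1e9:.1f} billion = {savings / gdp:.1%} of GDP")
# → savings = $2.0 billion = 2.0% of GDP
```

The historical estimates cited in the text required period data on the relative unit costs of live and filmed entertainment; the numbers above merely show the mechanics.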

A third and different way to look at the contribution of film technology to the economy is to look at the consumer surplus generated by cinema. Unlike the TFP and social savings techniques used above, which assume that cinema is a substitute for live entertainment, this approach assumes that cinema is a wholly new good and that therefore the entire consumer surplus generated by it is ‘new’ and would not have existed without cinema. For an individual consumer, the surplus is the difference between the price she was willing to pay and the ticket price she actually paid. This difference varies from consumer to consumer, but with econometric techniques one can estimate the sum of individual surpluses for an entire country. The resulting national consumer surpluses for entertainment varied from about a fifth of total entertainment expenditure in the U.S., to about half in Britain and as much as three quarters in France.
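Under the textbook simplification of a linear demand curve, aggregate consumer surplus has a closed form: half the gap between the choke price (the price at which demand falls to zero) and the actual ticket price, times the number of tickets sold. The sketch below uses hypothetical numbers; the actual estimates were obtained econometrically rather than by assuming a particular demand shape:

```python
def linear_demand_surplus(choke_price, ticket_price, tickets_sold):
    """Consumer surplus under linear demand: the triangle above the price line."""
    return 0.5 * (choke_price - ticket_price) * tickets_sold

# Hypothetical: 25-cent tickets, 35-cent choke price, 2 billion tickets a year
surplus = linear_demand_surplus(0.35, 0.25, 2e9)
expenditure = 0.25 * 2e9
print(f"surplus = ${surplus / 1e6:.0f} million, "
      f"{surplus / expenditure:.0%} of ticket expenditure")
# → surplus = $100 million, 20% of ticket expenditure
```

These invented numbers happen to produce a surplus of about a fifth of expenditure, the order of magnitude reported for the U.S.; they are chosen only to make the mechanics concrete.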

All the measures show that by the late 1930s cinema was making an essential contribution to total welfare as well as to the entertainment industry’s productivity.

Vertical Disintegration

After the Second World War, the Hollywood film industry disintegrated: production, distribution and exhibition became separate activities that were not always owned by the same organization. Three main causes brought about the vertical disintegration. First, the U.S. Supreme Court forced the studios to divest their cinema chains in 1948. Second, changes in the social-demographic structure in the U.S. brought about a shift towards entertainment within the home: many young couples started to live in the new suburbs and wanted to stay home for entertainment. Initially, they mainly used radio for this purpose and later they switched to television (Gomery 1985). Third, television broadcasting in itself (without the social-demographic changes that increased demand for it) constituted a new distribution channel for audiovisual entertainment and thus decreased the scarcity of distribution capacity. This meant that television took over the focus on the lowest common denominator from radio and cinema, while the latter two differentiated their output and started to focus more on specific market segments.

Figure 5

Real Cinema Box Office Revenue, Real Ticket Price and Number of Screens in the U.S., 1945-2002

Note: The values are in dollars of 2002, using the EH.Net consumer price deflator.

Source: Adapted from Vogel 2004 and Robertson 2001.

The consequence was a sharp fall in real box office revenue in the decade after the war (Figure 5). After the mid-1950s, real revenue stabilized and, with some fluctuations, remained at that level until the mid-1990s. The decline in screens was more limited. After 1963 the number of screens increased again steadily, reaching nearly twice the 1945 level in the 1990s. Since the 1990s there have been more movie screens in the U.S. than ever before. The proliferation of screens, coinciding with declining capacity per screen, facilitated market segmentation. Revenue per screen nearly halved in the decade after the war, then rebounded during the 1960s, before starting a long and steady decline from 1970 onwards. The real price of a cinema ticket was quite stable until the 1960s, after which it more than doubled. Since the early 1970s, the price has been declining again, and nowadays the real admission price is about what it was in 1965.

It was in this adverse post-war climate that the vertical disintegration unfolded. It took place at three levels. First (obviously), the Hollywood studios divested their cinema chains. Second, they outsourced part of their film production and most of their production factors to independent companies. This meant that the Hollywood studios would produce only part of the films they distributed themselves, that they replaced the long-term, seven-year contracts with star actors by per-film contracts, and that they sold off part of their studio facilities, renting them back for individual films. Third, the Hollywood studios’ main business became film distribution and financing. They specialized in planning and assembling a portfolio of films, contracting and financing most of them, and marketing and distributing them world-wide.

The developments had three important effects. First, production by a few large companies was replaced by production by many small flexibly specialized companies. Southern California became an industrial district for the film industry and harbored an intricate network of these businesses, from set design companies and costume makers, to special effects firms and equipment rental outfits (Storper and Christopherson 1989). Only at the level of distribution and financing did concentration remain high. Second, films became more differentiated and tailored to specific market segments; they were now aimed at a younger and more affluent audience. Third, the European film market gained in importance: because the social-demographic changes (suburbanization) and the advent of television happened somewhat later in Europe, the drop in cinema attendance also happened later there. The result was that the Hollywood studios off-shored a large chunk – at times over half – of their production to Europe in the 1960s. This was stimulated by lower European production costs, difficulties in repatriating foreign film revenues and by the vertical disintegration in California, which severed the studios’ ties with their production units and facilitated outside contracting.

European production companies could better adapt to changes in post-war demand because they were already flexibly specialized. The British film production industry, for example, had been fragmented almost from its emergence in the 1890s. In the late 1930s, distribution became concentrated, mainly through the efforts of J. Arthur Rank, while the production sector, a network of flexibly specialized companies in and around London, boomed. After the war, the drop in admissions followed the U.S. with about a ten-year delay (Figure 6). The drop in the number of screens experienced the same lag but was more severe: about two-thirds of British cinema screens disappeared, versus only one-third in the U.S. In France, after the First World War, film production had disintegrated rapidly and chaotically into a network of numerous small companies, while a few large firms dominated distribution and production finance. The result was a burgeoning industry, actually one of the fastest-growing French industries in the 1930s.

Figure 6

Admissions and Number of Screens in Britain, 1945-2005

Source: Screen Digest/Screen Finance/British Film Institute and Robertson 2001.

Several European companies attempted to (re-)enter international film distribution, such as Rank in the 1930s and 1950s, the International Film Finance Corporation in the 1960s, Gaumont in the 1970s, PolyGram in the 1970s and again in the 1990s, and Cannon in the 1980s. All of them failed in terms of long-run survival, even if they made profits in some years. The only post-war entry strategy that was successful in terms of survival was the direct acquisition of a Hollywood studio (Bakker 2000).

The Come-Back of Hollywood

From the mid-1970s onwards, the Hollywood studios revived. The slide of box office revenue was brought to a standstill. Revenues were stabilized by the joint effect of seven different factors. First, the blockbuster movie increased cinema attendance. Such movies were heavily marketed and supported by intensive television advertising; Jaws was one of the first of them and an enormous success. Second, the U.S. film industry received several kinds of tax breaks from the early 1970s onwards, which were kept in force until the mid-1980s, when Hollywood was in good shape again. Third, coinciding with the blockbuster movie and the tax breaks, film budgets increased substantially, resulting in higher perceived quality and a larger quality difference from television, drawing more consumers into the cinema. Fourth, the rise of multiplex cinemas – cinemas with several screens – increased consumer choice and the appeal of cinema by offering more variety in one location, thus decreasing the difference with television in this respect. Fifth, one could argue that the process of flexible specialization of the California film industry was completed in the early 1970s, making the industry able to adapt more flexibly to changes in the market; MGM’s sale of its studio complex in 1970 marked the definitive end of an era. Sixth, new income streams from video sales and rentals and cable television increased the revenues a high-quality film could generate. Seventh, European broadcasting deregulation substantially increased the demand for films by television stations.

From the 1990s onwards, further growth was driven by newer markets in Eastern Europe and Asia. Film industries from outside the West also grew substantially, such as those of Japan, Hong Kong, India and China. At the same time, the European Union started a large-scale subsidy program for its audiovisual industry, with mixed economic effects. By 1997, ten years after the start of the program, a film made in the European Union cost 500,000 euros on average, was seventy to eighty percent state-financed, and grossed 800,000 euros world-wide, reaching an audience of 150,000 persons. In contrast, the average American film cost fifteen million euros, was nearly one hundred percent privately financed, grossed 58 million euros, and reached 10.5 million persons (Dale 1997). This seventy-fold difference in performance is remarkable. Even when measured in gross return on investment or gross margin, the U.S. still had a fivefold and twofold lead over Europe, respectively.[1] In few other industries does such a pronounced difference exist.
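The ratios cited here and in note [1] follow directly from the per-film averages in the text; a quick check of the arithmetic:

```python
# Average per-film figures from the text (millions of euros; millions of viewers)
eu_cost, eu_gross, eu_viewers = 0.5, 0.8, 0.15
us_cost, us_gross, us_viewers = 15.0, 58.0, 10.5

print(f"gross revenue gap: {us_gross / eu_gross:.1f}x")         # the 'seventy-fold' difference
print(f"gross ROI: EU {(eu_gross - eu_cost) / eu_cost:.0%} vs "
      f"US {(us_gross - us_cost) / us_cost:.0%}")               # roughly a fivefold lead
print(f"gross margin: EU {(eu_gross - eu_cost) / eu_gross:.0%} vs "
      f"US {(us_gross - us_cost) / us_gross:.0%}")              # roughly a twofold lead
print(f"cost per viewer: EU {eu_cost / eu_viewers:.2f} vs "
      f"US {us_cost / us_viewers:.2f} euros")                   # 3.33 vs. 1.43 euros
```

The computed figures (60 percent vs. 287 percent gross return, roughly 38 percent vs. 74 percent gross margin) match those reported in the footnote, confirming the internal consistency of the averages.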

During the 1990s, the film industry moved into television broadcasting. In Europe, broadcasters often co-funded small-scale boutique film production. In the U.S., the Hollywood studios started to merge with broadcasters. In the 1950s they had experienced difficulties obtaining broadcasting licenses, because their reputation had been compromised by the antitrust actions. They had to wait forty years before they could finally complete what they had intended.[2] Disney, for example, bought the ABC network, Paramount’s owner Viacom bought CBS, and General Electric, owner of NBC, bought Universal. At the same time, the feature film industry was becoming more connected to other entertainment industries, such as video games, theme parks and musicals. With video game revenues now exceeding films’ box office revenues, it seems likely that feature films will simply be the flagship part of a large entertainment supply system that exploits the intellectual property in feature films in many different formats and markets.

Conclusion

The take-off of the film industry in the early twentieth century had been driven mainly by changes in demand. Cinema industrialized entertainment by standardizing it, automating it and making it tradable. After its early years, the industry experienced a quality race that led to increasing industrial concentration. Only later did geographical concentration take place, in Southern California. Cinema made a substantial contribution to productivity and total welfare, especially before television. After television, the industry experienced vertical disintegration, the flexible specialization of production, and a self-reinforcing process of increasing distribution channels and capacity as well as market growth. Cinema, then, was not only the first in a succession of media industries that industrialized entertainment, but also the first in a series of international industries that industrialized services. The evolution of the film industry thus may give insight into technological change and its attendant welfare gains in many service industries to come.

Selected Bibliography

Allen, Robert C. Vaudeville and Film, 1895-1915. New York: Arno Press, 1980.

Bächlin, Peter. Der Film als Ware. Basel: Burg-Verlag, 1945.

Bakker, Gerben, “American Dreams: The European Film Industry from Dominance to Decline.” EUI Review (2000): 28-36.

Bakker, Gerben. “Stars and Stories: How Films Became Branded Products.” Enterprise and Society 2, no. 3 (2001a): 461-502.

Bakker, Gerben. Entertainment Industrialised: The Emergence of the International Film Industry, 1890-1940. Ph.D. dissertation, European University Institute, 2001b.

Bakker, Gerben. “Building Knowledge about the Consumer: The Emergence of Market Research in the Motion Picture Industry.” Business History 45, no. 1 (2003): 101-27.

Bakker, Gerben. “At the Origins of Increased Productivity Growth in Services: Productivity, Social Savings and the Consumer Surplus of the Film Industry, 1900-1938.” Working Papers in Economic History, No. 81, Department of Economic History, London School of Economics, 2004a.

Bakker, Gerben. “Selling French Films on Foreign Markets: The International Strategy of a Medium-Sized Film Company.” Enterprise and Society 5 (2004b): 45-76.

Bakker, Gerben. “The Decline and Fall of the European Film Industry: Sunk Costs, Market Size and Market Structure, 1895-1926.” Economic History Review 58, no. 2 (2005): 311-52.

Caves, Richard E. Creative Industries: Contracts between Art and Commerce. Cambridge, MA: Harvard University Press, 2000.

Christopherson, Susan, and Michael Storper. “Flexible Specialization and Regional Agglomerations: The Case of the U.S. Motion Picture Industry.” Annals of the Association of American Geographers 77, no. 1 (1987).

Christopherson, Susan, and Michael Storper. “The Effects of Flexible Specialization on Industrial Politics and the Labor Market: The Motion Picture Industry.” Industrial and Labor Relations Review 42, no. 3 (1989): 331-47.

Gomery, Douglas. The Coming of Sound to the American Cinema: A History of the Transformation of an Industry. Ph.D. dissertation, University of Wisconsin, 1975.

Gomery, Douglas. “The Coming of Television and the ‘Lost’ Motion Picture Audience.” Journal of Film and Video 37, no. 3 (1985): 5-11.

Gomery, Douglas. The Hollywood Studio System. London: MacMillan/British Film Institute, 1986; reprinted 2005.

Kraft, James P. Stage to Studio: Musicians and the Sound Revolution, 1890-1950. Baltimore: Johns Hopkins University Press, 1996.

Krugman, Paul R., and Maurice Obstfeld. International Economics: Theory and Policy, sixth edition. Reading, MA: Addison-Wesley, 2003.

Low, Rachael, and Roger Manvell. The History of the British Film, 1896-1906. London: George Allen & Unwin, 1948.

Michaelis, Anthony R. “The Photographic Arts: Cinematography.” In A History of Technology, Vol. V: The Late Nineteenth Century, c. 1850 to c. 1900, edited by Charles Singer, 734-51. Oxford: Clarendon Press, 1958; reprinted 1980.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press, 1990.

Musser, Charles. The Emergence of Cinema: The American Screen to 1907. The History of American Cinema, Vol. I. New York: Scribner, 1990.

Sedgwick, John. “Product Differentiation at the Movies: Hollywood, 1946-65.” Journal of Economic History 63 (2002): 676-705.

Sedgwick, John, and Michael Pokorny. “The Film Business in Britain and the United States during the 1930s.” Economic History Review 57, no. 1 (2005): 79-112.

Sedgwick, John, and Mike Pokorny, editors. An Economic History of Film. London: Routledge, 2004.

Thompson, Kristin. Exporting Entertainment: America in the World Film Market, 1907-1934. London: British Film Institute, 1985.

Vogel, Harold L. Entertainment Industry Economics: A Guide for Financial Analysis, sixth edition. Cambridge: Cambridge University Press, 2004.

Gerben Bakker may be contacted at gbakker at essex.ac.uk


[1] Gross return on investment, disregarding interest costs and distribution charges, was 60 percent for European versus 287 percent for U.S. films. Gross margin was 37 percent for European versus 74 percent for U.S. films. Costs per viewer were 3.33 versus 1.43 euros; revenues per viewer were 5.30 versus 5.52 euros.

[2] The author is indebted to Douglas Gomery for this point.

Citation: Bakker, Gerben. “The Economic History of the International Film Industry”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-international-film-industry/

An Economic History of Denmark

Ingrid Henriksen, University of Copenhagen

Denmark is located in Northern Europe between the North Sea and the Baltic. Today Denmark consists of the Jutland Peninsula, bordering Germany, and the Danish Isles, and covers 43,069 square kilometers (16,629 square miles). The present nation is the result of several cessions of territory throughout history. The last of the former Danish territories in southern Sweden were lost to Sweden in 1658, following one of the numerous wars between the two nations, which especially marred the sixteenth and seventeenth centuries. Following defeat in the Napoleonic Wars, Norway was separated from Denmark in 1814. After the last major war, the Second Schleswig War in 1864, Danish territory was further reduced by a third when Schleswig and Holstein were ceded to Germany. After a regional referendum in 1920, only North Schleswig returned to Denmark. Finally, Iceland withdrew from its union with Denmark in 1944. The following deals with the geographical unit of today’s Denmark.

Prerequisites of Growth

Throughout history a number of advantageous factors have shaped the Danish economy. From this perspective it may not be surprising to find today’s Denmark among the richest societies in the world. According to the OECD, it ranked seventh in 2004, with income per capita of $29,231 (PPP). Although we can identify a number of turning points and breaks, this long-run position has changed little over the period for which we have quantitative evidence. Thus Maddison (2001), in his estimate of GDP per capita around 1600, places Denmark as number six. One interpretation could be that favorable circumstances, rather than ingenious institutions or policies, have determined Danish economic development. Nevertheless, this article also deals with time periods in which the Danish economy was either diverging from or converging towards the leading economies.

Table 1: Average Annual GDP Growth (at factor costs)

            Total   Per capita
1870-1880   1.9%    0.9%
1880-1890   2.5%    1.5%
1890-1900   2.9%    1.8%
1900-1913   3.2%    2.0%
1913-1929   3.0%    1.6%
1929-1938   2.2%    1.4%
1938-1950   2.4%    1.4%
1950-1960   3.4%    2.6%
1960-1973   4.6%    3.8%
1973-1982   1.5%    1.3%
1982-1993   1.6%    1.5%
1993-2004   2.2%    2.0%

Sources: Johansen (1985) and Statistics Denmark ‘Statistikbanken’ online.

Denmark’s geographical location in close proximity to the most dynamic nations of sixteenth-century Europe, the Netherlands and the United Kingdom, no doubt exerted a positive influence on the Danish economy and Danish institutions. The North German area influenced Denmark both through long-term economic links and through the Lutheran Protestant Reformation, which the Danes embraced in 1536.

The Danish economy traditionally specialized in agriculture, like most other small and medium-sized European countries. It is, however, rather unusual to find a rich European country in the late nineteenth and mid-twentieth centuries that retained such a strong agrarian bias. Only in the late 1950s did the workforce of manufacturing industry overtake that of agriculture. An economic history of Denmark must therefore take agricultural development as its point of departure for quite a long stretch of time.

Looking at resource endowments, Denmark enjoyed a relatively high agricultural land-to-labor ratio compared to other European countries, with the exception of the UK. This was significant because, in the Danish case, it was accompanied by a comparatively wealthy peasantry.

Denmark had no mineral resources to speak of until the exploitation of oil and gas in the North Sea began in 1972 and 1984, respectively. From 1991 on Denmark has been a net exporter of energy although on a very modest scale compared to neighboring Norway and Britain. The small deposits are currently projected to be depleted by the end of the second decade of the twenty-first century.

Figure 1. Percent of GDP in selected sectors

Source: Johansen (1985) and Statistics Denmark ’Nationalregnskaber’

Good logistics can be regarded as a resource in pre-industrial economies. The Danish coastline of 7,314 km, and the fact that no point in the country is more than 50 km from the sea, were advantages in an age in which transport by sea was more economical than transport over land.

Decline and Transformation, 1500-1750

The year of the Lutheran Reformation (1536) conventionally marks the end of the Middle Ages in Danish historiography. Only around 1500 did population growth begin to pick up after the devastating effect of the Black Death. Growth thereafter was modest and at times probably stagnant, with large fluctuations in mortality following major wars, particularly during the seventeenth century, and years of bad harvests. About 80-85 percent of the population lived from subsistence agriculture in small rural communities, and this did not change. Exports are estimated to have been about 5 percent of GDP between 1550 and 1650. The main export products were oxen and grain. The period after 1650 was characterized by a long-lasting slump, with a marked decline in exports to the neighboring countries, the Netherlands in particular.

The institutional development after the Black Death showed a return to more archaic forms. Unlike in other parts of northwestern Europe, the peasantry on the Danish Isles fell victim to a process of re-feudalization during the last decades of the fifteenth century. A likely explanation is the low population density, which encouraged large landowners to hold on to their labor by all means. Freehold tenure among peasants effectively disappeared during the seventeenth century. Institutions like bonded labor, which forced peasants to stay on the estate where they were born, and labor services on the demesne as part of the land rent bring to mind similar arrangements in Europe east of the Elbe River. One exception to the East European model was crucial, however: the demesne land, that is, the land worked directly under the estate, never made up more than nine percent of total land by the mid-eighteenth century. Although some estate owners saw an interest in encroaching on peasant land, the state protected the latter as production units and, more importantly, as a tax base. Bonded labor was codified in the all-encompassing Danish Law of Christian V in 1683. It was further intensified when it was extended, though under another label, to all of Denmark during 1733-88, as a means for the state to tide the large landlords over an agrarian crisis. One explanation for the long life of such an authoritarian institution could be that the tenants were relatively well off, with 25-50 acres of land on average. Another could be that reality differed from the formal rigor of the institutions.

Following the Protestant Reformation in 1536, the Crown took over all church land, thereby becoming the owner of 50 percent of all land. The costs of warfare during most of the sixteenth century could still be covered by the revenue of these substantial possessions. Around 1600, the income from taxation and customs – mostly the Sound Toll, collected from ships that passed the narrow strait between Denmark and today’s Sweden – was about as large as the revenue from Crown lands. Some fifty years later, after a major fiscal crisis had led to the sale of about half of all Crown lands, the revenue from royal demesnes had declined in relative terms to about one third, and after 1660 the full transition from domain state to tax state was completed.

The bulk of the former Crown land had been sold to nobles and a few non-noble owners of estates. Consequently, although the Danish constitution of 1665 was the most stringent version of absolutism found anywhere in Europe at the time, the Crown depended heavily on estate owners to perform a number of important local tasks. Thus, conscription of troops for warfare, collection of land taxes and maintenance of law and order enhanced the landlords’ power over their tenants.

Reform and International Market Integration, 1750-1870

The driving force of Danish economic growth, which took off during the late eighteenth century, was population growth at home and abroad, which triggered technological and institutional innovation. Whereas the Danish population had grown by about 0.4 percent per annum during the previous hundred years, growth climbed to about 0.6 percent, accelerating after 1775 and especially from the second decade of the nineteenth century (Johansen 2002). As elsewhere in Northern Europe, the accelerating growth can be ascribed to a decline in mortality, mainly child mortality. Probably this development was initiated by fewer spells of epidemic disease, due to fewer wars and to greater inherited immunity against contagious diseases. Vaccination against smallpox and the formal education of midwives from the early nineteenth century might have played a role (Banggård 2004). Land reforms that entailed some scattering of the farm population may also have had a positive influence. Prices rose from the late eighteenth century in response to the population increase in Northern Europe, but also following a number of international conflicts. This in turn caused a boom in Danish transit shipping and in grain exports.

Population growth rendered the old institutional setup obsolete. Landlords no longer needed to bind labor to their estates, as a new class of landless laborers, or cottagers with little land, emerged. The work of these day-laborers replaced the labor services of tenant farmers on the demesnes. The old system of labor services obviously presented an incentive problem, all the more since the services were often carried out by the tenant farmers’ live-in servants. Thus, the labor days on the demesnes represented a loss to both landlords and tenants (Henriksen 2003). Part of the land rent was originally paid in grain; some of it had been converted to money, which meant that real rents declined during the inflation. The solution to these problems was massive land sales, both from the remaining Crown lands and from private landlords to their tenants. As a result, two-thirds of all Danish farmers became owner-occupiers, compared to only ten percent in the mid-eighteenth century. This development was halted during the next two and a half decades but resumed as the business cycle picked up during the 1840s and 1850s. It was to become of vital importance to the modernization of Danish agriculture towards the end of the nineteenth century that 75 percent of all agricultural land was farmed by owners of middle-sized farms of about 50 acres. Population growth may also have put pressure on common lands in the villages. At any rate, enclosure began in the 1760s, accelerated in the 1790s with the support of legislation, and was almost complete by the third decade of the nineteenth century.

The initiative for the sweeping land reforms from the 1780s is thought to have come from below – that is from the landlords and in some instances also from the peasantry. The absolute monarch and his counselors were, however, strongly supportive of these measures. The desire for peasant land as a tax base weighed heavily and the reforms were believed to enhance the efficiency of peasant farming. Besides, the central government was by now more powerful than in the preceding centuries and less dependent on landlords for local administrative tasks.

Production per capita rose modestly before the 1830s and more markedly thereafter, when a better allocation of labor and land followed the reforms and new crops like clover and potatoes were introduced on a larger scale. Most importantly, the Danes no longer lived at the margin of hunger: we no longer find a correlation between the demographic variables, deaths and births, and bad harvest years (Johansen 2002).

A liberalization of import tariffs in 1797 marked the end of a short spell of late mercantilism. Further liberalizations during the nineteenth and the beginning of the twentieth century established the Danish liberal tradition in international trade that was only to be broken by the protectionism of the 1930s.

Following the loss of the secure Norwegian market for grain in 1814, Danish exports began to target the British market. The great rush forward came when the British Corn Laws were repealed in 1846. The export share of the production value in agriculture rose from roughly 10 to around 30 percent between 1800 and 1870.

In 1849 absolute monarchy was peacefully replaced by a free constitution. The long-term benefits of fundamental principles such as the inviolability of private property rights, the freedom of contracting and the freedom of association were probably essential to future growth though hard to quantify.

Modernization and Convergence, 1870-1914

During this period Danish economic growth outperformed that of most other European countries. The convergence in real wages towards the richest countries, Britain and the U.S., shown by O’Rourke and Williamson (1999), can only in part be explained by open-economy forces. Denmark became a net importer of foreign capital from the 1890s, and foreign debt was well above 40 percent of GDP on the eve of World War I. Overseas emigration reduced the potential workforce, but as mortality declined, population growth stayed around one percent per annum. The increase in foreign trade was substantial, as in many other economies during the heyday of the gold standard. Thus the export share of Danish agriculture surged to 60 percent.

The background for the latter development has featured prominently in many international comparative analyses. Part of the explanation for the success, as in other Protestant parts of Northern Europe, was a high rate of literacy that allowed a fast spread of new ideas and new technology.

The driving force of growth was that of a small open economy responding effectively to a change in international product prices, in this instance caused by the invasion of cheap grain into Western Europe from North America and Eastern Europe. Like Britain, the Netherlands and Belgium, Denmark did not impose a tariff on grain, in spite of the strong agrarian dominance in society and politics.

Proposals to impose tariffs on grain, and later on cattle and butter, were turned down by Danish farmers. The majority seems to have realized the advantage of free imports of cheap animal feed during the ongoing transition from vegetable to animal production, at a time when the prices of animal products did not decline as much as grain prices. The dominant middle-sized farm was inefficient for wheat but had its comparative advantage in intensive animal farming with the given technology. O’Rourke (1997) found that the grain invasion lowered Danish rents by only 4-5 percent, while real wages rose (as expected), and by more than in any other agrarian economy and more than in industrialized Britain.

The move from grain exports to exports of animal products, mainly butter and bacon, was to a great extent facilitated by the spread of agricultural cooperatives. This form of organization allowed the middle-sized and small farms that dominated Danish agriculture to benefit from economies of scale in processing and marketing. The newly invented steam-driven continuous cream separator skimmed more cream from a kilo of milk than conventional methods and had the further advantage of allowing milk collected from a number of suppliers to be skimmed together. From the 1880s the majority of these creameries in Denmark were established as cooperatives, and about 20 years later, in 1903, the owners of 81 percent of all milk cows supplied a cooperative (Henriksen 1999). The Danish dairy industry captured over a third of the rapidly expanding British butter-import market, establishing a reputation for consistent quality that was reflected in high prices. Furthermore, the cooperatives played an active role in persuading dairy farmers to expand production from summer to year-round dairying. The costs of intensive feeding during the wintertime were more than made up for by a winter price premium (Henriksen and O’Rourke 2005). Year-round dairying resulted in a higher rate of utilization of agrarian capital – that is, of farm animals and of the modern cooperative creameries. Not least did this intensive production mean a higher utilization of hitherto underemployed labor. From the late 1890s in particular, labor productivity in agriculture rose at an unanticipated speed, on par with the productivity increase in the urban trades.

Industrialization in Denmark took its modest beginning in the 1870s, with a temporary acceleration in the late 1890s. It may be a prime example of an industrialization process governed by domestic demand for industrial goods. Industrial exports never exceeded 10 percent of value added before 1914, compared to agriculture’s export share of 60 percent. The export drive of agriculture towards the end of the nineteenth century was a major force in developing other sectors of the economy, not least transport, trade and finance.

Weathering War and Depression, 1914-1950

Denmark, as a neutral nation, escaped the devastating effects of World War I and was even allowed to carry on exporting to both sides in the conflict. The ensuing trade surplus resulted in a trebling of the money supply. As the monetary authorities failed to contain the inflationary effects of this development, the value of the Danish currency slumped to about 60 percent of its pre-war value in 1920. The effects of this monetary policy failure were aggravated by the decision to return to the gold standard at the 1913 parity. When monetary policy was finally tightened in 1924, it triggered fierce speculation on an appreciation of the Krone, and during 1925-26 the currency quickly returned to its pre-war parity. As this was not counterbalanced by an equal decline in prices, the result was a sharp real appreciation and a subsequent deterioration in Denmark’s competitive position (Klovland 1998).

Figure 2. Indices of the Krone Real Exchange Rate and Terms of Trade (1980=100; real rates based on Wholesale Price Index)

Source: Abildgren (2005)

Note: Trade with Germany is included in the calculation of the real effective exchange rate for the whole period, including 1921-23.
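The mechanics of a real appreciation, as in 1925-26 and again after 1960, can be sketched with a simple calculation. The numbers below are invented purely for illustration and are not Abildgren's series:

```python
def real_exchange_rate(nominal_rate, domestic_prices, foreign_prices):
    """Real exchange rate index: the nominal rate adjusted for relative
    price levels. A rise means a real appreciation, i.e. a loss of
    international competitiveness."""
    return nominal_rate * domestic_prices / foreign_prices

# Hypothetical illustration: the currency climbs from 60 back to 100
# percent of parity, while domestic prices fall only 10 percent
# relative to foreign prices.
before = real_exchange_rate(nominal_rate=60, domestic_prices=100, foreign_prices=100)
after = real_exchange_rate(nominal_rate=100, domestic_prices=90, foreign_prices=100)
print(round(after / before, 2))  # prints 1.5: a 50 percent real appreciation
```

Because prices did not fall in step with the nominal appreciation, the real rate rises sharply, which is the sense in which the 1925-26 return to parity hurt Danish competitiveness.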

When, in September 1931, Britain decided to leave the gold standard again, Denmark, together with Sweden and Norway, followed only a week later. This move was beneficial, as the large real depreciation led to a long-lasting improvement in Denmark’s competitiveness in the 1930s. It was, no doubt, the single most important policy decision during the depression years. Keynesian demand management, even if it had been fully understood, was barred by a small public sector, only about 13 percent of GDP. As it was, fiscal orthodoxy ruled and policy was slightly procyclical, as taxes were raised to cover the deficit created by crisis and unemployment (Topp 1995).

Structural development during the 1920s, surprisingly for a rich nation at this stage, was in favor of agriculture. The total labor force in Danish agriculture grew by 5 percent from 1920 to 1930: the number of hired workers stagnated, while the number of self-employed farmers increased. The development in relative incomes cannot account for this trend; part of the explanation must instead be found in a flawed Danish land policy, which actively supported the further parceling out of land into smallholdings and restricted consolidation into larger, more viable farms. It took until the early 1960s before this policy began to be unwound.

When the world depression hit Denmark with a minor time lag, agriculture still employed one-third of the total workforce while its contribution to total GDP was a bit less than one-fifth. Perhaps more importantly, agricultural goods still made up 80 percent of total exports.

Denmark’s terms of trade, as a consequence, declined by 24 percent from 1930 to 1932. In 1933 and 1934 bilateral trade agreements were forced upon Denmark by Britain and Germany. In 1932 Denmark had adopted exchange control, a harsh measure even for its time, to stem the net flow of foreign exchange out of the country. By rationing imports, exchange control also offered some protection to domestic industry. At the end of the decade manufacturing’s share of GDP had surpassed that of agriculture. In spite of the protectionist policy, unemployment soared to 13-15 percent of the workforce.

The policy mistakes during World War I and its immediate aftermath served as a lesson for policymakers during World War II. The German occupation force (April 9, 1940 until May 5, 1945) drew the funds for its sustenance and for exports to Germany on the Danish central bank, whereby the money supply more than doubled. In response, the Danish authorities in 1943 launched a policy of absorbing money through open market operations and, for the first time in history, through a surplus on the state budget.

Economic reconstruction after World War II was swift, as Denmark had again been spared the worst consequences of a major war. By 1946 GDP had recovered to its highest pre-war level. In spite of this, Denmark received relatively generous support through the Marshall Plan of 1948-52, when measured in dollars per capita.

From Riches to Crisis, 1950-1973: Liberalizations and International Integration Once Again

The growth performance during 1950-1957 was markedly lower than the Western European average. The main reason was the high share of agricultural goods in Danish exports, 63 percent in 1950, at a time when international trade in agricultural products remained largely regulated. Large deteriorations in the terms of trade – caused by the British devaluation of 1949, which Denmark followed, the outbreak of the Korean War in 1950, and the Suez crisis of 1956 – made matters worse. The ensuing deficits on the balance of payments led the government to adopt contractionary policy measures, which restrained growth.

The liberalization of the flow of goods and capital in Western Europe within the framework of the OEEC (Organization for European Economic Cooperation) during the 1950s probably dealt a blow to some Danish manufacturing firms, especially in the textile industry, that had been sheltered by exchange control and wartime restrictions. Nevertheless, the export share of industrial production doubled from 10 percent to 20 percent before 1957, at the same time as employment in industry surpassed agricultural employment.

On the question of European economic integration, Denmark linked up with its largest trading partner, Britain. After the establishment of the European Common Market in 1958, and when the attempts to create a large European free trade area failed, Denmark entered the European Free Trade Association (EFTA) created under British leadership in 1960. When Britain was finally able to join the European Economic Community (EEC) in 1973, Denmark followed, after a referendum on the issue. Long before admission to the EEC, the advantages to Danish agriculture from the Common Agricultural Policy (CAP) had been emphasized. The higher prices within the EEC were capitalized into higher land prices at the same time that investments were increased in anticipation of the gains from membership. As a result, the most indebted farmers, who had borrowed at fixed interest rates, were hit hard by two developments from the early 1980s: the EEC started to reduce the producers’ benefits of the CAP because of overproduction, and, after 1982, the Danish economy adjusted to a lower level of inflation and, therefore, of nominal interest rates. According to Andersen (2001), Danish farmers were left with the highest interest burden of all European Union (EU) farmers in the 1990s.

Denmark’s relations with the EU, while enthusiastic at the beginning, have since been characterized by a certain amount of reserve. A national referendum in 1992 turned down the treaty on the European Union, the Maastricht Treaty. The Danes then opted out of four areas: common citizenship, a common currency, common foreign and defense policy, and a common policy on police and legal matters. Once more, in 2000, adoption of the common currency, the Euro, was turned down by the Danish electorate. In the debate leading up to the referendum, the possible economic advantages of the Euro in the form of lower transaction costs were considered modest compared to the existing regime of fixed exchange rates vis-à-vis the Euro. All the major political parties are nevertheless pro-European, with only the extreme Right and the extreme Left against. There seems to be a discrepancy between the general public and the politicians on this particular issue.

As far as domestic economic policy is concerned, the heritage from the 1940s was a new commitment to high employment, modified by a balance-of-payments constraint. Danish policy differed from that of some other parts of Europe in that the remains of the planned economy from the war and reconstruction period, in the form of rationing and price control, were dismantled around 1950, and in that no nationalizations took place.

Instead of direct regulation, economic policy relied on demand management with fiscal policy as its main instrument. Monetary policy remained a bone of contention between politicians and economists. Coordination of policies was the buzzword but within that framework monetary policy was allotted a passive role. The major political parties for a long time were wary of letting the market rate of interest clear the loan market. Instead, some quantitative measures were carried out with the purpose of dampening the demand for loans.

From Agricultural Society to Service Society: The Growth of the Welfare State

Structural problems in foreign trade extended into the high-growth period of 1958-73, as Danish agricultural exports met constraints from the then EEC-member countries and from most EFTA countries as well. During the 1960s, as the importance of agriculture declined, the share of employment in the public sector grew rapidly, a growth that continued until 1983. Building and construction also took a growing share of the workforce until 1970. These developments left manufacturing industry in a secondary position. Consequently, as pointed out by Pedersen (1995), the sheltered sectors of the economy crowded out the sectors exposed to international competition – mostly industry and agriculture – by putting pressure on labor and other costs during the years of strong expansion.

Perhaps the most conspicuous feature of the Danish economy during the Golden Age was the steep increase in welfare-related costs from the mid 1960s and not least the corresponding increases in the number of public employees. Although the seeds of the modern Scandinavian welfare state were sown at a much earlier date, the 1960s was the time when public expenditure as a share of GDP exceeded that of most other countries.

As in other modern welfare states, important elements in the growth of the public sector during the 1960s were the expansion of public health care and education, both free for all citizens. Behind much of the increase in the number of public employees from the late 1960s was the rise in labor participation by married women from the late 1960s until about 1990, itself at least partly a consequence of the expanding public sector. In response, public day care facilities for young children and old people were expanded. Whereas in 1965 only 7 percent of 0-6 year olds were in a day nursery or kindergarten, this share rose to 77 percent in 2000. This again spawned more employment opportunities for women in the public sector. Today labor participation among women, around 75 percent of 16-66 year olds, is among the highest in the world.

Originally social welfare programs targeted low income earners who were encouraged to take out insurance against sickness (1892), unemployment (1907) and disability (1922). The public subsidized these schemes and initiated a program for the poor among old people (1891). The high unemployment period in the 1930s inspired some temporary relief and some administrative reform, but little fundamental change.

Welfare policy in the first four decades following World War II is commonly believed to have been strongly influenced by the Social Democrat party, which held around 30 percent of the votes in general elections and was the party in power for long periods of time. One of the distinctive features of the Danish welfare state has been its focus on the needs of the individual person rather than on the family context. Another important characteristic is the universal nature of a number of benefits, starting with a basic old age pension for all in 1956. The compensation rates in a number of schemes are high by international standards, particularly for low income earners. Public transfers gained a larger share of total public outlays both because standards were raised – that is, benefits became higher – and because the number of recipients increased dramatically during the high-unemployment regime from the mid 1970s to the mid 1990s. To pay for the high transfers and the large public sector – around 30 percent of the workforce – the tax load is also high by international standards. The share of public sector and social expenditure has risen to above 50 percent of GDP, second only to Sweden.

Figure 3. Unemployment, Denmark (percent of total labor force)

Source: Statistics Denmark ‘50 års-oversigten’ and ADAM’s databank

The Danish labor market model has recently attracted favorable international attention (OECD 2005). It has been declared successful in fighting unemployment – especially compared to the policies of countries like Germany and France. The so-called Flexicurity model rests on three pillars: low employment protection, relatively high compensation rates for the unemployed, and the requirement of active participation by the unemployed. Low employment protection has a long tradition in Denmark, and there is no change in this factor when comparing the twenty years of high unemployment – 8-12 percent of the labor force – from the mid 1970s to the mid 1990s with the past ten years, in which unemployment has declined to a mere 4.5 percent in 2006. The rules governing compensation to the unemployed were tightened from 1994, limiting the number of years the unemployed could receive benefits from 7 to 4. Most notably, labor market policy in 1994 turned from ‘passive’ measures – besides unemployment benefits, an early retirement scheme and a temporary paid leave scheme – toward ‘active’ measures devoted to getting people back to work by providing training and jobs. It is commonly supposed that the strengthening of economic incentives helped to lower unemployment. However, as Andersen and Svarer (2006) point out, while unemployment has declined substantially, a large and growing share of Danes of employable age receives transfers other than unemployment benefits – that is, benefits related to sickness or social problems of various kinds, early retirement benefits, etc. This makes it hazardous to compare the Danish labor market model with that of many other countries.

Exchange Rates and Macroeconomic Policy

Denmark has traditionally adhered to a fixed exchange rate regime, in the belief that for a small and open economy a floating exchange rate could be very volatile and would harm foreign trade. After abandoning the gold standard in 1931, the Danish currency (the Krone) was for a while pegged to the British pound, only to join the IMF system of fixed but adjustable exchange rates, the so-called Bretton Woods system, after World War II. The close link with the British economy still manifested itself when the Danish currency was devalued along with the pound in 1949 and, halfway, in 1967. The 1967 devaluation also reflected the fact that, after 1960, Denmark’s international competitiveness had gradually been eroded by rising real wages, corresponding to a 30 percent real appreciation of the currency (Pedersen 1996).

When the Bretton Woods system broke down in the early 1970s, Denmark joined the European exchange rate cooperation, the “Snake” arrangement, set up in 1972, an arrangement that was to be continued in the form of the Exchange Rate Mechanism within the European Monetary System from 1979. The Deutschmark was effectively the nominal anchor in European currency cooperation until the launch of the Euro in 1999, a fact that put Danish competitiveness under severe pressure because of markedly higher inflation in Denmark compared to Germany. In the end the Danish government gave way before the pressure and undertook four discrete devaluations from 1979 to 1982. Since compensatory increases in wages were held back, the balance of trade improved perceptibly.

This improvement could not, however, make up for the soaring costs of old loans at a time when international real rates of interest were high. The Danish devaluation strategy exacerbated this problem: the anticipation of further devaluations was mirrored in a steep increase in the long-term rate of interest, which peaked at 22 percent in nominal terms in 1982, with an interest spread to Germany of 10 percentage points. Combined with the effects of the second oil crisis on the Danish terms of trade, unemployment rose to 10 percent of the labor force. Given the relatively high compensation ratios for the unemployed, the public deficit increased rapidly and public debt grew to about 70 percent of GDP.

Figure 4. Current Account and Foreign Debt (Denmark)

Source: Statistics Denmark Statistical Yearbooks and ADAM’s Databank

In September 1982 the Social Democrat minority government resigned without a general election and was replaced by a Conservative-Liberal minority government. The new government launched a program to improve the competitiveness of the private sector and to rebalance public finances. An important element was a disinflationary economic policy based on fixed exchange rates, pegging the Krone to the participants of the EMS and, from 1999, to the Euro. Furthermore, automatic wage indexation, which had operated with short interruptions since 1920 (with a short lag and high coverage), was abolished. Fiscal policy was tightened, bringing an end to the real increases in public expenditure that had lasted since the 1960s.

The stabilization policy was successful in bringing down inflation and long-term interest rates. Pedersen (1995) finds that this process was nevertheless slower than might have been expected: in view of earlier Danish exchange rate policy, it took some time for the market to believe in the commitment to fixed exchange rates. From the late 1990s, however, the interest spread to Germany/Euroland has been negligible.

The initial success of the stabilization policy brought a boom to the Danish economy that, once again, caused overheating in the form of high wage increases (in 1987) and a deterioration of the current account. The solution was a number of reforms in 1986-87 aimed at encouraging private savings, which had by then fallen to an historical low. Most notable was the reform that reduced the tax deductibility of interest on private debt. These measures resulted in a hard landing for the economy, caused by the collapse of the housing market.

The period of low growth was further prolonged by the international recession in 1992. In 1993 yet another regime shift occurred in Danish economic policy: a new Social Democrat government decided to ‘kick start’ the economy by means of a moderate fiscal expansion, whereas in 1994 the same government tightened labor market policies substantially, as we have seen. Mainly as a consequence of these measures, the Danish economy from 1994 entered a period of moderate growth, with unemployment steadily falling to the level of the 1970s. A feature that still puzzles Danish economists is that the decline in unemployment over these years has not yet resulted in any increase in wage inflation.

Denmark at the beginning of the twenty-first century in many ways fits Mokyr’s (2006) description of a Small Successful European Economy. Unlike most of the other small economies, however, Denmark has broad-based exports with no particular “niche” in the world market. As in some other small European countries – Ireland, Finland and Sweden – short-term economic fluctuations have not followed the European business cycle very closely for the past thirty years (Andersen 2001). Domestic demand and domestic economic policy have, after all, played a crucial role even in a very small and very open economy.

References

Abildgren, Kim. “Real Effective Exchange Rates and Purchasing-Power-parity Convergence: Empirical Evidence for Denmark, 1875-2002.” Scandinavian Economic History Review 53, no. 3 (2005): 58-70.

Andersen, Torben M. et al. The Danish Economy: An international Perspective. Copenhagen: DJØF Publishing, 2001.

Andersen, Torben M. and Michael Svarer. “Flexicurity: den danska arbetsmarknadsmodellen.” Ekonomisk debatt 34, no. 1 (2006): 17-29.

Banggaard, Grethe. Befolkningsfremmende foranstaltninger og faldende børnedødelighed. Danmark, ca. 1750-1850. Odense: Syddansk Universitetsforlag, 2004.

Hansen, Sv. Aage. Økonomisk vækst i Danmark: Volume I: 1720-1914 and Volume II: 1914-1983. København: Akademisk Forlag, 1984.

Henriksen, Ingrid. “Avoiding Lock-in: Cooperative Creameries in Denmark, 1882-1903.” European Review of Economic History 3, no. 1 (1999): 57-78.

Henriksen, Ingrid. “Freehold Tenure in Late Eighteenth-Century Denmark.” Advances in Agricultural Economic History 2 (2003): 21-40.

Henriksen, Ingrid and Kevin H. O’Rourke. “Incentives, Technology and the Shift to Year-round Dairying in Late Nineteenth-century Denmark.” Economic History Review 58, no. 3 (2005): 520-54.

Johansen, Hans Chr. Danish Population History, 1600-1939. Odense: University Press of Southern Denmark, 2002.

Johansen, Hans Chr. Dansk historisk statistik, 1814-1980. København: Gyldendal, 1985.

Klovland, Jan T. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 3 (1998): 309-44.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Mokyr, Joel. “Successful Small Open Economies and the Importance of Good Institutions.” In The Road to Prosperity. An Economic History of Finland, edited by Jari Ojala, Jari Eloranta and Jukka Jalava, 8-14. Helsinki: SKS, 2006.

Pedersen, Peder J. “Postwar Growth of the Danish Economy.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. Cambridge: Cambridge University Press, 1995.

OECD. Employment Outlook. Paris: OECD, 2005.

O’Rourke, Kevin H. “The European Grain Invasion, 1870-1913.” Journal of Economic History 57, no. 4 (1997): 775-99.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Topp, Niels-Henrik. “Influence of the Public Sector on Activity in Denmark, 1929-39.” Scandinavian Economic History Review 43, no. 3 (1995): 339-56.


Footnotes

1 Denmark also includes the Faeroe Islands, with home rule since 1948, and Greenland, with home rule since 1979, both in the North Atlantic. These territories are left out of this account.

Citation: Henriksen, Ingrid. “An Economic History of Denmark”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2006. URL http://eh.net/encyclopedia/an-economic-history-of-denmark/

The United States Public Debt, 1861 to 1975

Franklin Noll, Ph.D.

Introduction

On January 1, 1790, the United States’ public debt stood at $52,788,722.03 (Bayley 31). It consisted of the debt of the Continental Congress and $191,608.81 borrowed by Secretary of the Treasury Alexander Hamilton in the spring of 1789 from New York banks to meet the new government’s first payroll (Bayley 108). Since then the public debt has passed a number of historical milestones: the assumption of Revolutionary War debt in August 1790, the redemption of the debt in 1835, the financing innovations arising from the Civil War in 1861, the introduction of war loan drives in 1917, the rise of deficit spending after 1932, the lasting expansion of the debt from World War II, and the passage of the Budget Control Act in 1975. (The late 1990s may mark another point of significance in the history of the public debt, but it is still too soon to tell.) This short study examines the public debt between the Civil War and the Budget Control Act, the period in which the foundations of our present public debt of over $7 trillion were laid. (See Figure 1.) We start our investigation by asking, “What exactly is the public debt?”

Source: Nominal figures from “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63 and Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm. Real figures adjust for inflation. These figures and conversion factors provided by Robert Sahr.

Definitions

Throughout its history, the Treasury has recognized various categories of government debt. The oldest category and the largest in size is the public debt. The public debt, simply put, is all debt for which the government of the United States is wholly liable. In turn, the general public is ultimately responsible for such debt through taxation. Some authors use the terms federal debt and national debt interchangeably with public debt. From the viewpoint of the United States Treasury, this is incorrect.

Federal debt, as defined by the Treasury, is the public debt plus debt issued by government-sponsored agencies for their own use. The term first appears in 1973 when it is officially defined as including “the obligations issued by Federal Government agencies which are part of the unified budget totals and in which there is an element of Federal ownership, along with the marketable and nonmarketable obligations of the Department of the Treasury” (Annual Report of the Secretary of the Treasury, 1973: 13). Put more succinctly, federal debt is made up of the public debt plus contingent debt. The government is partially or, more precisely, contingently liable for the debt of government-sponsored enterprises for which it has pledged its guarantee. On the contingency that a government-sponsored enterprise such as the Government National Mortgage Association ever defaults on its debt, the United States government becomes liable for the debt.

National debt, though a popular term and used by Alexander Hamilton, has never been technically defined by the Treasury. The term suggests that one is referring to all debt for which the government could be liable–wholly or in part. During the period 1861 to 1975, the debt for which the government could be partially or contingently liable has included that of government-sponsored enterprises, railroads, insular possessions (Puerto Rico and the Philippines), and the District of Columbia. Taken together, these categories of debt could be considered the true national debt which, to my knowledge, has never been calculated.

Structure

But it is the public debt–only that debt for which the government is wholly liable–which has been totaled and mathematically examined in a myriad of ways by scholars and pundits. Yet, very few have broken down the public debt into its component parts of marketable and nonmarketable debt instruments: those securities, such as bills, bonds, and notes that make up the basis of the debt. In a simplified form, the structure of the public debt is as follows:

  • Interest-bearing debt
    • Marketable debt
      • Treasuries
    • Nonmarketable debt
      • Depositary Series
      • Foreign Government Series
      • Government Account Series
      • Investment Series
      • REA Series
      • SLG Series
      • US Savings Securities
  • Matured debt
  • Debt bearing no interest

Though the elements of the debt varied over time, this basic structure remained constant from 1861 to 1975 and into the present. As we investigate further the elements making up the structure of the public debt, we will focus on information from 1975, the last year of our study. By doing so, we can see the debt at its largest and most complex for the period 1861 to 1975 and in a structure most like that currently held by the public debt. It was also in 1975 that the Bureau of the Public Debt’s accounting and reporting of the public debt took on its present form.

Some Financial Terms

Bearer Security
A bearer security is one in which ownership is determined solely by possession: whoever bears the security owns it.
Callable
The term callable refers to whether and under what conditions the government has the right to redeem a debt issue prior to its maturity date. The date at which a security can be called by the government for redemption is known as its call date.
Coupon
A coupon is a detachable part of a security that bears the interest payment date and the amount due. The bearer of the security detaches the appropriate coupon and presents it to the Treasury for payment. Coupon is synonymous with interest in financial parlance: the coupon rate refers to the interest rate.
Coupon Security
A coupon security is any security that has attached coupons, and usually refers to a bearer security.
Discount
The term discount refers to the sale of a debt instrument at a price below its face or par value.
Liquidity
A security is liquid if it can be easily bought and sold in the secondary market or easily converted to cash.
Maturity
The maturity of a security is the date at which it becomes payable in full.
Negotiable
A negotiable security is one that can be freely sold or transferred to another holder.
Par
Par is the nominal dollar amount assigned to a security by the government. It is the security’s face value.
Premium
The term premium refers to the sale of a debt instrument at a price above its face or par value.
Registered Security
A registered security is one in which the owner of the security is recorded by the Bureau of the Public Debt. Usually both the principal and interest are registered, making the security non-negotiable or non-transferable.
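
The pricing terms above (par, discount, premium) can be illustrated with a minimal sketch. All dollar figures here are invented for illustration, not drawn from Treasury records.

```python
# Hypothetical illustration of par, discount, and premium pricing.
def classify_price(price, par):
    """Classify a sale price relative to the security's par (face) value."""
    if price < par:
        return "discount"
    if price > par:
        return "premium"
    return "at par"

# Invented examples for a $1,000-par security:
assert classify_price(980.00, 1000.00) == "discount"
assert classify_price(1020.00, 1000.00) == "premium"
assert classify_price(1000.00, 1000.00) == "at par"
```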

Interest-Bearing Debt, Matured Debt, and Debt Bearing No Interest

This major division in the structure of the public debt is fairly self-explanatory. Interest-bearing debt contains all securities that carry an obligation on the part of the government to pay interest to the security’s owner on a regular basis. These debt instruments have not reached maturity. Almost all of the public debt falls into the interest-bearing debt category. (See Figure 2.) Securities that are past maturity (and therefore no longer paying interest) but have not yet been redeemed by their holders fall within the category of matured debt. This is an extremely small part of the total public debt. In the category of debt bearing no interest are securities that are non-negotiable and non-interest-bearing, such as Special Notes of the United States issued to the International Monetary Fund. Securities in this category are often issued for one-time or extraordinary purposes. Also in the category are obsolete forms of currency such as fractional currency, legal tender notes, and silver certificates. In total, old currency made up only 0.114% of the public debt in 1975. The Federal Reserve Notes which have been issued since 1914 and which we deal with on a daily basis are obligations of the Federal Reserve and thus not part of the public debt.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63.

During the period under study, the value of outstanding matured debt generally grew with the overall size of the debt, except for a spike in the amount of unredeemed securities in the mid- and late 1950s. (See Figure 3.) This was caused by the maturation of United States Savings Bonds bought during World War II. Many of these war bonds lay forgotten in people’s safe-deposit boxes for years. Wartime purchases of Defense Savings Stamps and War Savings Stamps account for much of the sudden increase in debt bearing no interest from 1943 to 1947. (See Figure 4.) The year 1947 saw the United States issuing non-interest-paying notes to fund the establishment of the International Monetary Fund and the International Bank for Reconstruction and Development (part of the World Bank). Because interest-bearing debt makes up over 99% of the public debt, it is essentially equivalent to it. (See Figure 5.) The history of the overall public debt will be examined later.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63.

Marketable Debt and Nonmarketable Debt

Interest-bearing debt is divided between marketable debt and nonmarketable debt. Marketable debt consists of securities that can be easily bought and sold in the secondary market. The Treasury has used the term since World War II to describe issues that are available to the general public in registered or bearer form without any condition of sale. Nonmarketable debt refers to securities that cannot be bought and sold in the secondary market, though there are rare exceptions. Generally, nonmarketable government securities may only be bought from or sold to the Treasury. They are issued in registered form only and/or can be bought only by government agencies, specific business enterprises, or individuals under strict conditions.

The growth of the marketable debt largely mirrors that of total interest-bearing debt; until 1918, there was no such thing as nonmarketable debt. (See Figure 6.) Nonmarketable debt arose in fiscal year 1918, when securities were sold to the Federal Reserve in an emergency move to raise money as the United States entered World War I. This was the first sale of “special issues,” as nonmarketable debt securities were classified prior to World War II. Special or nonmarketable issues continued through the interwar period and grew with the establishment of government programs. Such securities were sometimes issued by the Treasury in the name of a government fund or program and were then bought by the Treasury. In effect, the Treasury extended a loan to the government entity. More often, the Treasury would sell a special security to the government fund or program for cash, creating a loan to the Treasury and an investment vehicle for the government entity. As the number of government programs grew and the size of government funds (like those associated with Social Security) expanded, so did the number and value of nonmarketable securities, greatly contributing to the rapid growth of nonmarketable debt. By 1975, these intragovernment securities combined with United States Savings Bonds helped make nonmarketable debt 40% of the total public debt. (See Figure 7.)

Source: The following were used to calculate outstanding marketable debt: Data for 1861 to 1880 derived from Rafael A. Bayley, The National Loans of the United States from July 4, 1776, to June 30, 1880, second edition, facs rpt (New York: Burt Franklin, 1970 [1881]), 180-84 and Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1861), 44. Post-1880 numbers derived from “Analysis of the Principal of the Interest-Bearing Public Debt of the United States from July 1, 1856 to July 1, 1912,” idem (1912), 102-03; “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71. The marketable debt figures were then subtracted from total outstanding interest bearing debt to obtain nonmarketable figures.

Source: “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

Marketable Debt Securities: Treasuries

The general public is most familiar with those marketable debt instruments falling within the category of Treasury securities, more popularly known as simply Treasuries. These securities can be bought by anyone and have active secondary markets. The most commonly issued Treasuries between 1861 and 1975 are the following, listed in order of length of time to maturity, shortest to longest:

Treasury certificate of indebtedness
A couponed, short-term, interest-bearing security. It can have a maturity of as little as one day or as long as five years. Maturity is usually between 3 and 12 months. These securities were largely replaced by Treasury bills.
Treasury bill
A short-term security issued on a discount basis rather than at par. The price is determined by competitive bidding at auction. They have a maturity of a year or less and are usually sold on a weekly basis with maturities of 13 weeks and 26 weeks. They were first issued in December 1929.
Treasury note
A couponed, interest-bearing security that generally matures in 2 to 5 years. In 1968, the Treasury began to issue 7-year notes, and in 1976, the maximum maturity of Treasury notes was raised to 10 years.
Treasury bond
A couponed, interest-bearing security that normally matures after 10 or more years.
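
Because Treasury bills are sold at a discount rather than with coupons, the buyer’s return is the difference between the discounted purchase price and the face value paid at maturity. The sketch below uses the conventional bank-discount formula (a 360-day year); the dollar amounts and rate are hypothetical, not historical auction results.

```python
# Sketch of discount pricing for a Treasury bill (bank-discount
# convention: price = face * (1 - rate * days / 360)).
# The numbers below are hypothetical, not historical auction results.
def bill_price(face, discount_rate, days_to_maturity):
    """Purchase price of a bill sold at a discount from face value."""
    return face * (1 - discount_rate * days_to_maturity / 360)

# A hypothetical 13-week (91-day), $10,000-face bill at a 4% discount rate:
price = bill_price(10_000, 0.04, 91)   # about $9,898.89
gain_at_maturity = 10_000 - price      # the investor's return
```

The absence of coupons and of a pre-set interest rate is why, as discussed below, bills proved cheaper for the Treasury to issue than certificates of indebtedness.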

The story of these securities between 1861 and 1975 is one of a general movement by the Treasury to issue ever more securities in the shorter maturities: certificates of indebtedness, bills, and notes. Until World War I, the security of preference was the bond with a call date before maturity. (See Figure 8.) Such an instrument provided the minimum attainable interest rate for the Treasury and was in demand as a long-term investment vehicle by investors. The pre-maturity call date allowed the Treasury the flexibility to redeem the bonds during a period of surplus revenue. Between 1861 and 1917, certificates of indebtedness were issued on occasion to manage cash flow through the Treasury, and notes were issued only during the financial crisis years of the Civil War.

Source: Franklin Noll, A Guide to Government Obligations, 1861-1976, unpublished ms., 2004.

In terms of both numbers and values, the change to shorter-maturity Treasury securities began with World War I. Unprepared for the financial demands of the war, the Treasury was perennially short of cash and issued a great number of certificates of indebtedness and short-term notes. A market developed for these securities, and they were issued throughout the interwar period to meet cash demands and refund the remaining World War I debt. While the number of bonds issued rose in the World War I and World War II years, the value of bonds issued was in steep decline by the late 1960s, and by 1975 bond issues had become rare. (See Figure 9.) In part, this was the effect of interest rates moving beyond statutory limits set on the interest rate the Treasury could pay on long-term securities. The primary reason for the decline of the bond, however, was post-World War II economic growth and inflation that drove up interest rates and established expectations of rising inflation. In such conditions, shorter-term securities were more in favor with investors, who sought to ride the rising tide of interest rates and keep their financial assets as liquid as possible. Correspondingly, the number and value of notes and bills rose throughout the postwar years. Certificates of indebtedness declined as they were replaced by bills. Treasury bills won out because they were easier, and therefore less expensive, for the Treasury to issue than certificates of indebtedness: bills required no predetermination of interest rates or servicing of coupon payments.

Source: Data for 1861 to 1880 derived from Rafael A. Bayley, The National Loans of the United States from July 4, 1776, to June 30, 1880, second edition, facs rpt (New York: Burt Franklin, 1970 [1881]), 180-84 and Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1861), 44. Post-1880 numbers derived from “Analysis of the Principal of the Interest-Bearing Public Debt of the United States from July 1, 1856 to July 1, 1912,” idem (1912), 102-03; “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

Nonmarketable Debt Securities

Securities sold as nonmarketable debt come in the forms above–certificate of indebtedness, bill, note, and bond. Most, but not all, nonmarketable securities fall into these series or categories:

Depositary Series
Made up of depositary bonds held by depositary banks. These are banks that provide banking facilities for the Treasury. Depositary bonds act as collateral for the Treasury funds deposited at the bank. The interest on these collateral securities provides the banks with income for the services rendered.
Foreign Government Series
The group of Treasury securities sold to foreign governments or used in foreign exchange stabilization operations.
Government Account Series
Refers to all types of securities issued to or by government accounts and trust funds.
Investment Series
Contains Treasury Bond, Investment Series securities sold to institutional investors.
REA Series
Rural Electrification Administration Series securities are sold to recipients of Rural Electrification Administration loans who have unplanned excess loan money. Holding the excess funds in the form of bonds gives the borrower the capacity to cash in the bonds and retrieve the unused loan funds without the need for negotiating a new loan.
SLG Series
State and Local Government Series securities were first issued in 1972 to help state and municipal governments meet federal arbitrage restrictions.
US Savings Securities
United States Savings Securities refers to a group of securities consisting of savings stamps and bonds (most notably United States Savings Bonds) aimed at small, non-institutional investors.

A number of nonmarketable securities fall outside these series. The special issue securities sold to the Federal Reserve in 1917 (the first securities recognized as nonmarketable) and mentioned above do not fit into any of these categories; nor do securities providing tax advantages, like Mortgage Guaranty Insurance Company Tax and Loss Bonds or Special Notes of the United States issued on behalf of the International Monetary Fund. Treasury reports are, in fact, frustratingly full of anomalies and contradictions. One major anomaly is Postal Savings Bonds. First issued in 1911, Postal Savings Bonds were United States Savings Securities that were bought by depositors in the now defunct Postal Savings System. These bonds, unlike United States Savings Bonds, were fully marketable and could be bought and sold on the open market. As savings securities, they are included in the nonmarketable United States Savings Security series even though they are marketable. (It is to include these anomalous securities that we begin the graphs below in 1910.)

The United States Savings Security Series and the Government Account Series were the most significant in the growth of the nonmarketable debt component of the public debt. (See Figure 10.) The real rise in savings securities began with the introduction of the nonmarketable United States Savings Bonds in 1935. The bond drives of World War II established these savings bonds in the American psyche and in small investors’ portfolios. Securities issued for the benefit of government funds or programs began in 1925 and, as in the case of savings securities, really took off with the stimulus of World War II. The growth of government and government programs continued to stimulate the growth of the Government Account Series, making it the largest part of nonmarketable debt by 1975. (See Figure 13.)

Source: Various tables and exhibits, Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1910-1932); “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

The Depositary, REA, and SLG series were of minor importance throughout the period, with depositary bonds declining because their fixed interest rate of 2% became increasingly uncompetitive with the rise in inflation. (See Figure 11.) As the Investment Series was tied to a single security, it declined with the gradual redemptions of Treasury Bond, Investment Series securities. (See Figure 12.) The Foreign Government Series grew with escalating efforts to stabilize the value of the dollar in foreign exchange markets. (See Figure 12.)

Source: “Description of Public Debt Issues Outstanding, June 30, 1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 88-112.

History of the Public Debt

While we have examined the development of the various components of the public debt, we have yet to consider the public debt as a whole. Quite a few writers in the recent past have commented on the ever-growing size of the public debt. Many were concerned that the public debt figures were becoming astronomical in size and that there was no end in sight to continued growth as perennial budget deficits forced the government to keep borrowing money. Such fears are not entirely new to our country. In the Civil War, World War I, and World War II, people were astounded at the unprecedented heights reached by the public debt during wartime. What changed during World War II (and maybe a bit before) was the assumption that the public debt would decrease once the present crisis was over. The pattern in America’s past was that after each war every effort would be made to pay off the accumulated debt as quickly as possible. Thus we find declines in the total public debt after the Civil War, World War I, and World War II. (See Figures 14 and 15.) Until the United States’ entry into World War I, the public debt never exceeded $3 billion (see Figure 14); and the debt would probably have returned to near this level after World War I if the Great Depression and World War II had not intervened. Yet, the last contraction of the public debt between 1861 and 1975 occurred in 1957. (See Figures 15 and 18.) Since then, the debt has grown at an ever-increasing rate. Why?

The period 1861 to 1975 roughly divides into two eras and two corresponding philosophies on the public debt. From 1861 to 1932, government officials basically followed traditional precepts of public debt management, pursuing balanced budgets and paying down any debt as quickly as possible (Withers, 35-42). We will label these officials traditionalists. To oversimplify, for traditionalists the economy was not to be meddled with by the government, as no good would come from it. The ups and downs of business cycles were natural phenomena that had to be endured and, when possible, provided for through the accumulation of budget surpluses. These views of national finance and the public debt held sway before the Great Depression and lingered on into the 1950s (Conklin, 234). But it was during the Great Depression and the first term of President Franklin Roosevelt that we see an acceptance of what was then called “new economics” and would later be called Keynesianism. Basically, “new” economists believed that the business cycle could be counteracted through government intervention in the economy (Withers, 32). During economic downturns, the government could dampen the down cycle by stimulating the economy through lower taxes, increased government spending, and an expanded money supply. As the economy recovered, these stimulants would be reversed to dampen the up cycle of the economy. These beliefs gained ever greater currency over time, and we will designate the period 1932 to 1975 the New Era.

The Traditional Era, 1861-1932

(This discussion focuses on Figures 14 and 16; also see Figures 18, 19, and 20.) In 1861, the public debt stood at roughly $65 million. At the end of the Civil War the debt was some 42 times greater, at $2,756 million, and the country was off the gold standard. The Civil War was paid for by a new personal income tax, massive bond issues, and the printing of currency, popularly known as Greenbacks. Once the war was over, there was a drive to return to the status quo ante bellum: a return to the gold standard, a pay down of the public debt, and the retirement of Greenbacks. The period 1866 to 1893 saw 28 continuous years of budget surpluses, with revenues pouring in from tariffs and land sales in the West. During that time, successive Secretaries of the Treasury redeemed public debt securities to the greatest extent possible, often buying securities at a premium in the open market. The debt declined continuously until 1893, to a low of $961 million, with a brief exception in the late 1870s as the country dealt with the recessionary aftereffects of the Panic of 1873 and the controversy regarding resumption of the gold standard in 1879. The Panic of 1893 and a decline in tariff revenues brought a period of budget deficits and slightly raised the public debt from its 1893 low to a steady average of around $1,150 million in the years leading up to World War I. The first war-bond drives occurred during World War I. With the aid of the recently established Federal Reserve, the Treasury held four Liberty Loan drives and one Victory Loan drive. The Treasury also introduced low-cost savings certificates and stamps to attract the smallest investor: for 25 cents, one could aid the war effort by buying a Thrift Stamp. As at the end of previous wars, once World War I ended there was a concerted drive to pay down the debt. By 1931, the debt was reduced to $16,801 million from a wartime high of $25,485 million.
The first budget deficit since the end of the war also appeared in 1931, marking the deepening of the Great Depression and a move away from the fiscal orthodoxy of the past.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63.

Source: Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

The New Era, 1932-1975

(This discussion focuses on Figures 15 and 17; also see Figures 18, 19, and 20.) It was Roosevelt who first experimented with deficit spending to pull the economy out of depression and to stimulate jobs through the creation of public works programs and other elements of his New Deal. Though taxes were raised on the wealthy, the depressed state of the economy meant government revenues were far too low to finance the New Deal. As a result, Roosevelt in his first year created a budget deficit almost six times greater than that of Hoover’s last year in office. Between 1931 and 1941, the public debt tripled in size, standing at $48,961 million upon the United States’ entry into World War II. To help fund the debt and get hoarded money back into circulation, the Treasury introduced the United States Savings Bond. Nonmarketable, with a guaranteed redemption value at any point in the life of the security and a denomination as low as $25, the savings bond was aimed at small investors fearful of continued bank collapses. With the advent of war, these bonds became War Savings Bonds and were the focus of the eight war drives of World War II, which also included Treasury bonds and certificates of indebtedness. Because of the war, the public debt reached a height of $269,422 million.

The experience of the New Deal, combined with the low unemployment and victory of wartime, seemed to confirm Keynesian theories and reduce the fear of budget deficits. In 1946, Congress passed the Employment Act, committing the government to the pursuit of low unemployment through government intervention in the economy, which could include deficit spending. Though Truman and Eisenhower promoted some government intervention in the economy, they were still economic traditionalists at heart and sought to pay down the public debt as much as possible. And, despite massive foreign aid, a sharp recession in the late 1950s, and large-scale foreign military deployments, including the Korean War, these two presidents were able to present budget surpluses more than 50% of the time and limit the growth of the public debt to an average of $1,000 million per year. From 1960 to 1975, there would be only one year of budget surplus, and the public debt would grow at an average rate of $17,040 million per year. It was with the arrival of the Kennedy administration in 1961 that the “new economics,” or Keynesianism, came into full flower within the government. In the 1960s and 1970s, tax cuts and increased domestic spending were pursued not only to improve society but also to move the economy toward full employment. However, these economic stimulants were not applied just on down cycles of the economy but also on up cycles, resulting in ever-growing deficits. Added to this domestic spending were the continued outlays on military deployments overseas, including Vietnam, and borrowings in foreign markets to prop up the value of the dollar. During boom years, government revenues did increase, but never enough to outpace spending. The exception was 1969, when a high rate of inflation boosted nominal revenues, though these gains were offset by the increased nominal cost of servicing the debt.
By 1975, the United States was suffering from the high inflation and high unemployment of stagflation, and the budgetary deficits seemed to take on a life of their own. Each downturn in the economy brought smaller revenues, aggravated by tax cuts, while spending soared because of increased welfare and unemployment benefits and other government spending aimed at spurring job creation. The net result was an ever-increasing charge on the public debt and the huge numbers that have concerned so many in the past (and present).

Source: Nominal figures from “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63; real figures adjust for inflation and are provided by Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

Source: Derived from figures provided by Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

Source: Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

We end this study in 1975 with the passage of the Budget Control Act. Formally entitled the Congressional Budget and Impoundment Control Act of 1974, it was passed on July 12, 1974, shortly after the start of fiscal year 1975. Some of the most notable provisions of the act were the establishment of House and Senate Budget Committees, creation of the Congressional Budget Office, and removal of impoundment authority from the President. Impoundment was the President’s ability to refrain from spending funds authorized in the budget. For example, if a government program ended up not spending all the money allotted it, the President (or more specifically the Treasury under the President’s authority) did not have to pay out the unneeded money. Or, if the President did not want to fund a project passed by Congress in the budget, he could in effect veto it by instructing the Treasury not to release the money. In sum, the Budget Control Act shifted the balance of budgetary power from the executive branch to the Congress. The effect was to weaken restraints on Congressional spending and contribute to the increased deficits and sharp, upward growth of the public debt in the next couple of decades. (See Figures 1, 19, and 20.)

But the Budget Control Act was a watershed for the public debt not only in its rate of growth but also in the way it was recorded and reported. The act changed the fiscal year (the twelve-month period used to determine income and expenses for accounting purposes) from July 1–June 30 to October 1–September 30. The Budget Control Act also initiated the reporting system currently used by the Bureau of the Public Debt to report on the public debt. Fiscal year 1975 saw the first publication of the Monthly Statement of the Public Debt of the United States. For the first time, it reported the public debt in the structure we examined above, a structure still used by the Treasury today.

Conclusion

The public debt from 1861 to 1975 was the product of many factors. First, it was the result of accountancy on the part of the United States Treasury. Only certain obligations of the United States fall into the definition of the public debt. Second, the debt was the effect of Treasury debt management decisions as to what debt instruments or securities were to be used to finance the debt. Third, the public debt was fundamentally a product of budget deficits. Massive government spending in itself did not create deficits and add to the debt. It was only when revenues were not sufficient to offset the spending that deficits and government borrowing were necessary. At times, as during wartime or severe recessions, deficits were largely unavoidable. The change that occurred between 1861 and 1975 was the attitude among the government and the public toward budget deficits. Until the Great Depression, deficits were seen as injurious to the public good, and the public debt was viewed with unease as something the country could really do without. After the Great Depression, deficits were still not welcomed but were now viewed as a necessary tool needed to aid in economic recovery and the creation of jobs. Post-World War II rising expectations of continuous economic growth and high employment at home and the extension of United States’ power abroad spurred the use of deficit spending. And, the belief among some influential Keynesians that more tinkering with the economy was all that was needed to fix a stagflating economy created an almost self-perpetuating growth of the public debt. In the end, the history of the public debt is not so much about accountancy or Treasury securities as about national ambitions, politics, and economic theories.

Annotated Bibliography

Though much has been written about the public debt, very little of it is of real use for economic analysis or for learning the history of the public debt. Most books deal with an ever-pending public debt crisis and give policy recommendations on how to solve the problem. However, there are a few recommendations:

Annual Report of the Secretary of the Treasury on the State of the Finances. Washington, DC: Government Printing Office, published annually through 1980.

This is the basic source for all information on the public debt until 1980.

Bayley, Rafael A. The National Loans of the United States from July 4, 1776, to June 30, 1880. Second edition. Facsimile reprint. New York: Burt Franklin, 1970 [1881].

This is the standard work on early United States financing written by a Treasury bureaucrat.

Bureau of the Public Debt. “The Public Debt Online.” URL: http://www.publicdebt.treas.gov/opd/opd.htm.

Provides limited data on the public debt, but includes all past issues of the Monthly Statement of the Public Debt.

Conklin, George T., Jr. “Treasury Financial Policy from the Institutional Point of View.” Journal of Finance 8, no. 2 (May 1953): 226-34.

This is a contemporary’s disapproving view of the growing acceptance of the “new economics” that appeared in the 1930s.

Gordon, John Steele. Hamilton’s Blessing: the Extraordinary Life and Times of Our National Debt. New York: Penguin, 1998.

This is a very readable, brief overview of the history of the public debt.

Love, Robert A. Federal Financing: A Study of the Methods Employed by the Treasury in Its Borrowing Operations. Reprint of 1931 edition. New York: AMS Press, 1968.

This is the most complete and thorough account of the structure of the public debt. Unfortunately, it only goes up to 1925.

Noll, Franklin. A Guide to Government Obligations, 1861-1976. Unpublished ms. 2004.

This is a descriptive inventory and chronological listing of the roughly 12,000 securities issued by the Treasury between 1861 and 1976.

Office of Management and Budget. “Historical Tables.” Budget of the United States Government, Fiscal Year 2005. URL: http://www.whitehouse.gov/omb/budget/fy2005/pdf/hist.pdf.

Provides data on the public debt, budgets, and federal spending, though coverage focuses on the late twentieth century.

Sahr, Robert. “National Government Budget.” URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahr.htm.

This is a valuable web site containing a useful collection of detailed graphs on government spending and the public debt.

Withers, William. The Public Debt. New York: John Day Company, 1945.

Like Conklin, this is a contemporary’s view of the change in perspectives on the public debt occurring in the 1930s. Withers tends to favor the “new economics.”

Citation: Noll, Franklin. “The United States Public Debt, 1861 to 1975.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-united-states-public-debt-1861-to-1975/

An Economic History of Copyright in Europe and the United States

B. Zorina Khan, Bowdoin College

Introduction

Copyright is a form of intellectual property that provides legal protection against unauthorized copying of the producer’s original expression in products such as art, music, books, articles, and software. Economists have paid relatively little scholarly attention to copyrights, although recent debates about piracy and “the digital dilemma” (free use of digital property) have prompted closer attention to theoretical and historical issues. Like other forms of intellectual property, copyright is directed to the protection of cultural creations that are nonrival and nonexcludable in nature. It is generally proposed that, in the absence of private or public forms of exclusion, prices will tend to be driven down toward low or zero marginal cost, and the original producer will be unable to recover the initial investment.

Part of the debate about copyright exists because it is still not clear whether state enforcement is necessary to enable owners to gain returns, or whether the producers of copyrightable products respond significantly to financial incentives. Producers of these public goods might still be able to appropriate returns without copyright laws or in the face of widespread infringement, through such strategies as encryption, cartelization, the provision of complementary products, private monitoring and enforcement, market segmentation, network externalities, first mover effects and product differentiation. Patronage, taxation, subsidies, or public provision, might also comprise alternatives to copyright protection. In some instances “authors” (broadly defined) might be more concerned about nonfinancial rewards such as enhanced reputations or more extensive diffusion.

During the past three centuries great controversy has been associated with the grant of property rights to authors, ranging from the notion that cultural creativity should be rewarded with perpetual rights to the complete rejection of any intellectual property rights at all for copyrightable commodities. Historically, however, the primary emphasis has been on the provision of copyright protection through the formal legal system. Europeans have generally tended to adopt the philosophical position that authorship embodies rights of personhood or moral rights that should be accorded strong protections. The American approach to copyright has been more utilitarian: policies were based on a comparison of costs and benefits, and the primary emphasis of early copyright policies was on the advancement of public welfare. However, the harmonization of international laws has created a melding of these two approaches. The tendency at present is toward stronger enforcement of copyrights, prompted by the lobbying of publishers and the globalization of culture and commerce. Technological change has always exerted an exogenous pressure for change in copyright laws, and modern innovations in particular provoke questions about the extent to which copyright systems can respond effectively to such challenges.

Copyright in Europe

Copyright in France

In the early years of printing, books and other written matter became part of the public domain when they were published. Like patents, the grant of book privileges originated in the Republic of Venice in the fifteenth century, a practice which was soon prevalent in a number of other European countries. Donatus Bossius, a Milanese author, petitioned the duke in 1492 for an exclusive privilege for his book, successfully arguing that he would be unjustly deprived of the benefits of his efforts if others were able to freely copy his work. He was given the privilege for a term of ten years. However, authorship was not required for the grant of a privilege, and printers and publishers obtained monopolies over existing books as well as new works. Since privileges were granted on a case-by-case basis, they varied in geographical scope, duration, and breadth of coverage, as well as in terms of the attendant penalties for their violation. Grantors included religious orders and authorities, universities, political figures, and the representatives of the Crown.

The French privilege system was introduced in 1498 and was well-developed by the end of the sixteenth century. Privileges were granted under the auspices of the monarch, generally for a brief period of two to three years, although the term could be as much as ten years. Protection was granted to new books or translations, maps, type designs, engravings and artwork. Petitioners paid formal fees and informal gratuities to the officials concerned. Since applications could only be sealed if the King were present, petitions had to be carefully timed to take advantage of his route or his return from trips and campaigns. It became somewhat more convenient when the courts of appeal such as the Parlement de Paris began to issue grants that were privileges in all but name, although this could lead to conflicting rights if another authority had already allocated the monopoly elsewhere. The courts sometimes imposed limits on the rights conferred, in the form of stipulations about the prices that could be charged. Privileges were property that could be assigned or licensed to another party, and their infringement was punished by a fine and at times confiscation of all the output of “pirates.”

After 1566, the Edict of Moulins required that all new books had to be approved and licensed by the Crown. Favored parties were able to get renewals of their monopolies that also allowed them to lay claim to works that were already in the public domain. By the late eighteenth century an extensive administrative procedure was in place that was designed to restrict the number of presses and engage in surveillance and censorship of the publishing industry. Manuscripts first had to be read by a censor, and only after a permit was requested and granted could the book be printed, although the permit could later be revoked if complaints were lodged by sufficiently influential individuals. Decrees in 1777 established that authors who did not alienate their property were entitled to exclusive rights in perpetuity. Since few authors had the will or resources to publish and distribute books, their privileges were likely to be sold outright to professional publishers. However, the law made a distinction in the rights accorded to publishers: if the right was sold, the privilege was accorded only a limited duration of at least ten years, the exact term to be determined in accordance with the value of the work, and once the publisher’s term expired, the work passed into the public domain. The fee for a privilege was thirty-six livres. Approval to print a work, or a “permission simple,” which did not entail exclusive rights, could also be obtained after payment of a substantial fee. Between 1700 and 1789, a total of 2,586 petitions for exclusive privileges were filed, and about two-thirds were granted. The result was a system of “odious monopolies,” higher prices and greater scarcity, large transfers to officials of the Crown and their allies, and pervasive censorship. It likewise disadvantaged smaller book producers, provincial publishers, and the academic and broader community.

The French Revolutionary decrees of 1791 and 1793 replaced the idea of privilege with that of uniform statutory claims to literary property, based on the principle that “the most sacred, the most unassailable and the most personal of possessions is the fruit of a writer’s thought.” The subject matter of copyrights covered books, dramatic productions and the output of the “beaux arts” including designs and sculpture. Authors were required to deposit two copies of their books with the Bibliothèque Nationale or risk losing their copyright. Some observers felt that copyrights in France were the least protected of all property rights, since they were enforced with a care to protecting the public domain and social welfare. Although France is associated with the author’s rights approach to copyright and proclamations of the “droit d’auteur,” these ideas evolved slowly and hesitatingly, mainly in order to meet the self-interest of the various members of the book trade. During the ancien régime, the rhetoric of authors’ rights had been promoted by French owners of book privileges as a way of deflecting criticism of monopoly grants and of protecting their profits, and by their critics as a means of attacking the same monopolies and profits. This language was retained in the statutes after the Revolution, so the changes in interpretation and enforcement may not have been universally evident.

By the middle of the nineteenth century, French jurisprudence and philosophy tended to explicate copyrights in terms of rights of personality, but the idea of the moral claim of authors to property rights was not incorporated in the law until early in the twentieth century. The droit d’auteur first appeared in a law of April 1910. In 1920 visual artists were granted a “droit de suite,” or a claim to a portion of the revenues from resale of their works. Subsequent evolution of French copyright laws led to the recognition of the right of disclosure, the right of retraction, the right of attribution, and the right of integrity. These moral rights are (at least in theory) perpetual and inalienable, and thus can be bequeathed to the heirs of the author or artist, regardless of whether or not the work was sold to someone else. The self-interested rhetoric of the owners of monopoly privileges thus fully emerged as the keystone of the “French system of literary property” that would shape international copyright laws in the twenty-first century.

Copyright in England

England similarly experienced a period during which privileges were granted, such as a seven-year grant from the Chancellor of Oxford University for an 1518 work. In 1557, the Worshipful Company of Stationers, a publishers’ guild, was founded on the authority of a royal charter and controlled the book trade for the next one hundred and fifty years. This company created and controlled the right of its constituent members to make copies, so in effect their “copy right” was a private property right that existed in perpetuity, independently of state or statutory rights. Enforcement and regulation were carried out by the corporation itself through its Court of Assistants. The Stationers’ Company maintained a register of books, issued licenses, and sanctioned individuals who violated its regulations. Thus, in both England and France, copyright law began as a monopoly grant to benefit and regulate the printers’ guilds, and as a form of surveillance and censorship over public opinion on behalf of the Crown.

The English system of privileges was replaced in 1710 by a copyright statute (the “Statute of Anne” or “An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors or Purchasers of Such Copies, During the Times Therein Mentioned,” 1709-10, 8 Anne, ch. 19.) The statute was not directed toward the authors of books and their rights. Rather, its intent was to restrain the publishing industry and destroy its monopoly power. According to the law, the grant of copyright was available to anyone, not just to the Stationers. Instead of a perpetual right, the term was limited to fourteen years, with a right of renewal, after which the work would enter the public domain. The statute also permitted the importation of books in foreign languages.

Subsequent litigation and judicial interpretation added a new and fundamentally different dimension to copyright. In order to protect their perpetual copyright, publishers tried to promote the idea that copyright was based on the natural rights of authors or creative individuals and that, as the agent of the author, those rights devolved to the publisher. If indeed copyrights derived from these inherent principles, they represented property that existed independently of statutory provisions and could be protected under common law. The booksellers engaged in a series of strategic lawsuits that culminated in their defeat in the landmark case, Donaldson v. Beckett, 98 Eng. Rep. 257 (1774). The court ruled that authors had a common law right in their unpublished works, but that on publication that right was extinguished by the statute, whose provisions determined the nature and scope of any copyright claims. This transition from publisher’s rights to statutory author’s rights implied that copyright had transmuted from a straightforward license to protect monopoly profits into an expanding property right whose boundaries would henceforth increase at the expense of the public domain.

Between 1735 and 1875 fourteen Acts of Parliament amended the copyright legislation. Copyrights extended to sheet music, maps, charts, books, sculptures, paintings, photographs, dramatic works and songs sung in a dramatic fashion, and lectures outside of educational institutions. Copyright owners had no remedies at law unless they complied with a number of stipulations, which included registration, the payment of fees, the delivery of free copies of every edition to the British Museum (delinquents were fined), as well as complimentary copies for four libraries, including the Bodleian and Trinity College. The ubiquitous Stationers’ Company administered registration, and the registrar personally benefited from the monetary fees of 5 shillings when the book was registered and an equal amount for each assignment and each copy of an entry, along with one shilling for each entry searched. Foreigners could obtain copyrights only if they were present in a part of the British Empire at the time of publication. The book had to be published in the United Kingdom, and prior publication in a foreign country – even in a British colony – was an obstacle to copyright protection.

The term of the copyright in books was for the longer of 42 years from publication or the lifetime of the author plus seven years, and after the death of the author a compulsory license could be issued to ensure that works of sufficient public benefit would be published. The “work for hire” doctrine was in force for books, reviews, newspapers, magazines and essays unless a distinct contractual clause specified that the copyright was to accrue to the author. Similarly, unauthorized use of a publication was permitted for the purposes of “fair use.” Only the copyright holder and his agents were allowed to import the protected works into Britain.

The British Commission that reported on the state of the copyright system in 1878 felt that the laws were “obscure, arbitrary and piecemeal” and were compounded by the confused state of the common law. The numerous uncoordinated laws that were simultaneously in force led to conflicts and unintended defects in the system. The report discussed but did not recommend an alternative to the grant of copyrights, in the form of a royalty system where “any person would be entitled to copy or republish the work on paying or securing to the owner a remuneration, taking the form of royalty or definite sum prescribed by law.” The main benefit would accrue to the public in the form of early access to cheap editions, whereas the main cost would fall on publishers, whose risks and returns would be negatively affected.

The Commission noted that the implications for the colonies were “anomalous and unsatisfactory.” The publishers in England practiced price discrimination, modifying the initial high prices for copyrighted material through discounts given to reading clubs, circulating libraries and the like, benefits which were not available in the colonies. In 1846 the Colonial Office acknowledged “the injurious effects produced upon our more distant colonists” and the Foreign Reprints Act was passed in the following year. This allowed colonies that adopted the terms of British copyright legislation to import cheap reprints of British copyrighted material with a tariff of 12.5 percent, the proceeds of which were to be remitted to the copyright owners. However, enforcement of the tariff seems to have been less than vigorous, since between 1866 and 1876 only £1,155 was received from the 19 colonies that took advantage of the legislation (£1,084 of it from Canada, which benefited significantly from the American reprint trade). The Canadians argued that it was difficult to monitor imports, so it would be more effective to allow them to publish the reprints themselves and collect taxes for the benefit of the copyright owners. This proposal was rejected, but under the Canadian Copyright Act of 1875 British copyright owners could obtain Canadian copyrights for Canadian editions that were sold at much lower prices than in Britain or even in the United States.
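The scale of this under-enforcement can be seen with a back-of-the-envelope calculation (illustrative only; the implied figures are not from the sources): at a 12.5 percent tariff, receipts of £1,155 over the decade imply only about £9,240 of declared reprint imports across all nineteen colonies, on the order of £50 per colony per year.

```python
# Illustrative check of the Foreign Reprints Act figures cited above.
# Assumptions: a flat 12.5% tariff, £1,155 total receipts, 19 colonies, 1866-1876.
tariff_rate = 0.125
receipts_pounds = 1155
colonies = 19
years = 10

# Declared import value implied by the tariff receipts.
implied_imports = receipts_pounds / tariff_rate

# Average declared imports per colony per year.
per_colony_per_year = implied_imports / (colonies * years)

print(round(implied_imports))      # → 9240
print(round(per_colony_per_year))  # → 49
```

Given the size of the colonial market for British books, a declared trade of roughly £49 per colony per year suggests that most reprint imports simply went unrecorded, which is consistent with the Canadian complaint that imports were difficult to monitor.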

The Commission made two recommendations. First, the bigger colonies with domestic publishing facilities should be allowed to reprint copyrighted material on payment of a license to be set by law. Second, the benefits to the smaller colonies of access to British literature should take precedence over lobbies to repeal the Foreign Reprints Act, which should be better enforced rather than removed entirely. Some had argued that the public interest required that Britain should allow the importation of cheap colonial reprints since the high prices of books were “altogether prohibitory to the great mass of the reading public” but the Commission felt that this should only be adopted with the consent of the copyright owner. They also devoted a great deal of attention to what was termed “The American Question” but took the “highest public ground” and recommended against retaliatory policies.

Copyright in the United States

Colonial Copyright

In the period before the Declaration of Independence individual American states recognized and promoted patenting activity, but copyright protection was not considered to be of equal importance, for a number of reasons. First, in a democracy the claims of the public and the wish to foster freedom of expression were paramount. Second, to a new colony, pragmatic concerns were likely of greater importance than the arts, and the more substantial literary works were imported. Markets were sufficiently narrow that an individual publisher could saturate them with a first print run, and most local publishers produced ephemera such as newspapers, almanacs, and bills. Third, it was unclear that copyright protection was needed as an incentive for creativity, especially since a significant fraction of output was devoted to works such as medical treatises and religious tracts whose authors wished simply to maximize the number of readers, rather than the amount of income they received.

In 1783, Connecticut became the first state to approve an “Act for the encouragement of literature and genius” because “it is perfectly agreeable to the principles of natural equity and justice, that every author should be secured in receiving the profits that may arise from the sale of his works, and such security may encourage men of learning and genius to publish their writings; which may do honor to their country, and service to mankind.” Although this preamble might seem to strongly favor author’s rights, the statute also specified that books were to be offered at reasonable prices and in sufficient quantities, or else a compulsory license would issue.

Federal Copyright Grants

Despite their common source in the intellectual property clause of the U.S. Constitution, copyright policies provided a marked contrast to the patent system. According to Wheaton v. Peters, 33 U.S. 591, 684 (1834): “It has been argued at the bar, that as the promotion of the progress of science and the useful arts is here united in the same clause in the constitution, the rights of the authors and inventors were considered as standing on the same footing; but this, I think, is a non sequitur, for when congress came to execute this power by legislation, the subjects are kept distinct, and very different provisions are made respecting them.”

The earliest federal statute to protect the product of authors was approved on May 31, 1790, “for the encouragement of learning, by securing the copies of maps, charts, and books to the authors and proprietors of such copies, during the times therein mentioned.” John Barry obtained the first federal copyright when he registered his spelling book in the District Court of Pennsylvania, and early grants reflected the same utilitarian character. Policy makers felt that copyright protection would serve to increase the flow of learning and information, and by encouraging publication would contribute to democratic principles of free speech. The diffusion of knowledge would also ensure broad-based access to the benefits of social and economic development. The copyright act required authors and proprietors to deposit a copy of the title of their work in the office of the district court in the area where they lived, for a nominal fee of sixty cents. Registration secured the right to print, publish and sell maps, charts and books for a term of fourteen years, with the possibility of an extension for another like term. Amendments to the original act extended protection to other works including musical compositions, plays and performances, engravings and photographs. Legislators refused to grant perpetual terms, but the length of protection was extended in the general revisions of the laws in 1831 and 1909.

In the case of patents, the rights of inventors, whether domestic or foreign, were widely viewed as coincident with public welfare. In stark contrast, policymakers showed from the very beginning an acute sensitivity to trade-offs between the rights of authors (or publishers) and social welfare. The protections provided to authors under copyrights were as a result much more limited than those provided by the laws based on moral rights that were applied in many European countries. Of relevance here are stipulations regarding first sale, work for hire, and fair use. Under a moral rights-based system, an artist or his heirs can claim remedies if subsequent owners alter or distort the work in a way that allegedly injures the artist’s honor or reputation. According to the first sale doctrine, the copyright holder lost all rights after the work was sold. In the American system, if the copyright holder’s welfare were enhanced by nonmonetary concerns, these individualized concerns could be addressed and enforced through contract law, rather than through a generic federal statutory clause that would affect all property holders. Similarly, “work for hire” doctrines also repudiated the right of personality, in favor of facilitating market transactions. For example, in 1895 Thomas Donaldson filed a complaint that Carroll D. Wright’s editing of Donaldson’s report for the Census Bureau was “damaging and injurious to the plaintiff, and to his reputation” as a scholar. The court rejected his claim and ruled that as a paid employee he had no rights in the bulletin; to rule otherwise would create problems in situations where employees were hired to prepare data and statistics.

This difficult quest for balance between private and public good was most evident in the copyright doctrine of “fair use” that (unlike with patents) allowed unauthorized access to copyrighted works under certain conditions. Joseph Story ruled in Folsom v. Marsh, 9 F. Cas. 342 (1841): “we must often, in deciding questions of this sort, look to the nature and objects of the selections made, the quantity and value of the materials used, and the degree in which the use may prejudice the sale, or diminish the profits, or supersede the objects, of the original work.” One of the striking features of the fair use doctrine is the extent to which property rights were defined in terms of market valuations, or the impact on sales and profits, as opposed to a clear holding of the exclusivity of property. Fair use doctrine thus illustrates the extent to which the early policy makers weighed the costs and benefits of private property rights against the rights of the public and the provisions for a democratic society. If copyrights were as strictly construed as patents, the result would be to reduce scholarship, prohibit public access for noncommercial purposes, increase transaction costs for potential users, and inhibit the learning that the statutes were meant to promote.

Nevertheless, like other forms of intellectual property, the copyright system evolved to encompass improvements in technology and changes in the marketplace. Technological changes in nineteenth-century printing included the use of stereotyping which lowered the costs of reprints, improvements in paper making machinery, and the advent of steam powered printing presses. Graphic design also benefited from innovations, most notably the development of lithography and photography. The number of new products also expanded significantly, encompassing recorded music and moving pictures by the end of the nineteenth century; and commercial television, video recordings, audiotapes, and digital music in the twentieth century.

The subject matter, scope and duration of copyrights expanded over the course of the nineteenth century to include musical compositions, plays, engravings, sculpture, and photographs. By 1910 the original copyright holder was granted derivative rights such as to translations of literary works into other languages; to performances; and the rights to adapt musical works, among others. Congress also lengthened the term of copyright several times, although by 1890 the terms of copyright protection in Greece and the United States were the shortest in the world. New technologies stimulated change by creating new subjects for copyright protection, and by lowering the costs of infringement of copyrighted works. In Edison v. Lubin, 122 F. 240 (1903), the lower court rejected Edison’s copyright of moving pictures under the statutory category of photographs. This decision was overturned by the appellate court: “[Congress] must have recognized there would be change and advance in making photographs, just as there has been in making books, printing chromos, and other subjects of copyright protection.” Copyright enforcement was largely the concern of commercial interests, and not of the creative individual. The fraction of copyright plaintiffs who were authors (broadly defined) was initially quite low, and fell continuously during the nineteenth century. By 1900-1909, only 8.6 percent of all plaintiffs in copyright cases were the creators of the item that was the subject of the litigation. Instead, by the same period, the majority of parties bringing cases were publishers and other assignees of copyrights.

In 1909 Congress revised the copyright law and composers were given the right to make the first mechanical reproductions of their music. However, after the first recording, the statute permitted a compulsory license to issue for copyrighted musical compositions: that is to say, anyone could subsequently make their own recording of the composition on payment of a fee that was set by the statute at two cents per recording. In effect, the property right was transformed into a liability rule. The next major legislative change in 1976 similarly allowed compulsory licenses to issue for works that are broadcast on cable television. The prevalence of compulsory licenses for copyrighted material is worth noting for a number of reasons: they underline some of the statutory differences between patents and copyrights in the United States; they reflect economic reasons for such distinctions; and they are also the result of political compromises among the various interest groups that are affected.

Allied Rights

The debate about the scope of patents and copyrights often underestimates or ignores the importance of allied rights that are available through other forms of the law such as contract and unfair competition. A noticeable feature of the case law is the willingness of the judiciary in the nineteenth century to extend protection to noncopyrighted works under alternative doctrines in the common law. More than 10 percent of copyright cases dealt with issues of unfair competition, and 7.7 percent with contracts; a further 12 percent encompassed issues of right to privacy, trade secrets, and misappropriation. For instance, in Keene v. Wheatley et al., 14 F. Cas. 180 (1860), the plaintiff did not have a statutory copyright in the play that was infringed. However, she was awarded damages on the basis of her proprietary common law right in an unpublished work, and because the defendants had taken advantage of a breach of confidence by one of her former employees. Similarly, the courts offered protection against misappropriation of information, such as occurred when the defendants in Chamber of Commerce of Minneapolis v. Wells et al., 111 N.W. 157 (1907) surreptitiously obtained stock market information by peering in windows, eavesdropping, and spying.

Several other examples relate to the more traditional copyright subject of the book trade. E. P. Dutton & Company published a series of Christmas books which another publisher photographed, and offered as a series with similar appearance and style but at lower prices. Dutton claimed to have been injured by a loss of profits and a loss of reputation as a maker of fine books. The firm did not have copyrights in the series, but it essentially claimed a right in the “look and feel” of the books. The court agreed: “the decisive fact is that the defendants are unfairly and fraudulently attempting to trade upon the reputation which plaintiff has built up for its books. The right to injunctive relief in such a case is too firmly established to require the citation of authorities.” In a case that will resonate with academics, a surgery professor at the University of Pennsylvania was held to have a common law property right in the lectures he presented, and a student could not publish them without his permission. Titles could not be copyrighted, but were protected as trade marks and under unfair competition doctrines. In this way, in numerous lawsuits G. & C. Merriam Co., the original publisher of Webster’s Dictionary, restrained the actions of competitors who published the dictionary once the copyrights had expired.

International Copyrights in the United States

The U.S. was long a net importer of literary and artistic works, especially from England, which implied that recognition of foreign copyrights would have led to a net deficit in international royalty payments. The Copyright Act recognized this when it specified that “nothing in this act shall be construed to extend to prohibit the importation or vending, reprinting or publishing within the United States, of any map, chart, book or books … by any person not a citizen of the United States.” Thus, the statutes explicitly authorized Americans to take free advantage of the cultural output of other countries. As a result, it was alleged that American publishers “indiscriminately reprinted books by foreign authors without even the pretence of acknowledgement.” The tendency to reprint foreign works was encouraged by the existence of tariffs on imported books that ranged as high as 25 percent.

The United States stood out in contrast to countries such as France, where Louis Napoleon’s Decree of 1852 prohibited counterfeiting of both foreign and domestic works. Other countries which were affected by American piracy retaliated by refusing to recognize American copyrights. Despite the lobbying of numerous authors and celebrities on both sides of the Atlantic, the American copyright statutes did not allow for copyright protection of foreign works for fully one century. As a result, American publishers and producers freely pirated foreign literature, art, and drama.

Effects of Copyright Piracy

What were the effects of piracy? First, did the American industry suffer from cheaper foreign books being dumped on the domestic market? This does not seem to have been the case. After controlling for the type of work, the cost of the work, and other variables, the prices of American books were lower than the prices of foreign books. American book prices may have been lower to reflect lower perceived quality or other factors that caused imperfect substitutability between foreign and local products. As might be expected, prices were not exogenously and arbitrarily fixed, but varied in accordance with a publisher’s estimation of market factors such as the degree of competition and the responsiveness of demand to price. The reading public appears to have gained from the lack of copyright, which increased access to the superior products of the more developed markets in Europe, and in the long run this likely improved both the demand for and the supply of domestic science and literature.

Second, according to observers, professional authorship in the United States was discouraged because it was difficult to compete with established authors such as Scott, Dickens and Tennyson. Whether native authors were deterred by foreign competition would depend on the extent to which foreign works prevailed in the American market. Early in American history the majority of books were reprints of foreign titles. However, nonfiction titles written by foreigners were less likely to be substitutable for nonfiction written by Americans; consequently, nonfiction soon tended to be supplied by native authors. From an early period grammars, readers, and juvenile texts were also written by Americans. Geology, geography, history and similar works would have to be adapted or completely rewritten to be appropriate for an American market, which reduced their attractiveness as reprints. Thus, publishers of schoolbooks, medical volumes and other nonfiction did not feel that the reforms of 1891 were relevant to their undertakings. Academic and religious books are less likely to be written for monetary returns, and their authors probably benefited from the wider circulation that the lack of international copyright encouraged. However, the writers of these works declined in importance relative to writers of fiction, a category which grew from 6.4 percent before 1830 to 26.4 percent by the 1870s.

On the other hand, foreign authors dominated the field of fiction for much of the century. One study estimates that about fifty percent of all fiction best sellers in the antebellum period were pirated from foreign works. In 1895 American authors accounted for two of the top ten best sellers, but by 1910 nine of the top ten were written by Americans. This fall over time in the fraction of foreign authorship may have been due to a natural evolutionary process, as the development of the market for domestic literature over time encouraged specialization. The growth in fiction authors was associated with an increase in the number of books per author over the same period. Improvements in transportation and the increase in the academic population probably played a large role in enabling individuals who lived outside the major publishing centers to become writers despite the distance. As the market expanded, a larger fraction of writers could become professionals.

Although the lack of copyright protection may not have discouraged authors, this does not imply that intellectual property policy in this dimension had no costs. It is likely that the lack of foreign copyrights led to some misallocation of efforts or resources, such as in attempting to circumvent the rules. Authors changed their residence temporarily when books were about to be published in order to qualify for copyright. Others obtained copyrights by arranging to co-author with a foreign citizen. T. H. Huxley adopted this strategy, arranging to co-author with “a young Yankee friend … Otherwise the thing would be pillaged at once.” An American publisher suggested that Kipling should find “a hack writer, whose name would be of use simply on account of its carrying the copyright.” Harriet Beecher Stowe proposed a partnership with Elizabeth Gaskell, so they could “secure copyright mutually in our respective countries and divide the profits.”

It is widely acknowledged that copyrights in books tended to be the concern of publishers rather than of authors (although the two are naturally not independent of each other). As a result of lack of legal copyrights in foreign works, publishers raced to be first on the market with the “new” pirated books, and the industry experienced several decades of intense, if not quite “ruinous” competition. These were problems that publishers in England had faced before, in the market for books that were uncopyrighted, such as Shakespeare and Fielding. Their solution was to collude in the form of strictly regulated cartels or “printing congers.” The congers created divisible property in books that they traded, such as a one hundred and sixtieth share in Johnson’s Dictionary that was sold for £23 in 1805. Cooperation resulted in risk sharing and a greater ability to cover expenses. The unstable races in the United States similarly settled down during the 1840s to collusive standards that were termed “trade custom” or “courtesy of the trade.”

The industry achieved relative stability because the dominant firms cooperated in establishing synthetic property rights in foreign-authored books. American publishers made payments (termed “copyrights”) to foreign authors to secure early sheets, and other firms recognized their exclusive property in the “authorized reprint”. Advance payments to foreign authors not only served to ensure the coincidence of publishers’ and authors’ interests – they were also recognized by “reputable” publishers as “copyrights.” These exclusive rights were tradable, and enforced by threats of predatory pricing and retaliation. Such practices suggest that publishers were able to simulate the legal grant through private means.

However, private rights naturally did not confer property rights that could be enforced at law. The case of Sheldon v. Houghton, 21 F. Cas. 1239 (1865), illustrates that such a right was considered “very valuable, and is often made the subject of contracts, sales, and transfers, among booksellers and publishers.” The very fact that a firm would file a plea for the court to protect its claim indicates how vested a right it had become. The plaintiff argued that “such custom is a reasonable one, and tends to prevent injurious competition in business, and to the investment of capital in publishing enterprises that are of advantage to the reading public.” The courts rejected this claim, since synthetic rights differed from copyrights in the degree of security that was offered by the enforcement power of the courts. Nevertheless, these title-specific rights of exclusion decreased uncertainty, enabled publishers to recoup their fixed costs, and avoided the wasteful duplication of resources that would otherwise have occurred.

It was not until 1891 that the Chace Act granted copyright protection to selected foreign residents. Thus, after a century of lobbying by interested parties on both sides of the Atlantic, based on reasons that ranged from the economic to the moral, copyright laws only changed when the United States became more competitive in the international market for literary and artistic works. However, the act also included significant concessions to printers’ unions and printing establishments in the form of “manufacturing clauses.” First, a book had to be published in the U.S. before or at the same time as the publication date in its country of origin. Second, the work had to be printed in the United States, or printed from type set in the United States or from plates made from type set in the United States. Copyright protection still depended on conformity with stipulations such as formal registration of the work. These clauses resulted in U.S. failure to qualify for admission to the international Berne Convention until 1988, more than one hundred years after the first Convention.

After the copyright reforms of 1891, both English and American authors were disappointed to find that the change in the law did not lead to significant gains. Foreign authors realized that they might even have benefited from the lack of copyright protection in the United States. Despite the cartelization of publishing, competition for these synthetic copyrights ensured that foreign authors were able to obtain the payments that American firms made to secure the right to be first on the market. It can also be argued that foreign authors were able to reap higher total returns from the expansion of the market through piracy. The lack of copyright protection may have functioned as a form of price discrimination, where the product was sold at a higher price in the developed country, and at a lower or zero price in the poorer country. Returns under such circumstances may have been higher for goods with demand externalities or network effects, such as “bestsellers” where consumer valuation of the book increased with the size of the market. For example, Charles Dickens, Anthony Trollope, and other foreign writers were able to gain considerable income from complementary lecture tours in the extensive United States market.

Harmonization of Copyright Laws

In view of the strong protection accorded to inventors under the U.S. patent system, to foreign observers its copyright policies appeared all the more reprehensible. The United States, the most liberal in its policies towards patentees, had led the movement for harmonization of patent laws. In marked contrast, throughout the history of the U.S. system, its copyright grants were in general more abridged than those of almost every other country in the world. The term of copyright grants to American citizens was among the shortest in the world, the country applied the broadest interpretation of fair use doctrines, and the validity of the copyright depended on strict compliance with the requirements. U.S. failure to recognize the rights of foreign authors was also unique among the major industrial nations. Throughout the nineteenth century proposals to reform the law and to acknowledge foreign copyrights were repeatedly brought before Congress and rejected. Even the bill that finally recognized international copyrights almost failed, passing only at the last possible moment, and it required longstanding exemptions in favor of workers and printing enterprises.

In a parallel fashion to the status of the United States in patent matters, France’s influence was evident in the subsequent evolution of international copyright laws. Other countries had long recognized the rights of foreign authors in national laws and bilateral treaties, but France stood out in its favorable treatment of domestic and foreign copyrights as “the foremost of all nations in the protection it accords to literary property.” This was especially true of its concessions to foreign authors and artists. For instance, France allowed copyrights to foreigners conditioned on manufacturing clauses in 1810, and granted foreign and domestic authors equal rights in 1852. In the following decade France entered into almost two dozen bilateral treaties, prompting a movement towards multilateral negotiations, such as the Congress on Literary and Artistic Property in 1858. The International Literary and Artistic Association, which the French novelist Victor Hugo helped to establish, conceived of and organized the Convention which first met in Berne in 1883.

The Berne Convention included a number of countries that wished to establish an “International Union for the Protection of Literary and Artistic Works.” The preamble declared their intent to “protect effectively, and in as uniform a manner as possible, the rights of authors over their literary and artistic works.” The actual Articles were more modest in scope, requiring national treatment of authors belonging to the Union and minimum protection for translation and public performance rights. The Convention authorized the establishment of a physical office in Switzerland, whose official language would be French. The rules were revised in 1908 to extend the duration of copyright and to include modern technologies. Perhaps the most significant aspect of the convention was not its specific provisions, but the underlying property rights philosophy which was decidedly from the natural rights school. Berne abolished compliance with formalities as a prerequisite for copyright protection since the creative act itself was regarded as the source of the property right. This measure had far-reaching consequences, because it implied that copyright was now the default, whereas additions to the public domain would have to be achieved through affirmative actions and by means of specific limited exemptions. In 1928 the Berne Convention followed the French precedent and acknowledged the moral rights of authors and artists.

Unlike its leadership in patent conventions, the United States declined an invitation to the pivotal copyright conference in Berne in 1883; it attended but refused to sign the 1886 agreement of the Berne Convention. Instead, the United States pursued international copyright policies in the context of the weaker Universal Copyright Convention (UCC), which was adopted in 1952 and formalized in 1955 as a complementary agreement to the Berne Convention. The UCC membership included many developing countries that did not wish to comply with the Berne Convention because they viewed its provisions as overly favorable to the developed world. The United States was among the last wave of entrants into the Berne Convention when it finally joined in 1988. In order to do so it complied by removing prerequisites for copyright protection such as registration, and also lengthened the term of copyrights. However, it still has not introduced federal legislation in accordance with Article 6bis, which declares the moral rights of authors “independently of the author’s economic rights, and even after the transfer of the said rights.” Similarly, individual countries continue to differ in the extent to which multilateral provisions govern domestic legislation and practices.

The quest for harmonization of intellectual property laws resulted in a “race to the top,” directed by the efforts and self interest of the countries which had the strongest property rights. The movement to harmonize patents was driven by American efforts to ensure that its extraordinary patenting activity was remunerated beyond as well as within its borders. At the same time, the United States ignored international conventions to unify copyright legislation. Nevertheless, the harmonization of copyright laws proceeded, promoted by France and other civil law regimes which urged stronger protection for authors based on their “natural rights” although at the same time they infringed on the rights of foreign inventors. The net result was that international pressure was applied to developing countries in the twentieth century to establish strong patents and strong copyrights, although no individual developed country had adhered to both concepts simultaneously during their own early growth phase. This occurred even though theoretical models did not offer persuasive support for intellectual property harmonization, and indeed suggested that uniform policies might be detrimental even to some developed countries and to overall global welfare.

Conclusion

The past three centuries stand out in terms of the diversity across nations in intellectual property institutions, but the nineteenth century saw the origins of the movement towards the “harmonization” of laws that at present dominates global debates. Among the now-developed countries, the United States stood out for its conviction that broad access to intellectual property rules and standards was key to achieving economic development. Europeans were less concerned about enhancing mass literacy and public education, and viewed copyright owners as inherently meritorious and deserving of strong protection. European copyright regimes thus evolved in the direction of author’s rights, while the United States lagged behind the rest of the world in terms of both domestic and foreign copyright protection.

By design, American statutes differentiated between patents and copyrights in ways that seemed warranted if the objective was to increase social welfare. The patent system early on discriminated between nonresident and domestic inventors, but within a few decades changed to protect the right of any inventor who filed for an American patent regardless of nationality. The copyright statutes, in contrast, openly encouraged piracy of foreign goods on an astonishing scale for one hundred years, in defiance of the recriminations and pressures exerted by other countries. The American patent system required an initial search and examination that ensured the patentee was the “first and true” creator of the invention in the world, whereas copyrights were granted through mere registration. Patents were based on the assumption of novelty and held invalid if this assumption was violated, whereas essentially similar but independent creation was copyrightable. Copyright holders were granted the right to derivative works, whereas the patent holder was not. Unauthorized use of patented inventions was prohibited, whereas “fair use” of copyrighted material was permissible if certain conditions were met. Patented inventions involved greater initial investments, effort, and novelty than copyrighted products and tended to be more responsive to material incentives, whereas in many cases cultural goods would still be produced, or their output only slightly reduced, in the absence of such incentives. Fair use was not allowed in the case of patents because the disincentive effect was likely to be higher, while the costs of negotiation between the patentee and the narrower market of potential users would generally be lower. If copyrights were enforced as strongly as patents, it would benefit publishers and a small literary elite at the cost of social investments in learning and education.

The United States created a utilitarian market-based model of intellectual property grants which created incentives for invention, but always with the primary objective of increasing social welfare and protecting the public domain. The checks and balances of interest group lobbies, the legislature and the judiciary worked effectively as long as each institution was relatively well-matched in terms of size and influence. However, a number of legal and economic scholars are increasingly concerned that the political influence of corporate interests, the vast number of uncoordinated users over whom the social costs are spread, and international harmonization of laws have upset these counterchecks, leading to over-enforcement at both the private and public levels.

International harmonization with European doctrines introduced significant distortions in the fundamental principles of American copyright and its democratic provisions. One of the most significant of these changes was also one of the least debated: compliance with the precepts of the Berne Convention accorded automatic copyright protection to all creations on their fixation in tangible form. This rule reversed the relationship between copyright and the public domain that the U.S. Constitution stipulated. According to original U.S. copyright doctrines, the public domain was the default, and copyright merely comprised a limited exemption to the public domain; after the alignment with Berne, copyright became the default, and the rights of the public and of the public domain now merely comprise a limited exception to the primacy of copyright. The pervasive uncertainty that characterizes the intellectual property arena today leads risk-averse individuals and educational institutions to err on the side of abandoning their right to free access rather than invite potential challenges and costly litigation. A number of commentators are equally concerned about other dimensions of the globalization of intellectual property rights, such as the movement to emulate European grants of property rights in databases, which has the potential to inhibit diffusion and learning.

Copyright law and policy has always altered and been altered by social, economic and technological changes, in the United States and elsewhere. However, the one constant feature across the centuries is that copyright protection involves crucial political questions to a far greater extent than its economic implications.

Additional Readings

Economic History

B. Zorina Khan. The Democratization of Invention: Patents and Copyrights in American Economic Development, 1790-1920. New York: Cambridge University Press, 2005.

Law and Economics

Besen, Stanley, and L. Raskind. “An Introduction to the Law and Economics of Intellectual Property.” Journal of Economic Perspectives 5 (1991): 3-27.

Breyer, Stephen. “The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies and Computer Programs.” Harvard Law Review 84 (1970): 281-351.

Gallini, Nancy and S. Scotchmer. “Intellectual Property: When Is It the Best Incentive System?” Innovation Policy and the Economy 2 (2002): 51-78.

Gordon, Wendy, and R. Watt, editors. The Economics of Copyright: Developments in Research and Analysis. Cheltenham, UK: Edward Elgar, 2002.

Hurt, Robert M., and Robert M. Shuchman. “The Economic Rationale of Copyright.” American Economic Review Papers and Proceedings 56 (1966): 421-32.

Johnson, William R. “The Economics of Copying.” Journal of Political Economy 93 (1985): 158-74.

Landes, William M., and Richard A. Posner. “An Economic Analysis of Copyright Law.” Journal of Legal Studies 18 (1989): 325-63.

Landes, William M., and Richard A. Posner. The Economic Structure of Intellectual Property Law. Cambridge, MA: Harvard University Press, 2003.

Liebowitz, S. J. “Copying and Indirect Appropriability: Photocopying of Journals.” Journal of Political Economy 93 (1985): 945-57.

Merges, Robert P. “Contracting into Liability Rules: Intellectual Property Rights and Collective Rights Organizations.” California Law Review 84, no. 5 (1996): 1293-1393.

Meurer, Michael J. “Copyright Law and Price Discrimination.” Cardozo Law Review 23 (2001): 55-148.

Novos, Ian E., and Michael Waldman. “The Effects of Increased Copyright Protection: An Analytic Approach.” Journal of Political Economy 92 (1984): 236-46.

Plant, Arnold. “The Economic Aspects of Copyright in Books.” Economica 1 (1934): 167-95.

Takeyama, L. “The Welfare Implications of Unauthorized Reproduction of Intellectual Property in the Presence of Demand Network Externalities.” Journal of Industrial Economics 42 (1994): 155–66.

Takeyama, L. “The Intertemporal Consequences of Unauthorized Reproduction of Intellectual Property.” Journal of Law and Economics 40 (1997): 511–22.

Varian, Hal. “Buying, Sharing and Renting Information Goods.” Journal of Industrial Economics 48, no. 4 (2000): 473–88.

Varian, Hal. “Copying and Copyright.” Journal of Economic Perspectives 19, no. 2 (2005): 121-38.

Watt, Richard. Copyright and Economic Theory: Friends or Foes? Cheltenham, UK: Edward Elgar, 2000.

History of Economic Thought

Hadfield, Gilliam K. “The Economics of Copyright: A Historical Perspective.” Copyright Law Symposium (ASCAP) 38 (1992): 1-46.

History

Armstrong, Elizabeth. Before Copyright: The French Book-Privilege System, 1498-1526. Cambridge: Cambridge University Press, 1990.

Birn, Raymond. “The Profits of Ideas: Privileges en librairie in Eighteenth-century France.” Eighteenth-Century Studies 4, no. 2 (1970-71): 131-68.

Bugbee, Bruce. The Genesis of American Patent and Copyright Law. Washington, DC: Public Affairs Press, 1967.

Dawson, Robert L. The French Booktrade and the “Permission Simple” of 1777: Copyright and the Public Domain. Oxford: Voltaire Foundation, 1992.

Hackett, Alice P., and James Henry Burke. Eighty Years of Best Sellers, 1895-1975. New York: Bowker, 1977.

Nowell-Smith, Simon. International Copyright Law and the Publisher in the Reign of Queen Victoria. Oxford: Clarendon Press, 1968.

Patterson, Lyman. Copyright in Historical Perspective. Nashville: Vanderbilt University Press, 1968.

Rose, Mark. Authors and Owners: The Invention of Copyright. Cambridge: Harvard University Press, 1993.

Saunders, David. Authorship and Copyright. London: Routledge, 1992.

Citation: Khan, B. “An Economic History of Copyright in Europe and the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-copyright-in-europe-and-the-united-states/

Historical Political Business Cycles in the United States

Jac C. Heckelman, Wake Forest University

Macroeconomic Performance and Elections

Analyzing American presidential elections as far back as 1916, Ray Fair (1978) has shown that macroeconomic conditions consistently affect party vote shares. Specifically, the incumbent party is predicted to improve its vote share when economic growth is high and inflation is low. Using no information other than the growth rate, inflation rate, time trend, and the identity of the incumbent party, Fair was able to correctly predict the winning party in 15 of the 16 presidential elections from 1916 to 1976.
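Fair's approach can be illustrated with a toy regression. The sketch below fits incumbent-party vote share on the growth and inflation rates by least squares and then "calls" each election for the incumbent when the fitted share exceeds fifty percent. All figures and the helper `predict_share` are hypothetical, invented for illustration; they are not Fair's actual data, specification, or coefficients.

```python
# A minimal sketch of a vote-share regression in the spirit of Fair (1978).
# The election-year observations below are hypothetical, not historical data.
import numpy as np

# Columns: real growth rate (%), inflation rate (%), incumbent vote share (%)
data = np.array([
    [ 4.0, 2.0, 55.0],
    [-2.0, 6.0, 45.0],
    [ 3.0, 3.0, 53.0],
    [ 0.5, 8.0, 47.0],
    [ 5.0, 1.0, 58.0],
    [-1.0, 5.0, 46.0],
])
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])  # constant, growth, inflation
y = data[:, 2]

# Ordinary least squares fit: share = b0 + b1*growth + b2*inflation
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_share(growth, inflation):
    """Fitted incumbent-party vote share for given macroeconomic conditions."""
    return beta[0] + beta[1] * growth + beta[2] * inflation

# High growth with low inflation favors the incumbent in this toy fit;
# a recession with high inflation predicts an incumbent loss.
print(predict_share(4.0, 2.0) > 50.0)
print(predict_share(-2.0, 7.0) > 50.0)
```

The point of the exercise is only that a handful of macroeconomic variables suffice to classify most elections, which is what gives incumbents an incentive to manipulate those variables.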

Given a strong connection between the economic environment and vote shares, incumbent politicians have an incentive to manipulate the economy as elections draw near. The notion that incumbents will alter the economic environment for their short-term political gain at the expense of long-term economic stability is referred to as generating a political business cycle. This theory of political business cycles has not generated much empirical support from myriad studies that concentrate on contemporary elections. Perhaps due to the lack of supporting evidence, and the belief that such manipulations were not possible before the advent of activist fiscal policy ushered in during the Keynesian revolution, there has been little attempt to test for political cycles in historical elections. There are, however, a few studies that do so, although their time samples and methodology differ widely.

National-Level Evidence on Historical Political Business Cycles

Adopting the standard procedure used in the empirical studies of contemporary political business cycles, Heckelman and Whaples (1996) test for cycles during the period after the Civil War and before the Great Depression. They find little evidence that either nominal or real GNP, or the GNP deflator, was significantly different than the expected level during the year of, or the year after, a presidential election from 1869-1929.

Davidson, Fratianni, and von Hagen (1990) employ a long time series from 1905-1984. They fail to find consistent evidence of a traditional political business cycle, or systematic differences by party control, of policy targets or policy measures during this time. However, they also test for alterations to the economy based on recent previous conditions and find that trends were significantly altered prior to elections only when macroeconomic outcomes in the recent past had been unfavorable to the incumbent: rising inflation, a rising rate of unemployment, a growing deficit, and a decline in monetary growth. In contrast, there were no changes in the dynamics when previous outcomes were favorable (p. 47), meaning, for example, that declining unemployment did not suddenly fall by an even larger degree just prior to the election. They find no electoral effects on the growth of real per capita GNP. They also present limited evidence that unemployment and inflation patterns differ by party control, but only following recent unfavorable outcomes in each, and the changes are further limited to the post-World War II period.

Klein (1996) takes a different approach. Instead of focusing on the actual values of the economic variables, Klein analyzes business cycle turning points, as identified by the National Bureau of Economic Research. He finds that 26 of the 34 presidential elections held from 1854-1990 were during an identified expansionary period. While expansions typically end in the period right after an election, he does not find that contractions are more likely to end in the period before an election. Thus, his evidence for political business cycles is somewhat mixed. Klein also finds that turning points differ by party control. Expansions are more likely to end following Republican victories, and contractions are more likely to end soon after Democratic victories. These partisan findings are much stronger after World War I.

It is perhaps not surprising that partisan influences on the economy are not stable over these long time series. In the earlier part of the Davidson-Fratianni-von Hagen and Klein samples the Republicans, as the party of Lincoln and McKinley, had a large constituency base composed of industrial workers, and tended to support trade protectionism, the opposite of contemporary Republicans. It may still be true that significant differences in the structure of the business cycle occurred depending on which political party controlled policy, even in the period prior to the world wars, but since neither study examined these earlier periods in isolation, as they did for the later period, that remains speculative.

Richard Nixon’s First Term

The strongest evidence for a political business cycle remains the first term of the Nixon administration. Some scholars have even argued that it inspired Nordhaus’s (1975) early theoretical model of the political business cycle (Keech 1995, p. 54), on which most empirical tests are based. Keller and May (1984) present a case study of the policy cycle driven by Nixon from 1969-1972, summarizing his use of contractionary monetary and fiscal policy in the first two years, followed by wage and price controls in mid-1971, and finally rapid fiscal expansion and high growth in late 1971 and 1972. They claim that only the expansion portion of the cycle is evidence of electoral manipulation, and that the early contraction is merely consistent with modern Republican Party ideology. Although the latter is true, it does not disprove the conclusion of almost every other political business cycle scholar, since it is not possible to pinpoint the motivation behind the policy change. Given the abandonment of ideology displayed by Nixon in the second half of his term, it seems more likely that the entire cycle, consistent with the predictions of a political policy cycle, was driven by electoral considerations rather than ideology.

State-Level Evidence

Little evidence has been accumulated for state-level political business cycles. An exception for historical gubernatorial elections is Heckelman (1998), who compares gainful employment rates across states with and without gubernatorial elections in the decennial years 1870-1910 and finds evidence supporting a political employment cycle for the states. This evidence is limited to the case of pooling all the years together, and may be driven by the strong result found for 1890. There is no further evidence of a federal employment cycle during the presidential election years of 1880 and 1900, or of assistance directed at those states where the governor was of the same party as the sitting president.

Policy Cycles

Empirical studies of contemporary political cycles have recently turned more attention to policy cycles, rather than business cycles, since policy instruments would need to be manipulated in order to affect the economy. Lack of evidence of political business cycles would be consistent either with no attempted manipulation, or with policy cycles that did not have the desired effect due to other exogenous factors and the crudity of macroeconomic policy. There does appear to be strong evidence of modern policy cycles even when political business cycle evidence is weak or non-existent. (See, for example, Alesina, Roubini and Cohen 1997.) With the exception of the well-documented Nixonian policy cycles, there has been no attempt to document the occurrence of historical policy cycles. This remains the largest gap in the empirical literature and should prove a fertile ground for exploration.

New Deal Spending

There is, however, a related literature which examines New Deal spending from a political angle. Beginning with Gavin Wright’s (1974) study, scholars have generally concluded that allocations of spending across the states were directed more by Roosevelt’s electoral concerns than by economic need (Couch and Shughart 1998), since a disproportionate share of federal spending under the New Deal went to the potential swing states. Anderson and Tollison (1991) find that spending was also heavily influenced by congressional self-interest. In contrast, Wallis (1987) presents evidence that both political interest and economic need were important by noting that payments to Southern states were lower in part due to their reluctance to take advantage of federal matching grants. Most recently, Couch and Shughart (2000) test the matching grant hypothesis on one component of New Deal spending, namely the Works Progress Administration (WPA). They find that federal land ownership, political self-interest, and state economic need were all contributory factors to determining the allocation of WPA spending across the states. Wallis (1998) also showed that much of the prior empirical analysis of New Deal distributions depended critically on the inclusion or exclusion of Nevada, a state unique in its low population density and large proportion of federal land. The political aspects of New Deal spending are also summarized in Fishback’s (1999) review. Fleck (2001) and Wallis (2001) provide the most recent exchange on this subject.

References

Alesina, Alberto, Nouriel Roubini, and Gerald D. Cohen. Political Cycles and the Macroeconomy, Cambridge, MA: MIT Press, 1997.

Anderson, Gary M. and Robert D. Tollison. “Congressional Influence and Patterns of New Deal Spending.” Journal of Law and Economics 34, (1991): 161-175.

Couch, Jim F. and William F. Shughart. The Political Economy of the New Deal, Cheltenham, UK: Edward Elgar, 1998.

Couch, Jim F. and William F. Shughart. “New Deal Spending and the States: The Politics of Public Works.” In Public Choice Interpretations of American Economic History, edited by Jac C. Heckelman, John C. Moorhouse, and Robert Whaples, 105-122. Norwell, MA: Kluwer Academic Publishers, 2000.

Davidson, Lawrence S., Michele Fratianni and Jurgen von Hagen. “Testing for Political Business Cycles.” Journal of Policy Modeling 12, (1992): 35-59.

Drazen, Allan. Political Economy in Macroeconomics. Princeton: Princeton University Press, 2000.

Fair, Ray. “The Effects of Economic Events on Votes for the President.” Review of Economics and Statistics 60, (1978): 159-173.

Fishback, Price V. “Review of Jim Couch and William F. Shughart II, The Political Economy of the New Deal.” Economic History Services, June 21, 1999. URL: http://www.eh.net/bookreviews/library/0164.shtml

Fleck, Robert K. “Population, Land, Economic Conditions, and the Allocation of New Deal Spending.” Explorations in Economic History 38, (2001): 296-304.

Heckelman, Jac C. “Employment and Gubernatorial Elections during the Gilded Age.” Economics and Politics 10, (1998): 297-309.

Heckelman, Jac and Robert Whaples. “Political Business Cycles before the Great Depression.” Economics Letters 51, (1996): 247-251.

Keech, William R. Economic Politics: The Costs of Democracy. New York: Cambridge University Press, 1995.

Keller, Robert R. and Ann M. May. “The Presidential Political Business Cycle of 1972.” Journal of Economic History 44, (1984): 265-71.

Klein, Michael W. “Timing Is All: Elections and the Duration of the United States Business Cycles.” Journal of Money, Credit and Banking 28, (1996): 84-101.

Nordhaus, William D. “The Political Business Cycle.” Review of Economic Studies 42, (1975): 169-190.

Wallis, John J. “Employment, Politics, and Economic Recovery during the Great Depression.” Review of Economics and Statistics 69, (1987): 516-520.

Wallis, John J. “The Political Economy of New Deal Spending Revisited, Again: With and without Nevada.” Explorations in Economic History 35, (1998): 140-170.

Wallis, John J. “The Political Economy of New Deal Spending, Yet Again: A Reply to Fleck.” Explorations in Economic History 38, (2001): 305-314.

Wright, Gavin. “The Political Economy of New Deal Spending.” Review of Economics and Statistics 56, (1974): 30-38.

1 See also Drazen (2000, pp. 231-232) for a brief discussion of Nixon’s manipulation of taxation policy and Social Security payments.

Citation: Heckelman, Jac. “Historical Political Business Cycles in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL
http://eh.net/encyclopedia/historical-political-business-cycles-in-the-united-states/

Origins of Commercial Banking in the United States, 1781-1830

Robert E. Wright, University of Virginia

Early U.S. commercial banks were for-profit business firms, usually structured as joint-stock companies. Many, but by no means all, obtained corporate charters from their respective state legislatures. Although politically controversial, commercial banks, the number and assets of which grew quickly after 1800, played a key role in early U.S. economic growth.1 Commercial banks, savings banks, insurance companies and other financial intermediaries helped to fuel growth by channeling wealth from savers to entrepreneurs. Those entrepreneurs used the loans to increase the profitability of their businesses and hence the efficiency of the overall economy.

Description of the Early Commercial Banking Business

As financial intermediaries, commercial banks pooled the wealth of a large number of savers and lent fractions of that pool to a diverse group of enterprising business firms. The best way to understand how early commercial banks functioned is to examine a typical bank balance sheet.2 Banks essentially borrowed wealth from their liability holders and re-lent that wealth to the issuers of their assets. Banks profited from the difference between the cost of their liabilities and the net return from their assets.

Assets of a Typical Commercial Bank

A typical U.S. commercial bank in the late eighteenth and early nineteenth centuries owned assets such as specie, the notes and deposits of other banks, commercial paper, public securities, mortgages, and real estate. Investment in real estate was minimal, usually simply to provide the bank with an office in which to conduct business. Commercial banks used specie, i.e. gold and silver (usually minted into coins but sometimes in the form of bars or bullion), and their claims on other banks (notes and/or deposits) to pay their creditors (liability holders). They also owned public securities like government bonds and corporate equities. Sometimes they owned a small number of mortgages, long-term loans collateralized by real property. Most bank assets, however, were discount loans collateralized by commercial paper, i.e. bills of exchange and promissory notes “discounted” at the bank by borrowers.

Discount Loans Described

Most bank loans were “discount” loans, not “simple” loans. Unlike a simple loan, where the interest and principal fall due when the loan matures, a discount requires only the repayment of the principal on the due date. That is because the borrower receives only the discounted present value of the principal at the time of the loan, not the full principal sum.

For example, with a simple loan of $100 at 6 percent interest, of exactly one year’s duration, the borrower receives $100 today and must repay the lender $106 in one year. With a discount loan, the borrower repays $100 at the end of the year but receives only $94.34 today.3
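The arithmetic of the two loan types can be checked with a short calculation (a sketch using only the figures from the example above, and the present-value formula given in note 3):

```python
# Simple loan vs. discount loan: $100 principal, 6% annual rate, one year.
principal = 100.0
rate = 0.06

# Simple loan: borrower receives the full principal today and
# repays principal plus interest at maturity.
simple_received = principal                      # 100.00 today
simple_repaid = principal * (1 + rate)           # 106.00 in one year

# Discount loan: borrower repays only the principal at maturity,
# but receives just its discounted present value today:
# PV = FV / (1 + i)^n, with n = 1 compounding period.
discount_received = principal / (1 + rate) ** 1  # 94.34 today (rounded)
discount_repaid = principal                      # 100.00 in one year

print(round(simple_repaid, 2))       # 106.0
print(round(discount_received, 2))   # 94.34
```

Note that because the discount uses the present-value formula (rather than simply deducting $6 from the principal), the effective annual rate on both loans is the same 6 percent: $100 repaid on $94.34 received is 100/94.34 ≈ 1.06.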

Commercial Bank Liabilities

Commercial banks acquired wealth to purchase assets by issuing several types of liabilities. Most early banks were joint-stock companies, so they issued equities (“stock”) in an initial public offering (IPO). Those common shares were not redeemable. In other words, stockholders could not demand that the bank exchange their shares for cash. Stockholders who wished to recoup their investments could do so only by selling their shares to other investors in the secondary “stock” market. Because its common shares were irredeemable, a bank’s “capital stock” was its most certain source of funds.

Holders of other types of bank liabilities, including banknotes and checking deposits, could redeem their claims during the issuing bank’s open hours of operation, which were typically four to six hours a day, Monday through Saturday. A holder of a deposit liability could “cash out” by physically withdrawing funds (in banknotes or specie) or by writing a check to a third party against his or her deposit balance. A holder of a banknote, an engraved promissory note payable to the bearer very similar to today’s Federal Reserve notes,4 could physically visit the issuing bank to redeem the sum printed on the note in specie or other current funds, at the holder’s option. Or, a banknote holder could simply use the notes as currency, to make retail purchases, repay debts, make loans, etc.

After selling its shares to investors, and perhaps attracting some deposits, early banks would begin to accept discount loan applications. Successful applicants would receive the loan as a credit in their checking accounts, in banknotes, in specie, or in some combination thereof. Those banknotes, deposits, and specie traveled from person to person to make purchases and remittances. Eventually, the notes and deposits returned to the bank of issue for payment.

Balance Sheet Management

Early banks had to manage their balance sheets carefully. They “failed” or “broke,” i.e. became legally insolvent, if they could not meet the demands of liability holders with prompt specie payment. Bankers, therefore, had to keep ample amounts of gold and silver in their banks’ vaults in order to remain in business. Because specie paid no interest, however, bankers had to be careful not to accumulate too much of the precious metals lest they sacrifice the bank’s profitability to its safety. Interest-bearing public securities, like U.S. Six Percent bonds, often served as “secondary reserves” that generated income but that bankers could quickly sell to raise cash, if necessary.

When bankers found that their reserves were declining too precipitously they slowed or stopped discounting until reserve levels returned to safe levels. Discount loans were not callable.5 Bankers therefore made discounts for short terms only, usually from a few days to six months. If the bank’s condition allowed, borrowers could negotiate a new discount to repay one coming due, effectively extending the term of the loan. If the bank’s condition precluded further extension of the loan, however, borrowers had to pay up or face a lawsuit. Bankers quickly learned to stagger loan due dates so that a steady stream of discounts was constantly coming up for renewal. In that way, bankers could, if necessary, quickly reduce the outstanding volume of discounts by denying renewals.

Reduction of Information Asymmetry

Early bankers maintained profitability by keeping losses from defaults less than the gains from interest revenues.6 They kept defaults at an acceptably low level by reducing what financial theorists call “information asymmetry.” The two major types of information asymmetry are adverse selection, which occurs before a contract is made, and moral hazard, which arises after the contract is made. The information is asymmetrical or unequal because loan applicants and borrowers naturally know more about their creditworthiness than lenders do. (More generally, sellers know more about their goods and services than buyers do.) Bankers, in other words, must create information about loan applicants and borrowers so that they can assess the risk of default and make a rational decision about whether to make or to continue a loan.

Adverse Selection

Adverse selection arises from the fact that risky borrowers are more eager for loans, especially at high interest rates, than safe borrowers. As Adam Smith put it, interest rates “so high as eight or ten per cent” attract only “prodigals and projectors, who alone would be willing to give this high interest.” “Sober people,” he continued, “who will give for the use of money no more than a part of what they are likely to make by the use of it, would not venture into the competition.”

Adverse selection is also known as the “lemons problem” because a classic example of it occurs in the unintermediated market for used cars. Potential buyers have difficulty discerning good cars, the “peaches,” from breakdown-prone cars, the “lemons.” Sellers naturally know whether their cars are peaches or lemons. So information about the car is asymmetrical — the seller knows the true value but the buyer does not. Potential buyers quite rationally offer the average market price for cars of a particular make, model, and mileage. An owner of a peach naturally scoffs at the average offer. A lemon owner, on the other hand, will jump at the opportunity to unload his heap for more than its real value. If we recall that borrowers are essentially sellers of securities called loans, the adverse selection problem in financial markets should be clear. Lenders that do not reduce information asymmetry will purchase only lemon-like loans because their offer of a loan at average interest will appear too dear to good borrowers but will look quite appealing to risky “prodigals and projectors.”

Moral Hazard

Moral hazard arises from the fact that people are basically self-interested. If given the opportunity, they will renege on contracts by engaging in risky activities with, or even outright stealing, lenders’ wealth. For instance, a borrower might decide to use a loan to try his luck at the blackjack table in Atlantic City rather than to purchase a computer or other efficiency-increasing tool for his business. Another borrower might have the means to repay the loan but default on it anyway so that she can use the resources to take a vacation to Aruba.

In order to reduce the risk of default due to information asymmetry, lenders must create information about borrowers. Early banks created information by screening discount applicants to reduce adverse selection and by monitoring loan recipients and requiring collateral to reduce moral hazard. Screening procedures included probing the applicant’s credit history and current financial condition. Monitoring procedures included the evaluation of the flow of funds through the borrower’s checking account and the negotiation of restrictive covenants specifying the uses to which a particular loan would be put. Banks could also require borrowers to post collateral, i.e. property they could seize in case of default. Real estate, slaves, co-signers, and financial securities were common forms of collateral.

A Short History of Early American Commercial Banks

Colonial Experiments

Colonial America witnessed the formation of several dozen “banks,” only a few of which were commercial banks. Most of the colonial banks were “land banks” that made mortgage loans. Additionally, many of them were government agencies and not businesses. All of the handful of colonial banks that could rightly be called commercial banks, i.e. that discounted short-term commercial paper, were small and short-lived. Some, like that of Alexander Cummings, were fraudulent. Others, like that of Philadelphia merchants Robert Morris and Thomas Willing, ran afoul of English laws and had to be abandoned.

The First U.S. Commercial Banks

The development of America’s commercial banking sector, therefore, had to await the Revolution. No longer blocked by English law, Morris, Willing, and other prominent Philadelphia merchants moved to establish a joint-stock commercial bank. The young republic’s shaky war finances added urgency to the bankers’ request to charter a bank, a request that Congress and several state legislatures soon accepted. By 1782, that new bank, the Bank of North America, had granted a significant volume of loans to both the public and private sectors. New Yorkers, led by Alexander Hamilton, and Bostonians, led by William Phillips, were not to be outdone and by early 1784 had created their own commercial banks. By the end of the eighteenth century, mercantile leaders in over a dozen other cities had also formed commercial banks. (See Table 1.)

Table 1:
Names, Locations, Charter or Establishment Dates, and Authorized Capitals of the First U.S. Commercial Banks, 1781-1799

Name Location Year of Charter (Year of Establishment) Authorized Capital (in U.S. dollars)
Bank of North America Philadelphia, Pennsylvania 1781*/1782/1786** $400,000 (increased to $2,000,000 in 1787)
The Bank of New York Manhattan, New York (1784) 1791 $1,000,000
The Massachusetts Bank Boston, Massachusetts 1784 $300,000
The Bank of Maryland Baltimore, Maryland 1790 $300,000
The Bank of the United States Philadelphia, Pennsylvania 1791* $10,000,000
The Bank of Providence Providence, Rhode Island 1791 $500,000
New Hampshire Bank Portsmouth, New Hampshire 1792 $200,000
The Bank of Albany Albany, New York 1792 $260,000
Hartford Bank Hartford, Connecticut 1792 $100,000
Union Bank New London, Connecticut 1792 $50,000-100,000
Union Bank Boston, Massachusetts 1792 $400,000-800,000
New Haven Bank New Haven, Connecticut 1792 $100,000 (increased to $400,000 in 1795)
Bank of Alexandria Alexandria, Virginia 1792 $150,000 (increased to $500,000 in 1795)
Essex Bank Salem, Massachusetts (1792) 1799 $100,000-400,000
Bank of Richmond Richmond, Virginia (1792) n/a
Bank of South Carolina Charleston, South Carolina (1792) 1801 $200,000
Bank of Columbia Hudson, New York 1793 $160,000
Bank of Pennsylvania Philadelphia, Pennsylvania 1793 $3,000,000
Bank of Columbia Washington, D.C. 1793 $1,000,000
Nantucket Bank Nantucket, Massachusetts 1795 $40,000-100,000
Merrimack Bank Newburyport, Massachusetts 1795 $70,000-150,000
Middletown Bank Middletown, Connecticut 1795 $100,000-400,000
Bank of Baltimore Baltimore, Maryland 1795 $1,200,000
Bank of Rhode Island Newport, Rhode Island 1795 $500,000
Bank of Delaware Wilmington, Delaware 1796 $500,000
Norwich Bank Norwich, Connecticut 1796 $75,000-200,000
Portland Bank Portland, Maine 1799 $300,000
Manhattan Company New York, New York 1799# $2,000,000

Source: Fenstermaker (1965); Davis (1917)

* = National charter.
** = The Bank of North America gained a second charter in 1786 after its original Pennsylvania state charter was revoked. Pennsylvania, Massachusetts, and New York chartered the bank in 1782.
# = This firm was chartered as a water utility company but began banking operations almost immediately.

Banking and Politics

The first U.S. commercial banks helped early national businessmen to overcome a classic postwar “crisis of liquidity,” caused by a shortage of cash, and to adjust to an increased emphasis on the notion that “time is money.” Many colonists had been content to allow debts to remain unsettled for years and even decades. After experiencing the devastating inflation of the Revolution, however, many Americans came to see prompt payment of debts and strict performance of contracts as virtues. Banks helped to condition individuals and firms to the new, stricter business procedures.

Early U.S. commercial banks had political roots as well. Many Revolutionary elites saw banks, and other modern financial institutions, as a means of social control. The power vacuum left after the withdrawal of British troops and leading Loyalist families had to be filled, and many members of the commercial elite wished to fill it and to justify their control with an ideology of meritocracy. By providing loans to entrepreneurs based on the merits of their businesses, and not their genealogies, banks and other financial intermediaries helped to spread the notion that wealth and power should be allocated to the most able members of post-Revolutionary society, not to the oldest or best groomed families.

Growth of the Commercial Banking Sector

After 1800, the number, authorized capital, and assets of commercial banks grew rapidly. (See Table 2.) As early as 1820, the assets of U.S. commercial banks equaled about 50 percent of U.S. aggregate output, a figure that the commercial banking sectors of most of the world’s nations had not achieved by 1990.

Table 2:
Numbers, Authorized Capitals, and Estimated Assets of Incorporated U.S. Commercial Banks, 1800-1830

Year No. Banks Authorized Capital (in millions $U.S.) Estimated Assets (in millions $U.S.)
1800 29 27.42 49.74
1801 33 29.17 52.66
1802 36 30.03 50.00
1803 54 34.90 58.69
1804 65 41.17 67.07
1805 72 48.87 82.39
1806 79 51.34 94.11
1807 84 53.43 90.47
1808 87 51.49 92.04
1809 93 55.19 100.23
1810 103 66.19 108.87
1811 118 76.29 142.65
1812 143 84.49 161.89
1813 147 87.00 187.23
1814 202 110.02 233.53
1815 212 115.23 197.16
1816 233 158.98 270.30
1817 263 172.84 316.47
1818 339 195.31 331.41
1819 342 195.98 349.66
1820 328 194.60 341.42
1821 274 181.23 345.93
1822 268 177.53 307.86
1823 275 173.67 283.10
1824 301 185.75 328.16
1825 331 191.08 347.65
1826 332 190.98 349.60
1827 334 192.51 379.03
1828 356 197.41 344.56
1829 370 201.06 349.72
1830 382 205.40 403.45

Sources: For total banks and authorized bank capital, see Fenstermaker (1965). I added the Bank of the United States and the Second Bank of the United States to his figures. I estimated assets by multiplying the total authorized capital by the average ratio of assets to actual capital from a large sample of balance sheet data.
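The estimation procedure described in the source note can be illustrated for a single year (the 1800 figures below come from Table 2; the ratio shown is simply the one implied by those figures, not Fenstermaker's underlying balance sheet sample):

```python
# Illustrative reconstruction of the Table 2 asset estimate for 1800.
authorized_capital_1800 = 27.42   # millions of dollars (Table 2)
estimated_assets_1800 = 49.74     # millions of dollars (Table 2)

# The assets-to-capital ratio implied by the published estimate;
# multiplying authorized capital by such a ratio yields estimated assets.
implied_ratio = estimated_assets_1800 / authorized_capital_1800
print(round(implied_ratio, 2))    # 1.81
print(round(authorized_capital_1800 * implied_ratio, 2))   # 49.74
```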

Commercial banks caused considerable political controversy in the U.S. As the first large, usually corporate, for-profit business firms, banks took the brunt of reactionary “agrarian” rhetoric designed to thwart, or at least slow down, the post-Revolution modernization of the U.S. economy. Early bank critics, however, failed to see that their own reactionary policies caused or exacerbated the supposed evils of the banking system.

For instance, critics argued that the lending decisions of early banks were politically motivated and skewed in favor of rich merchants. Such was indeed the case. Overly stringent laws, usually championed by the agrarian critics themselves, forced bankers into that lending pattern. Many early bank charters forbade banks to raise additional equity capital or to increase interest rates above a low ceiling or usury cap, usually 6 percent per year. When market interest rates were above the usury cap, as they almost always were, banks were naturally swamped with discount applications. Forbidden by law to increase interest rates or to raise additional equity capital, banks were forced to ration credit. They naturally lent to the safest borrowers, those most known to the bank and those with the highest wealth levels.

Early banks were extremely profitable and therefore aroused considerable envy. Critics claimed that bank dividends greater than six percent were prima facie evidence that banks routinely made discounts at illegally high rates. In fact, banks earned more than they charged on discounts because they lent out more, often substantially more, than their capital base. It was not unusual, for example, for a bank with $1,000,000 equity capital to have an average of $2,000,000 on loan. The six percent interest on that sum would generate $120,000 of gross revenue, minus say $20,000 for operating expenses, leaving $100,000 to be divided among stockholders, a dividend of ten percent. More highly leveraged banks, i.e. banks with higher asset to capital ratios, could earn even more.
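The dividend arithmetic in the example above can be verified directly (all figures are the ones given in the text):

```python
# Why a 6% lending rate could legally yield a 10% dividend: leverage.
equity_capital = 1_000_000       # stockholders' capital
loans_outstanding = 2_000_000    # bank lends twice its capital base
rate = 0.06                      # legal usury cap on discounts
operating_expenses = 20_000

gross_revenue = loans_outstanding * rate           # 120,000
net_income = gross_revenue - operating_expenses    # 100,000
dividend_rate = net_income / equity_capital        # 0.10

print(f"{dividend_rate:.0%}")    # 10%
```

Raising the asset-to-capital ratio raises the return on equity proportionally, which is why critics mistook high dividends for evidence of illegally high discount rates.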

Early banks also caused considerable political controversy when they attempted to gain a charter, a special act of legislation that granted corporate privileges such as limited stockholder liability, the ability to sue in courts of law in the name of the bank, etc. Because early banks were lucrative, politicians and opposing interest groups fought each other bitterly over charters. Rival commercial factions sought to establish the first bank in emerging commercial centers while rival political parties struggled to gain credit for establishing new banking facilities. Politicians soon discovered that they could extract overt bonuses, taxes, and even illegal bribes from bank charter applicants. Again, critics unfairly blamed banks for problems over which bankers had little control.

The Economic Importance of Early U.S. Commercial Banks

Despite the efforts of a few critics, most Americans rejected anti-bank rhetoric and supported the controlled growth of the commercial banking sector. They did so because they understood what some modern economists do not, namely, that commercial banks helped to increase per capita aggregate output. Unfortunately, the discussion of banks’ role in economic growth has been much muddied by monetary issues. Banknotes circulated as cash, just as today’s Federal Reserve notes do. Most scholars, therefore, have concentrated on early banks’ role in the monetary system. In general, early banks caused the money supply to be procyclical. In other words, they made the money supply expand rapidly during business cycle “booms,” thereby causing inflation, and they made the money supply contract sharply during recessions, thereby causing ruinous price deflation.

The economic importance of early banks, therefore, lies not in their monetary role but in their capacity as financial intermediaries. At first glance, intermediation may seem a rather innocuous process — lenders are matched to borrowers. Upon further inspection, however, it is clear that intermediation is a crucial economic process. Economies devoid of financial intermediation, like those of colonial America, grow slowly because firms with profitable ideas find it difficult to locate financial backers. Without intermediaries, search costs, i.e. the costs of finding a counterparty, and information creation costs, i.e. the costs of reducing information asymmetry (adverse selection and moral hazard), are so high that few loans are made. Profitable ideas cannot be implemented and the economy stagnates.

Intermediaries reduce both search and information costs. Rather than hunt blindly for counterparties, for instance, both savers and entrepreneurs needed only to find the local bank, a major reduction in search costs. Additionally, banks, as large, specialized lenders, were able to reduce information asymmetry more efficiently than smaller, less-specialized lenders, like private individuals.

By lowering the total cost of borrowing, commercial banks increased the volume of loans made and hence the number of profitable ideas that entrepreneurs brought to fruition. Commercial banks, for instance, allowed firms to implement new technologies, to increase labor specialization, and to take advantage of economies of scale and scope. As those firms grew more profitable, they created new wealth, driving economic growth.

Additional Reading

Important recent books about early U.S. commercial banking include:

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. New York: Cambridge University Press. 2000.

Cowen, David J. The Origins and Economic Impact of the First Bank of the United States, 1791-1797. New York: Garland Publishing, 2000.

Lamoreaux, Naomi. Insider Lending: Banks, Personal Connection, and Economic Development in Industrial New England. New York: Cambridge University Press, 1994.

Wright, Robert E. Origins of Commercial Banking in America, 1750-1800. Lanham, MD: Rowman & Littlefield. 2001.

Important recent overviews of the wider early U.S. financial sector are:

Perkins, Edwin J. American Public Finance and Financial Services, 1700-1815. Columbus: Ohio State University Press, 1994.

Sylla, Richard. “U.S. Securities Markets and the Banking System, 1790-1840.” Federal Reserve Bank of St. Louis Review 80 (1998): 83-104.

Wright, Robert. The Wealth of Nations Rediscovered: Integration and Expansion in American Financial Markets, 1780-1850. New York: Cambridge University Press. 2002.

Classic histories of early U.S. banks and banking include:

Cleveland, Harold van B., Thomas Huertas, et al. Citibank, 1812-1970. Cambridge: Harvard University Press, 1985.

Davis, Joseph S. Essays in the Earlier History of American Corporations. New York: Russell & Russell, 1917.

Eliason, Adolph O. “The Rise of Commercial Banking Institutions in the United States.” Ph.D. diss., University of Minnesota, 1901.

Fenstermaker, J. Van. The Development of American Commercial Banking: 1782-1837. Kent, Ohio: Kent State University, 1965.

Fenstermaker, J. Van and John E. Filer. “Impact of the First and Second Banks of the United States and the Suffolk System on New England Bank Money: 1791-1837.” Journal of Money, Credit and Banking 18 (1986): 28-40.

Gras, N. S. B. The Massachusetts First National Bank of Boston, 1784-1934. Cambridge: Harvard University Press, 1937.

Green, George. Finance and Economic Development in the Old South: Louisiana Banking, 1804-1861. Stanford: Stanford University Press, 1972.

Hammond, Bray. Banks and Politics in America, from the Revolution until the Civil War. Princeton: Princeton University Press, 1957.

Hedges, Joseph Edward. Commercial Banking and the Stock Market Before 1863. Baltimore: Johns Hopkins Press, 1938.

Hunter, Gregory. The Manhattan Company: Managing a Multi-Unit Corporation in New York, 1799-1842. New York: Garland Publishing, 1989.

Redlich, Fritz. The Molding of American Banking: Men and Ideas. New York: Johnson Reprint Corporation, 1968.

Schweikart, Larry. Banking in the American South from the Age of Jackson to Reconstruction. Baton Rouge: Louisiana State University Press, 1987.

Smith, Walter Buckingham. Economic Aspects of the Second Bank of the United States. Cambridge: Harvard University Press, 1953.

Wainwright, Nicholas B. History of the Philadelphia National Bank: A Century and a Half of Philadelphia Banking, 1803-1953. Philadelphia: Philadelphia National Bank, 1953.

1 Which is to say that they increased real per capita aggregate output. Aggregate output is the total dollar value of goods and services produced in a year. It can be measured in different ways, the two most widely used of which are Gross National Product (GNP) and Gross Domestic Product (GDP). The term per capita refers to the total population. Aggregate output may increase simply because of additional people, so economists must take population growth into consideration. Similarly, nominal aggregate output might increase simply because of price inflation. Real aggregate output means output adjusted to account for price changes (inflation or deflation). Real per capita aggregate output, therefore, measures the economy’s “size,” adjusting for changes in population and prices.

2 A balance sheet is simply a summary financial statement that lists what a firm owns (its assets) as well as what it owes (its liabilities).

3 Early bankers used the formula for present value familiar to us today: PV = FV/(1+i)^n, where PV = present value (sum received today), FV = future value (principal sum), i = annual interest rate, and n = the number of compounding periods, which in this example is one. So, PV = 100/1.06 = 94.3396, or $94.34.
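The footnote's formula and arithmetic can be checked mechanically; a minimal sketch (the function name is my own):

```python
def present_value(fv, i, n=1):
    """PV = FV / (1 + i)**n: the sum today equivalent to fv due after n periods at rate i."""
    return fv / (1 + i) ** n

# The footnote's example: $100 due in one year, discounted at 6 percent per annum.
print(round(present_value(100, 0.06), 2))  # 94.34
```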

4

5 In other words, banks could not demand early repayment from borrowers.

6 In order to maintain bank revenues, bankers are willing, under competitive conditions, to take some risks and therefore to suffer some defaults. For example, making a simple year-long loan of $100 at 10 percent per annum, if the banker determines that the borrower represents, say, only a 5 percent chance of default, is clearly superior to not lending at all and forgoing the $10 of interest revenue. Early U.S. banks, however, rarely faced such risk-return tradeoffs. Because the supply of bank loans was inadequate to meet the huge demand for them, and because usury laws prevented banks from raising their interest rates above certain low levels, usually around 6 to 7 percent, bankers could afford to lend only to the safest risks. Early bankers, in other words, usually faced the problem of too many good borrowers, not too few.

Citation: Wright, Robert. “Origins of Commercial Banking in the United States, 1781-1830.” EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL
http://eh.net/encyclopedia/origins-of-commercial-banking-in-the-united-states-1781-1830/

US Banking History, Civil War to World War II

Richard S. Grossman, Wesleyan University

The National Banking Era Begins, 1863

The National Banking Acts of 1863 and 1864

The National Banking era was ushered in by the passage of the National Currency (later renamed the National Banking) Acts of 1863 and 1864. The Acts marked a decisive change in the monetary system, confirmed a quarter-century-old trend in bank chartering arrangements, and also played a role in financing the Civil War.

Provision of a Uniform National Currency

As its original title suggests, one of the main objectives of the legislation was to provide a uniform national currency. Prior to the establishment of the national banking system, the national currency supply consisted of a confusing patchwork of bank notes issued under a variety of rules by banks chartered under different state laws. Notes of sound banks circulated side-by-side with notes of banks in financial trouble, as well as those of banks that had failed (not to mention forgeries). In fact, bank notes frequently traded at a discount, so that a one-dollar note of a smaller, less well-known bank (or, for that matter, of a bank at some distance) would likely have been valued at less than one dollar by someone receiving it in a transaction. The confusion was such as to lead to the publication of magazines that specialized in printing pictures, descriptions, and prices of various bank notes, along with information on whether or not the issuing bank was still in existence.

Under the legislation, newly created national banks were empowered to issue national bank notes backed by a deposit of US Treasury securities with their chartering agency, the Department of the Treasury’s Comptroller of the Currency. The legislation also placed a tax on notes issued by state banks, effectively driving them out of circulation. Bank notes were of uniform design and, in fact, were printed by the government. The amount of bank notes a national bank was allowed to issue depended upon the bank’s capital (which was also regulated by the act) and the amount of bonds it deposited with the Comptroller. The relationship between bank capital, bonds held, and note issue was changed by laws in 1874, 1882, and 1900 (Cagan 1963, James 1976, and Krooss 1969).

Federal Chartering of Banks

A second element of the Act was the introduction of bank charters issued by the federal government. From the earliest days of the Republic, banking had been considered primarily the province of state governments.[1] Originally, individuals who wished to obtain a banking charter had to approach the state legislature, which then decided whether the applicant was of sufficient moral standing to warrant a charter and whether the region in question needed an additional bank. These decisions may well have been influenced by bribes and political pressure, both from the prospective banker and from established bankers who may have hoped to block the entry of new competitors.

An important shift in state banking practice had begun with the introduction of free banking laws in the 1830s. Beginning with laws passed in Michigan (1837) and New York (1838), free banking laws changed the way banks obtained charters. Rather than apply to the state legislature and receive a decision on a case-by-case basis, individuals could obtain a charter by filling out some paperwork and depositing a prescribed amount of specified bonds with the state authorities. By 1860, over one half of the states had enacted some type of free banking law (Rockoff 1975). By regularizing and removing legislative discretion from chartering decisions, the National Banking Acts spread free banking on a national level.

Financing the Civil War

A third important element of the National Banking Acts was that they helped the Union government pay for the war. Adopted in the midst of the Civil War, the requirement for banks to deposit US bonds with the Comptroller maintained the demand for Union securities and helped finance the war effort.[2]

Development and Competition with State Banks

The National Banking system grew rapidly at first (Table 1). Much of the increase came at the expense of the state-chartered banking systems, which contracted over the same period, largely because they were no longer able to issue notes. The expansion of the new system did not lead to the extinction of the old: the growth of deposit-taking, combined with less stringent capital requirements, convinced many state bankers that they could do without either the ability to issue banknotes or a federal charter, and led to a resurgence of state banking in the 1880s and 1890s. Under the original acts, the minimum capital requirement for national banks was $50,000 for banks in towns with a population of 6000 or less, $100,000 for banks in cities with a population ranging from 6000 to 50,000, and $200,000 for banks in cities with populations exceeding 50,000. By contrast, the minimum capital requirement for a state bank was often as low as $10,000. The difference in capital requirements may have been an important factor in the resurgence of state banking: in 1877 only about one-fifth of state banks had a capital of less than $50,000; by 1899 the proportion was over three-fifths. Recognizing this competition, the Gold Standard Act of 1900 reduced the minimum capital necessary for national banks. It is questionable whether regulatory competition (both between states and between states and the federal government) kept regulators on their toes or encouraged a “race to the bottom,” that is, lower and looser standards.

Table 1: Numbers and Assets of National and State Banks, 1863-1913

Number of Banks Assets of Banks ($millions)
Year National Banks State Banks National Banks State Banks
1863 66 1466 16.8 1185.4
1864 467 1089 252.2 725.9
1865 1294 349 1126.5 165.8
1866 1634 297 1476.3 154.8
1867 1636 272 1494.5 151.9
1868 1640 247 1572.1 154.6
1869 1619 259 1564.1 156.0
1870 1612 325 1565.7 201.5
1871 1723 452 1703.4 259.6
1872 1853 566 1770.8 264.5
1873 1968 277 1851.2 178.9
1874 1983 368 1851.8 237.4
1875 2076 586 1913.2 395.2
1876 2091 671 1825.7 405.9
1877 2078 631 1774.3 506.9
1878 2056 510 1770.4 388.8
1879 2048 648 2019.8 427.6
1880 2076 650 2035.4 481.8
1881 2115 683 2325.8 575.5
1882 2239 704 2344.3 633.8
1883 2417 788 2364.8 724.5
1884 2625 852 2282.5 760.9
1885 2689 1015 2421.8 802.0
1886 2809 891 2474.5 807.0
1887 3014 1471 2636.2 1003.0
1888 3120 1523 2731.4 1055.0
1889 3239 1791 2937.9 1237.3
1890 3484 2250 3061.7 1374.6
1891 3652 2743 3113.4 1442.0
1892 3759 3359 3493.7 1640.0
1893 3807 3807 3213.2 1857.0
1894 3770 3810 3422.0 1782.0
1895 3715 4016 3470.5 1954.0
1896 3689 3968 3353.7 1962.0
1897 3610 4108 3563.4 1981.0
1898 3582 4211 3977.6 2298.0
1899 3583 4451 4708.8 2707.0
1900 3732 4659 4944.1 3090.0
1901 4165 5317 5675.9 3776.0
1902 4535 5814 6008.7 4292.0
1903 4939 6493 6286.9 4790.0
1904 5331 7508 6655.9 5244.0
1905 5668 8477 7327.8 6056.0
1906 6053 9604 7784.2 6636.0
1907 6429 10761 8476.5 7190.0
1908 6824 12062 8714.0 6898.0
1909 6926 12398 9471.7 7407.0
1910 7145 13257 9896.6 7911.0
1911 7277 14115 10383 8412.0
1912 7372 14791 10861.7 9005.0
1913 7473 15526 11036.9 9267.0

Source: U.S. Department of the Treasury. Annual Report of the Comptroller of the Currency (1931), pp. 3, 5. State bank columns include data on state-chartered commercial banks and loan and trust companies.
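The resurgence of state banking described above is visible directly in Table 1's asset columns. A short tabulation (figures transcribed from the table):

```python
# Asset columns ($ millions) transcribed from Table 1: (national banks, state banks).
assets = {
    1865: (1126.5, 165.8),
    1890: (3061.7, 1374.6),
    1913: (11036.9, 9267.0),
}

# The state-bank share of combined commercial bank assets collapses after the
# 1863-64 Acts, then recovers steadily through the 1880s and 1890s.
for year, (national, state) in sorted(assets.items()):
    share = state / (national + state)
    print(f"{year}: state banks held {share:.1%} of combined bank assets")
```

The state-bank share falls to roughly an eighth of the total in 1865 and climbs back toward half by 1913.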

Capital Requirements and Interest Rates

The relatively high minimum capital requirement for national banks may have contributed to regional interest rate differentials in the post-Civil War era. The period from the Civil War through World War I saw a substantial decline in interregional interest rate differentials. According to Lance Davis (1965), the decline can be explained by the development and spread of the commercial paper market, which increased the interregional mobility of funds. Richard Sylla (1969) argues that the high minimum capital requirements established by the National Banking Acts represented barriers to entry and therefore led to local monopolies by note-issuing national banks. These local monopolies in capital-short regions led to the persistence of interest rate spreads.[3] (See also James 1976b.)

Bank Failures

Financial crises were a common occurrence in the National Banking era. O.M.W. Sprague (1910) classified the main financial crises during the era as occurring in 1873, 1884, 1890, 1893, and 1907, with those of 1873, 1893, and 1907 being regarded as full-fledged crises and those of 1884 and 1890 as less severe.

Contemporary observers complained of both the persistence and the ill effects of bank failures under the new system.[4] The number and assets of failed national and non-national banks during the National Banking era are shown in Table 2. Suspensions — temporary closures of banks unable to meet demand for their liabilities — were even more numerous during this period.

Table 2: Bank Failures, 1865-1913

Number of Failed Banks Assets of Failed Banks ($millions)
Year National Banks Other Banks National Banks Other Banks
1865 1 5 0.1 0.2
1866 2 5 1.8 1.2
1867 7 3 4.9 0.2
1868 3 7 0.5 0.2
1869 2 6 0.7 0.1
1870 0 1 0.0 0.0
1871 0 7 0.0 2.3
1872 6 10 5.2 2.1
1873 11 33 8.8 4.6
1874 3 40 0.6 4.1
1875 5 14 3.2 9.2
1876 9 37 2.2 7.3
1877 10 63 7.3 13.1
1878 14 70 6.9 26.0
1879 8 20 2.6 5.1
1880 3 10 1.0 1.6
1881 0 9 0.0 0.6
1882 3 19 6.0 2.8
1883 2 27 0.9 2.8
1884 11 54 7.9 12.9
1885 4 32 4.7 3.0
1886 8 13 1.6 1.3
1887 8 19 6.9 2.9
1888 8 17 6.9 2.8
1889 8 15 0.8 1.3
1890 9 30 2.0 10.7
1891 25 44 9.0 7.2
1892 17 27 15.1 2.7
1893 65 261 27.6 54.8
1894 21 71 7.4 8.0
1895 36 115 12.1 11.3
1896 27 78 12.0 10.2
1897 38 122 29.1 17.9
1898 7 53 4.6 4.5
1899 12 26 2.3 7.8
1900 6 32 11.6 7.7
1901 11 56 8.1 6.4
1902 2 43 0.5 7.3
1903 12 26 6.8 2.2
1904 20 102 7.7 24.3
1905 22 57 13.7 7.0
1906 8 37 2.2 6.6
1907 7 34 5.4 13.0
1908 24 132 30.8 177.1
1909 9 60 3.4 15.8
1910 6 28 2.6 14.5
1911 3 56 1.1 14.0
1912 8 55 5.0 7.8
1913 6 40 7.6 6.2

Source: U.S. Department of the Treasury. Annual Report of the Comptroller of the Currency (1931), pp. 6, 8.
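Table 2's figures confirm how long failures stayed elevated after the 1893 crisis. A short tabulation (counts transcribed from the table, national plus non-national banks):

```python
# Combined failures (national + other banks) transcribed from Table 2.
failures = {
    1893: 65 + 261,  # the crisis year itself
    1894: 21 + 71,
    1895: 36 + 115,
    1896: 27 + 78,
    1897: 38 + 122,
    1898: 7 + 53,    # back near pre-crisis levels
}

post_crisis = sum(failures[y] for y in range(1894, 1898))
print(f"1893 failures: {failures[1893]}")          # 326
print(f"1894-97 failures combined: {post_crisis}")  # 508
print(f"1898 failures: {failures[1898]}")           # 60
```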

The largest number of failures occurred in the years following the financial crisis of 1893. The number and assets of national and non-national bank failures remained high for four years following the crisis, a period which coincided with the free silver agitation of the mid-1890s, before returning to pre-1893 levels. Other crises were also accompanied by an increase in the number and assets of bank failures. The earliest peak during the national banking era accompanied the onset of the crisis of 1873. Failures subsequently fell, but rose again in the trough of the depression that followed the 1873 crisis. The panic of 1884 saw a slight increase in failures, while the financial stringency of 1890 was followed by a more substantial increase. Failures peaked again following several minor panics around the turn of the century and again at the time of the crisis of 1907.

Among the alleged causes of crises during the national banking era were the facts that the money supply was not sufficiently elastic to accommodate seasonal and other stresses on the money market and that reserves were pyramided. That is, under the National Banking Acts, a portion of banks’ required reserves could be held in national banks in larger cities (“reserve city banks”). Reserve city banks could, in turn, hold a portion of their required reserves in “central reserve city banks,” national banks in New York, Chicago, and St. Louis. In practice, this led to the build-up of reserve balances in New York City. Increased demands for funds in the interior of the country during the autumn harvest season led to substantial outflows of funds from New York, which contributed to tight money market conditions and, sometimes, to panics (Miron 1986).[5]
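The pyramiding mechanism can be illustrated with a rough sketch. The percentages below are illustrative parameters broadly in the spirit of the era's reserve rules, not exact statutory figures:

```python
def correspondent_balance(deposits, reserve_req, frac_with_correspondent):
    """Portion of a bank's required reserve held as a deposit at a correspondent bank."""
    return deposits * reserve_req * frac_with_correspondent

# Illustrative (not statutory) parameters: a country bank holds a 15% reserve,
# three-fifths of it as a balance with a reserve-city correspondent; that bank
# holds a 25% reserve, half of it as a balance in a central reserve city.
country_deposits = 1_000_000.0
to_reserve_city = correspondent_balance(country_deposits, 0.15, 3 / 5)
to_central_city = correspondent_balance(to_reserve_city, 0.25, 1 / 2)

# Each layer passes part of its reserve upward, concentrating funds in New York.
print(round(to_reserve_city), round(to_central_city))  # 90000 11250
```

An autumn withdrawal by the country bank unwinds this chain from the top down, which is the drain on New York the paragraph above describes.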

Attempted Remedies for Banking Crises

Causes of Bank Failures

Bank failures occur when banks are unable to meet the demands of their creditors (in earlier times these were note holders; later on, they were more often depositors). Banks typically do not hold 100 percent of their liabilities in reserves, instead holding some fraction of demandable liabilities in reserve: as long as the flows of funds into and out of the bank are more or less in balance, the bank is in little danger of failing. A withdrawal of deposits that exceeds the bank’s reserves, however, can lead to the bank’s temporary suspension (inability to pay) or, if protracted, failure. The surge in withdrawals can have a variety of causes, including depositor concern about the bank’s solvency (ability to pay depositors), as well as worries about other banks’ solvency that lead to a general distrust of all banks.[6]
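The failure mechanism described above reduces to a simple comparison of withdrawals against reserves. A toy model, with all balance-sheet figures hypothetical:

```python
def bank_status(reserves, withdrawals):
    """A bank meets demand while withdrawals stay within reserves; otherwise it suspends."""
    return "meets demand" if withdrawals <= reserves else "suspends"

# A hypothetical bank holding a 15 percent reserve against $1,000 of deposits.
deposits = 1000.0
reserves = deposits * 0.15  # $150 of cash backing $1,000 of demandable liabilities

print(bank_status(reserves, withdrawals=100.0))  # ordinary flows: meets demand
print(bank_status(reserves, withdrawals=300.0))  # a run: suspends
```

The bank is solvent in both cases on paper; it is the size of the withdrawal relative to reserves, not the quality of its assets, that forces suspension.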

Clearinghouses

Bankers and policy makers attempted a number of different responses to banking panics during the National Banking era. One method of dealing with panics was for the bankers of a city to pool their resources through the local bankers’ clearinghouse and jointly guarantee the payment of every member bank’s liabilities (see Gorton 1985a, b).

Deposit Insurance

Another method of coping with panics was deposit insurance. Eight states (Oklahoma, Kansas, Nebraska, Texas, Mississippi, South Dakota, North Dakota, and Washington) adopted deposit insurance systems between 1908 and 1917 (six other states had adopted some form of deposit insurance in the nineteenth century: New York, Vermont, Indiana, Michigan, Ohio, and Iowa). These systems were not particularly successful, in part because they lacked diversification: because each system operated statewide, when a panic fell with full force on a state, its deposit insurance fund did not have adequate resources to handle every failure. When the agricultural depression of the 1920s hit, a number of these systems failed (Federal Deposit Insurance Corporation 1998).

Double Liability

Another measure adopted to curtail bank risk-taking, and through risk-taking, bank failures, was double liability (Grossman 2001). Under double liability, shareholders who had invested in banks that failed not only lost the money they had invested, but could also be called on by a bank’s receiver to contribute an additional amount up to the par value of their shares (hence the term “double liability,” although clearly the loss to the shareholder need not have been exactly double if the par and market values of the shares differed). Other states instituted triple liability, under which the receiver could call on up to twice the par value of shares owned. Still others had unlimited liability, while others had single, or regular limited, liability.[7] It was argued that banks with double liability would be more risk averse, since shareholders stood to lose a greater amount if the firm went bankrupt.
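The liability regimes described above imply different maximum losses for a failed bank's shareholders. A sketch following the rules in the paragraph above; the share prices are hypothetical:

```python
def max_shareholder_loss(invested, par_value, regime):
    """Maximum loss to a shareholder when the bank fails.

    single    -> lose only the sum invested (ordinary limited liability)
    double    -> the investment plus an assessment up to 1x par value
    triple    -> the investment plus an assessment up to 2x par value
    unlimited -> no upper bound
    """
    if regime == "unlimited":
        return float("inf")
    assessments = {"single": 0, "double": 1, "triple": 2}
    return invested + assessments[regime] * par_value

# Hypothetical shares bought at a $150 market price with a $100 par value:
print(max_shareholder_loss(150, 100, "single"))  # 150
print(max_shareholder_loss(150, 100, "double"))  # 250 (not exactly twice the $150 price)
print(max_shareholder_loss(150, 100, "triple"))  # 350
```

The second line illustrates the article's parenthetical point: the assessment is keyed to par value, so the total loss under "double" liability need not be exactly double the purchase price.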

By 1870, multiple (i.e., double, triple, and unlimited) liability was already the rule for state banks in eighteen states, principally in the Midwest, New England, and Middle Atlantic regions, as well as for national banks. By 1900, multiple liability was the law for state banks in thirty-two states. By this time, the main pockets of single liability were in the south and west. By 1930, only four states had single liability.

Double liability appears to have been successful (Grossman 2001), at least during less-than-turbulent times. During the 1890-1930 period, state banks in states where banks were subject to double (or triple, or unlimited) liability typically undertook less risk than their counterparts in single (limited) liability states in normal years. However, in years in which bank failures were quite high, banks in multiple liability states appeared to take more risk than their limited liability counterparts. This may have resulted from the fact that legislators in more crisis-prone states were more likely to have already adopted double liability. Whatever its advantages or disadvantages, the Great Depression spelled the end of double liability: by 1941, virtually every state had repealed double liability for state-chartered banks.

The Crisis of 1907 and Founding of the Federal Reserve

The crisis of 1907, which had been brought under control by a coalition of trust companies and other chartered banks and clearing-house members led by J.P. Morgan, led to a reconsideration of the monetary system of the United States. Congress set up the National Monetary Commission (1908-12), which undertook a massive study of the history of banking and monetary arrangements in the United States and in other economically advanced countries.[8]

The eventual result of this investigation was the Federal Reserve Act (1913), which established the Federal Reserve System as the central bank of the US. Unlike other countries that had one central bank (e.g., Bank of England, Bank of France), the Federal Reserve Act provided for a system of between eight and twelve reserve banks (twelve were eventually established under the act, although during debate over the act, some had called for as many as one reserve bank per state). This provision, like the rejection of the first two attempts at a central bank, resulted, in part, from Americans’ antipathy towards centralized monetary authority. The Federal Reserve was established to manage the monetary affairs of the country, to hold the reserves of banks and to regulate the money supply. At the time of its founding each of the reserve banks had a high degree of independence. As a result of the crises surrounding the Great Depression, Congress passed the Banking Act of 1935, which, among other things, centralized Federal Reserve power (including the power to engage in open market operations) in a Washington-based Board of Governors (and Federal Open Market Committee), relegating the heads of the individual reserve banks to a more consultative role in the operation of monetary policy.

The Goal of an “Elastic Currency”

The stated goals of the Federal Reserve Act were: ” . . . to furnish an elastic currency, to furnish the means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes.” Furnishing an “elastic currency” was an important goal of the act, since none of the components of the money supply (gold and silver certificates, national bank notes) was able to expand or contract particularly rapidly. The inelasticity of the money supply, along with the seasonal fluctuations in money demand, had led to a number of the panics of the National Banking era. These panic-inducing seasonal fluctuations resulted from the large flows of money out of New York and other money centers to the interior of the country to pay for the newly harvested crops. If monetary conditions were already tight before the drain of funds to the nation’s interior, the autumnal movement of funds could, and did, precipitate panics.[9]

Growth of the Bankers’ Acceptance Market

The act also fostered the growth of the bankers’ acceptance market. Bankers’ acceptances were essentially short-dated IOUs, issued by banks on behalf of clients that were importing (or otherwise purchasing) goods. These acceptances were sent to the seller, who could hold them until they matured and receive the face value of the acceptance, or could discount them, that is, receive the face value minus interest charges. By allowing the Federal Reserve to rediscount commercial paper, the act facilitated the growth of this short-term money market (Warburg 1930, Broz 1997, and Federal Reserve Bank of New York 1998). In the 1920s, the various Federal Reserve banks began making large-scale purchases of US Treasury obligations, marking the beginnings of Federal Reserve open market operations.[10]
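The discounting arithmetic described above can be sketched as follows; the $10,000 face value, 4 percent rate, and 360-day-year convention are illustrative assumptions, not figures from the source:

```python
def discount_proceeds(face_value, annual_rate, days_to_maturity):
    """Cash received when an acceptance is discounted: face value minus the discount charge.

    Acceptances were quoted on a discount basis; a 360-day year is assumed here.
    """
    charge = face_value * annual_rate * days_to_maturity / 360
    return face_value - charge

# A hypothetical $10,000 acceptance with 90 days to run, discounted at 4 percent:
print(discount_proceeds(10_000, 0.04, 90))  # 9900.0
```

The seller thus trades $100 of interest for immediate cash; holding to maturity would return the full face value.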

The Federal Reserve and State Banking

The establishment of the Federal Reserve did not end the competition between the state and national banking systems. While national banks were required to be members of the new Federal Reserve System, state banks could also become members of the system on equal terms. Further, the Federal Reserve Act, bolstered by the Act of June 21, 1917, ensured that state banks could become member banks without losing any competitive advantages they might hold over national banks. Depending upon the state, state banking law sometimes gave state banks advantages in the areas of branching,[11] trust operations,[12] interlocking managements, loan and investment powers,[13] safe deposit operations, and the arrangement of mergers.[14] Where state banking laws were especially liberal, banks had an incentive to give up their national bank charter and seek admission to the Federal Reserve System as a state member bank.

McFadden Act

The McFadden Act (1927) addressed some of the competitive inequalities between state and national banks. It gave national banks charters of indeterminate length, allowing them to compete with state banks for trust business. It expanded the range of permissible investments, including real estate investment, and allowed investment in the stock of safe deposit companies. The Act greatly restricted the ability of member banks — whether state or nationally chartered — to open or maintain out-of-town branches.

The Great Depression: Panic and Reform

The Great Depression was the longest, most severe economic downturn in the history of the United States.[15] The banking panics of 1930, 1931, and 1933 were the most severe banking disruption ever to hit the United States, with more than one quarter of all banks closing. Data on the number of bank suspensions during this period is presented in Table 3.

Table 3: Bank Suspensions, 1921-33

Number of Bank Suspensions
Year All Banks National Banks
1921 505 52
1922 367 49
1923 646 90
1924 775 122
1925 618 118
1926 976 123
1927 669 91
1928 499 57
1929 659 64
1930 1352 161
1931 2294 409
1932 1456 276
1933 5190 1475

Source: Bremer (1935).

Note: 1933 figures include 4507 non-licensed banks (1400 non-licensed national banks). Non-licensed banks consist of banks operating on a restricted basis or not in operation, but not in liquidation or receivership.

The first banking panic erupted in October 1930. According to Friedman and Schwartz (1963, pp. 308-309), it began with failures in Missouri, Indiana, Illinois, Iowa, Arkansas, and North Carolina and quickly spread to other areas of the country. Friedman and Schwartz report that 256 banks with $180 million of deposits failed in November 1930, while 352 banks with over $370 million of deposits failed in the following month (the largest of which was the Bank of United States which failed on December 11 with over $200 million of deposits). The second banking panic began in March of 1931 and continued into the summer.[16] The third and final panic began at the end of 1932 and persisted into March of 1933. During the early months of 1933, a number of states declared banking holidays, allowing banks to close their doors and therefore freeing them from the requirement to redeem deposits. By the time President Franklin Delano Roosevelt was inaugurated on March 4, 1933, state-declared banking holidays were widespread. The following day, the president declared a national banking holiday.

Beginning on March 13, the Secretary of the Treasury began granting licenses to banks to reopen for business.

Federal Deposit Insurance

The crises led to the implementation of several major reforms in banking. Among the most important of these was the introduction of federal deposit insurance under the Banking (Glass-Steagall) Act of 1933. The Act established the Federal Deposit Insurance Corporation, originally as an explicitly temporary program (the FDIC was made permanent by the Banking Act of 1935); insurance became effective January 1, 1934. Member banks of the Federal Reserve (which included all national banks) were required to join the FDIC. Within six months, 14,000 out of 15,348 commercial banks, representing 97 percent of bank deposits, had subscribed to federal deposit insurance (Friedman and Schwartz 1963, 436-437).[17] Coverage under the initial act was limited to a maximum of $2500 of deposits for each depositor. Table 4 documents the increase in the limit from the act’s inception until 1980, when it reached its current $100,000 level.

Table 4: FDIC Insurance Limit

1934 (January) $2500
1934 (July) $5000
1950 $10,000
1966 $15,000
1969 $20,000
1974 $40,000
1980 $100,000
Source: http://www.fdic.gov/

Additional Provisions of the Glass-Steagall Act

An important goal of the New Deal reforms was to enhance the stability of the banking system. Because the involvement of commercial banks in securities underwriting was seen as having contributed to banking instability, the Glass-Steagall Act of 1933 forced the separation of commercial and investment banking.[18] Additionally, the Acts (1933 for member banks, 1935 for other insured banks) established Regulation Q, which forbade banks from paying interest on demand deposits (i.e., checking accounts) and established limits on interest rates paid on time deposits. It was argued that paying interest on demand deposits introduced unhealthy competition.

Recent Responses to New Deal Banking Laws

In a sense, contemporary debates on banking policy stem largely from the reforms of the post-Depression era. Although several of the reforms introduced in the wake of the 1931-33 crisis have survived into the twenty-first century, almost all of them have been subject to intense scrutiny in the last two decades. For example, several court decisions, along with the Financial Services Modernization Act (Gramm-Leach-Bliley) of 1999, have blurred the previously strict separation between different financial service industries (particularly, although not limited to commercial and investment banking).

FSLIC

The Savings and Loan crisis of the 1980s, resulting from a combination of deposit insurance-induced moral hazard and deregulation, led to the dismantling of the Depression-era Federal Savings and Loan Insurance Corporation (FSLIC) and the transfer of Savings and Loan insurance to the Federal Deposit Insurance Corporation.

Further Reading

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in Propagation of the Great Depression.” American Economic Review 73 (1983): 257-76.

Bordo, Michael D., Claudia Goldin, and Eugene N. White, editors. The Defining Moment: The Great Depression and the American Economy in the Twentieth Century. Chicago: University of Chicago Press, 1998.

Bremer, C. D. American Bank Failures. New York: Columbia University Press, 1935.

Broz, J. Lawrence. The International Origins of the Federal Reserve System. Ithaca: Cornell University Press, 1997.

Cagan, Phillip. “The First Fifty Years of the National Banking System: An Historical Appraisal.” In Banking and Monetary Studies, edited by Deane Carson, 15-42. Homewood: Richard D. Irwin, 1963.

Cagan, Phillip. The Determinants and Effects of Changes in the Stock of Money. New York: National Bureau of Economic Research, 1965.

Calomiris, Charles W. and Gorton, Gary. “The Origins of Banking Panics: Models, Facts, and Bank Regulation.” In Financial Markets and Financial Crises, edited by Glenn R. Hubbard, 109-73. Chicago: University of Chicago Press, 1991.

Davis, Lance. “The Investment Market, 1870-1914: The Evolution of a National Market.” Journal of Economic History 25 (1965): 355-399.

Dewald, William G. “The National Monetary Commission: A Look Back.” Journal of Money, Credit and Banking 4 (1972): 930-956.

Eichengreen, Barry. “Mortgage Interest Rates in the Populist Era.” American Economic Review 74 (1984): 995-1015.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939, Oxford: Oxford University Press, 1992.

Federal Deposit Insurance Corporation. “A Brief History of Deposit Insurance in the United States.” Washington: FDIC, 1998. http://www.fdic.gov/bank/historical/brief/brhist.pdf

Federal Reserve. The Federal Reserve: Purposes and Functions. Washington: Federal Reserve Board, 1994. http://www.federalreserve.gov/pf/pdf/frspurp.pdf

Federal Reserve Bank of New York. U.S. Monetary Policy and Financial Markets. New York, 1998. http://www.ny.frb.org/pihome/addpub/monpol/chapter2.pdf

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodhart, C.A.E. The New York Money Market and the Finance of Trade, 1900-1913. Cambridge: Harvard University Press, 1969.

Gorton, Gary. “Bank Suspensions of Convertibility.” Journal of Monetary Economics 15 (1985a): 177-193.

Gorton, Gary. “Clearing Houses and the Origin of Central Banking in the United States.” Journal of Economic History 45 (1985b): 277-283.

Grossman, Richard S. “Deposit Insurance, Regulation, Moral Hazard in the Thrift Industry: Evidence from the 1930s.” American Economic Review 82 (1992): 800-821.

Grossman, Richard S. “The Macroeconomic Consequences of Bank Failures under the National Banking System.” Explorations in Economic History 30 (1993): 294-320.

Grossman, Richard S. “The Shoe That Didn’t Drop: Explaining Banking Stability during the Great Depression.” Journal of Economic History 54, no. 3 (1994): 654-82.

Grossman, Richard S. “Double Liability and Bank Risk-Taking.” Journal of Money, Credit, and Banking 33 (2001): 143-159.

James, John A. “The Conundrum of the Low Issue of National Bank Notes.” Journal of Political Economy 84 (1976a): 359-67.

James, John A. “The Development of the National Money Market, 1893-1911.” Journal of Economic History 36 (1976b): 878-97.

Kent, Raymond P. “Dual Banking between the Two Wars.” In Banking and Monetary Studies, edited by Deane Carson, 43-63. Homewood: Richard D. Irwin, 1963.

Kindleberger, Charles P. Manias, Panics, and Crashes: A History of Financial Crises. New York: Basic Books, 1978.

Krooss, Herman E., editor. Documentary History of Banking and Currency in the United States. New York: Chelsea House Publishers, 1969.

Minsky, Hyman P. Can “It” Happen Again? Essays on Instability and Finance. Armonk, NY: M.E. Sharpe, 1982.

Miron, Jeffrey A. “Financial Panics, the Seasonality of the Nominal Interest Rate, and the Founding of the Fed.” American Economic Review 76 (1986): 125-38.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard, 69-108. Chicago: University of Chicago Press, 1991.

Rockoff, Hugh. The Free Banking Era: A Reexamination. New York: Arno Press, 1975.

Rockoff, Hugh. “Banking and Finance, 1789-1914.” In The Cambridge Economic History of the United States. Volume 2. The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman, 643-84. New York: Cambridge University Press, 2000.

Sprague, O. M. W. History of Crises under the National Banking System. Washington, DC: Government Printing Office, 1910.

Sylla, Richard. “Federal Policy, Banking Market Structure, and Capital Mobilization in the United States, 1863-1913.” Journal of Economic History 29 (1969): 657-686.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge: MIT Press, 1989.

Warburg, Paul M. The Federal Reserve System: Its Origin and Growth: Reflections and Recollections. 2 volumes. New York: Macmillan, 1930.

White, Eugene N. The Regulation and Reform of American Banking, 1900-1929. Princeton: Princeton University Press, 1983.

White, Eugene N. “Before the Glass-Steagall Act: An Analysis of the Investment Banking Activities of National Banks.” Explorations in Economic History 23 (1986): 33-55.

White, Eugene N. “Banking and Finance in the Twentieth Century.” In The Cambridge Economic History of the United States. Volume 3. The Twentieth Century, edited by Stanley L. Engerman and Robert E. Gallman, 743-802. New York: Cambridge University Press, 2000.

Wicker, Elmus. The Banking Panics of the Great Depression. New York: Cambridge University Press, 1996.

Wicker, Elmus. Banking Panics of the Gilded Age. New York: Cambridge University Press, 2000.


[1] The two exceptions were the First and Second Banks of the United States. The First Bank, which was chartered by Congress at the urging of Alexander Hamilton, in 1791, was granted a 20-year charter, which Congress allowed to expire in 1811. The Second Bank was chartered just five years after the expiration of the first, but Andrew Jackson vetoed the charter renewal in 1832 and the bank ceased to operate with a national charter when its 20-year charter expired in 1836. The US remained without a central bank until the founding of the Federal Reserve in 1914. Even then, the Fed was not founded as one central bank, but as a collection of twelve regional reserve banks. American suspicion of concentrated financial power has not been limited to central banking: in contrast to the rest of the industrialized world, twentieth century US banking was characterized by large numbers of comparatively small, unbranched banks.

[2] The relationship between the enactment of the National Bank Acts and the Civil War was perhaps even deeper. Hugh Rockoff suggested the following to me: “There were western states where the banking system was in trouble because the note issue was based on southern bonds, and people in those states were looking to the national government to do something. There were also conservative politicians who were afraid that they wouldn’t be able to get rid of the greenback (a perfectly uniform [government issued wartime] currency) if there wasn’t a private alternative that also promised uniformity…. It has even been claimed that by setting up a national system, banks in the South were undermined — as a war measure.”

[3] Eichengreen (1984) argues that regional mortgage interest rate differentials resulted from differences in risk.

[4] There is some debate over the direction of causality between banking crises and economic downturns. According to monetarists Friedman and Schwartz (1963) and Cagan (1965), the monetary contraction associated with bank failures magnifies real economic downturns. Bernanke (1983) argues that bank failures raise the cost of credit intermediation and therefore have an effect on the real economy through non-monetary channels. An alternative view, articulated by Sprague (1910), Fisher (1933), Temin (1976), Minsky (1982), and Kindleberger (1978), maintains that bank failures and monetary contraction are primarily a consequence, rather than a cause, of sluggishness in the real economy which originates in non-monetary sources. See Grossman (1993) for a summary of this literature.

[5] See Calomiris and Gorton (1991) for an alternative view.

[6] See Mishkin (1991) on asymmetric information and financial crises.

[7] Still other states had “voluntary liability,” whereby each bank could choose single or double liability.

[8] See Dewald (1972) on the National Monetary Commission.

[9] Miron (1986) demonstrates the decline in the seasonality of interest rates following the founding of the Fed.

[10] Other Fed activities included check clearing.

[11] According to Kent (1963, p. 48), starting in 1922 the Comptroller allowed national banks to open “offices” to receive deposits, cash checks, and receive applications for loans in head office cities of states that allowed state-chartered banks to establish branches.

[12] Prior to 1922, national bank charters had lives of only 20 years. This severely limited their ability to compete with state banks in the trust business. (Kent 1963, p. 49)

[13] National banks were subject to more severe limitations on lending than most state banks. These restrictions included a limit on the amount that could be loaned to one borrower as well as limitations on real estate lending. (Kent 1963, pp. 50-51)

[14] Although the Bank Consolidation Act of 1918 provided for the merger of two or more national banks, it made no provision for the merger of a state and national bank. Kent (1963, p. 51).

[15] References touching on banking and financial aspects of the Great Depression in the United States include Friedman and Schwartz (1963), Temin (1976, 1989), Kindleberger (1978), Bernanke (1983), Eichengreen (1992), and Bordo, Goldin, and White (1998).

[16] During this period, the failures of the Credit-Anstalt, Austria’s largest bank, and the Darmstädter und Nationalbank (Danat Bank), a large German bank, inaugurated the beginning of financial crisis in Europe. The European financial crisis led to Britain’s suspension of the gold standard in September 1931. See Grossman (1994) on the European banking crisis of 1931. The best source on the gold standard in the interwar years is Eichengreen (1992).

[17] Interestingly, federal deposit insurance was made optional for savings and loan institutions at about the same time. The majority of S&L’s did not elect to adopt deposit insurance until after 1950. See Grossman (1992).

[18] See, however, White (1986) for

Citation: Grossman, Richard. “US Banking History, Civil War to World War II”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL
http://eh.net/encyclopedia/us-banking-history-civil-war-to-world-war-ii/

Antebellum Banking in the United States

Howard Bodenhorn, Lafayette College

The first legitimate commercial bank in the United States was the Bank of North America founded in 1781. Encouraged by Alexander Hamilton, Robert Morris persuaded the Continental Congress to charter the bank, which loaned to the cash-strapped Revolutionary government as well as private citizens, mostly Philadelphia merchants. The possibilities of commercial banking had been widely recognized by many colonists, but British law forbade the establishment of commercial, limited-liability banks in the colonies. Given that many of the colonists’ grievances against Parliament centered on economic and monetary issues, it is not surprising that one of the earliest acts of the Continental Congress was the establishment of a bank.

The introduction of banking to the U.S. was viewed as an important first step in forming an independent nation because banks supplied a medium of exchange (banknotes[1] and deposits) in an economy perpetually strangled by shortages of specie money and credit, because they animated industry, and because they fostered wealth creation and promoted well-being. In the last case, contemporaries typically viewed banks as an integral part of a wider system of government-sponsored commercial infrastructure. Like schools, bridges, roads, canals, river clearing and harbor improvements, the benefits of banks were expected to accrue to everyone even if dividends accrued only to shareholders.

Financial Sector Growth

By 1800 each major U.S. port city had at least one commercial bank serving the local mercantile community. As city banks proved themselves, banking spread into smaller cities and towns, and banks expanded their clientele. Although most banks specialized in mercantile lending, others served artisans and farmers. In 1820 there were 327 commercial banks and several mutual savings banks that promoted thrift among the poor. Thus, at the onset of the antebellum period (defined here as the period between 1820 and 1860), urban residents were familiar with the intermediary function of banks and used bank-supplied currencies (deposits and banknotes) for most transactions. Table 1 reports the number of banks and the value of loans outstanding at year end between 1820 and 1860. During the era, the number of banks increased from 327 to 1,562 and total loans increased from just over $55.1 million to $691.9 million. Bank-supplied credit in the U.S. economy increased at a remarkable average annual rate of 6.3 percent. Growth in the financial sector, then, outpaced growth in aggregate economic activity: nominal gross domestic product increased at an average annual rate of about 4.3 percent over the same interval. This essay discusses how regional regulatory structures evolved as the banking sector grew and radiated out from northeastern cities to the hinterlands.

Table 1
Number of Banks and Total Loans, 1820-1860

Year Banks Loans ($ millions)
1820 327 55.1
1821 273 71.9
1822 267 56.0
1823 274 75.9
1824 300 73.8
1825 330 88.7
1826 331 104.8
1827 333 90.5
1828 355 100.3
1829 369 103.0
1830 381 115.3
1831 424 149.0
1832 464 152.5
1833 517 222.9
1834 506 324.1
1835 704 365.1
1836 713 457.5
1837 788 525.1
1838 829 485.6
1839 840 492.3
1840 901 462.9
1841 784 386.5
1842 692 324.0
1843 691 254.5
1844 696 264.9
1845 707 288.6
1846 707 312.1
1847 715 310.3
1848 751 344.5
1849 782 332.3
1850 824 364.2
1851 879 413.8
1852 913 429.8
1853 750 408.9
1854 1208 557.4
1855 1307 576.1
1856 1398 634.2
1857 1416 684.5
1858 1422 583.2
1859 1476 657.2
1860 1562 691.9

Sources: Fenstermaker (1965); U.S. Comptroller of the Currency (1931).
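The 6.3 percent average annual growth rate quoted in the text can be checked against the endpoints of Table 1. A minimal sketch (my calculation, not Bodenhorn's; it assumes the reported figure is the continuously compounded, i.e. log-difference, average rate, and shows the geometric compound rate alongside for comparison):

```python
import math

# Loans outstanding ($ millions) at the endpoints of Table 1.
loans_1820 = 55.1
loans_1860 = 691.9
years = 1860 - 1820

# Continuously compounded average annual growth rate: ln(end/start) / years.
continuous_rate = math.log(loans_1860 / loans_1820) / years * 100

# Geometric (compound) average annual growth rate: (end/start)^(1/years) - 1.
compound_rate = ((loans_1860 / loans_1820) ** (1 / years) - 1) * 100

print(f"continuous: {continuous_rate:.1f}%")  # ≈ 6.3%
print(f"compound:   {compound_rate:.1f}%")    # ≈ 6.5%
```

The continuous rate matches the 6.3 percent in the text; a geometric calculation yields a slightly higher figure, which is typical since the two conventions diverge as growth rates rise.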

Adaptability

As important as early American banks were in the process of capital accumulation, perhaps their most notable feature was their adaptability. Kuznets (1958) argues that one measure of the financial sector’s value is how, and to what extent, it evolves with changing economic conditions. Put in place to perform certain functions under one set of economic circumstances, how did it alter its behavior and serve the needs of borrowers as circumstances changed? One benefit of the federalist U.S. political system was that states were given the freedom to establish systems reflecting local needs and preferences. While the political structure deserves credit for promoting regional adaptations, North (1994) credits the adaptability of America’s formal rules and informal constraints that rewarded adventurism in the economic, as well as the noneconomic, sphere. Differences in geography, climate, crop mix, manufacturing activity, population density and a host of other variables were reflected in different state banking systems. Rhode Island’s banks bore little resemblance to those in faraway Louisiana or Missouri, or even those in neighboring Connecticut. Each state’s banks took a different form, but their purpose was the same: to provide the state’s citizens with monetary and intermediary services and to promote the general economic welfare. This section provides a sketch of regional differences. A more detailed discussion can be found in Bodenhorn (2002).

State Banking in New England

New England’s banks most resemble the common conception of the antebellum bank. They were relatively small, unit banks; their stock was closely held; they granted loans to local farmers, merchants and artisans with whom the bank’s managers had more than a passing familiarity; and the state took little direct interest in their daily operations.

Of the banking systems put in place in the antebellum era, New England’s is typically viewed as the most stable and conservative. Friedman and Schwartz (1986) attribute their stability to an Old World concern with business reputations, familial ties, and personal legacies. New England was long settled, its society well established, and its business community mature and respected throughout the Atlantic trading network. Wealthy businessmen and bankers with strong ties to the community — like the Browns of Providence or the Bowdoins of Boston — emphasized stability not just because doing so benefited and reflected well on them, but because they realized that bad banking was bad for everyone’s business.

Besides their reputation for soundness, the two defining characteristics of New England’s early banks were their insider nature and their small size. The typical New England bank was small compared to banks in other regions. Table 2 shows that in 1820 the average Massachusetts country bank was about the same size as a Pennsylvania country bank, but both were only about half the size of a Virginia bank. A Rhode Island bank was about one-third the size of a Massachusetts or Pennsylvania bank and a mere one-sixth as large as Virginia’s banks. By 1850 the average Massachusetts bank declined relatively, operating on about two-thirds the paid-in capital of a Pennsylvania country bank. Rhode Island’s banks also shrank relative to Pennsylvania’s and were tiny compared to the large branch banks in the South and West.

Table 2
Average Bank Size by Capital and Lending in 1820 and 1850 Selected States and Cities
(in $ thousands)

State/City  1820 Capital  1820 Loans  1850 Capital  1850 Loans
Massachusetts $374.5 $480.4 $293.5 $494.0
except Boston 176.6 230.8 170.3 281.9
Rhode Island 95.7 103.2 186.0 246.2
except Providence 60.6 72.0 79.5 108.5
New York na na 246.8 516.3
except NYC na na 126.7 240.1
Pennsylvania 221.8 262.9 340.2 674.6
except Philadelphia 162.6 195.2 246.0 420.7
Virginia1,2 351.5 340.0 270.3 504.5
South Carolina2 na na 938.5 1,471.5
Kentucky2 na na 439.4 727.3

Notes: 1 Virginia figures for 1822. 2 Figures represent branch averages.

Source: Bodenhorn (2002).

Explanations for New England Banks’ Relatively Small Size

Several explanations have been offered for the relatively small size of New England’s banks. Contemporaries attributed it to the New England states’ propensity to tax bank capital, which was thought to work to the detriment of large banks. They argued that large banks circulated fewer banknotes per dollar of capital. The result was a progressive tax that fell disproportionately on large banks. Data compiled from Massachusetts’s bank reports suggest that large banks were not disadvantaged by the capital tax. It was a fact, as contemporaries believed, that large banks paid higher taxes per dollar of circulating banknotes, but a potentially better benchmark is the tax to loan ratio because large banks made more use of deposits than small banks. The tax to loan ratio was remarkably constant across both bank size and time, averaging just 0.6 percent between 1834 and 1855. Moreover, there is evidence of constant to modestly increasing returns to scale in New England banking. Large banks were generally at least as profitable as small banks in all years between 1834 and 1860, and slightly more so in many.

Lamoreaux (1993) offers a different explanation for the modest size of the region’s banks. New England’s banks, she argues, were not impersonal financial intermediaries. Rather, they acted as the financial arms of extended kinship trading networks. Throughout the antebellum era banks catered to insiders: directors, officers, shareholders, or business partners and kin of directors, officers, shareholders and business partners. Such preferences toward insiders represented the perpetuation of the eighteenth-century custom of pooling capital to finance family enterprises. In the nineteenth century the practice continued under corporate auspices. The corporate form, in fact, facilitated raising capital in greater amounts than the family unit could raise on its own. But because the banks kept their loans within a relatively small circle of business connections, it was not until the late nineteenth century that bank size increased.[2]

Once the kinship orientation of the region’s banks was established it perpetuated itself. When outsiders could not obtain loans from existing insider organizations, they formed their own insider bank. In doing so the promoters assured themselves of a steady supply of credit and created engines of economic mobility for kinship networks formerly closed off from many sources of credit. State legislatures accommodated the practice through their liberal chartering policies. By 1860, Rhode Island had 91 banks, Maine had 68, New Hampshire 51, Vermont 44, Connecticut 74 and Massachusetts 178.

The Suffolk System

One of the most commented-on characteristics of New England’s banking system was its unique regional banknote redemption and clearing mechanism. Established by the Suffolk Bank of Boston in the early 1820s, the system became known as the Suffolk System. With so many banks in New England, each issuing its own form of currency, it was sometimes difficult for merchants, farmers, artisans, and even other bankers to discriminate between real and bogus banknotes, or to discriminate between good and bad bankers. Moreover, the rural-urban terms of trade pulled most banknotes toward the region’s port cities. Because country merchants and farmers were typically indebted to city merchants, country banknotes tended to flow toward the cities, Boston more so than any other. By the second decade of the nineteenth century, country banknotes had become a constant irritant for city bankers. City bankers believed that country issues displaced Boston banknotes in local transactions. More irritating, though, was the constant demand by the city banks’ customers to accept country banknotes on deposit, which placed the burden of interbank clearing on the city banks.[3]

In 1803 the city banks embarked on a first attempt to deal with country banknotes. They joined together, bought up a large quantity of country banknotes, and returned them to the country banks for redemption into specie. This effort to reduce country banknote circulation encountered so many obstacles that it was quickly abandoned. Several other schemes were hatched in the next two decades, but none proved any more successful than the 1803 plan.

The Suffolk Bank was chartered in 1818 and within a year embarked on a novel scheme to deal with the influx of country banknotes. The Suffolk sponsored a consortium of Boston banks in which each member appointed the Suffolk as its lone agent in the collection and redemption of country banknotes. In addition, each city bank contributed to a fund used to purchase and redeem country banknotes. When the Suffolk collected a large quantity of a country bank’s notes, it presented them for immediate redemption with an ultimatum: join in a regular and organized redemption system or be subject to further unannounced redemption calls.[4] Country banks objected to the Suffolk’s proposal because it required them to keep noninterest-earning assets on deposit with the Suffolk in amounts equal to their average weekly redemptions at the city banks. Most country banks initially refused to join the redemption network, but after the Suffolk made good on a few redemption threats, the system achieved near universal membership.

Early interpretations of the Suffolk system, like those of Redlich (1949) and Hammond (1957), portray the Suffolk as a proto-central bank, which acted as a restraining influence that exercised some control over the region’s banking system and money supply. Recent studies are less quick to pronounce the Suffolk a successful experiment in early central banking. Mullineaux (1987) argues that the Suffolk’s redemption system was actually self-defeating. Instead of making country banknotes less desirable in Boston, the fact that they became readily redeemable there made them perfect substitutes for banknotes issued by Boston’s prestigious banks. This policy made country banknotes more desirable, which made it more, not less, difficult for Boston’s banks to keep their own notes in circulation.

Fenstermaker and Filer (1986) also contest the long-held view that the Suffolk exercised control over the region’s money supply (banknotes and deposits). Indeed, the Suffolk’s system was self-defeating in this regard as well. Because the system increased confidence in the value of a randomly encountered banknote, people were willing to hold increased banknote issues. In an interesting twist on the traditional interpretation, a possible outcome of the Suffolk system is that New England may have grown increasingly financially backward as a direct result of the region’s unique clearing system. Because banknotes were viewed as relatively safe and easily redeemed, the next big financial innovation — deposit banking — in New England lagged far behind other regions. With such wide acceptance of banknotes, there was no reason for banks to encourage the use of deposits and little reason for consumers to switch over.

Summary: New England Banks

New England’s banking system can be summarized as follows: small unit banks predominated; many banks catered to small groups of capitalists bound by personal and familial ties; banking was becoming increasingly interconnected with other lines of business, such as insurance, shipping and manufacturing; the state took little direct interest in the daily operations of the banks, and its supervisory role amounted to little more than a demand that every bank submit an unaudited balance sheet at year’s end; and the Suffolk developed an interbank clearing system that facilitated the use of banknotes throughout the region but had little effective control over the region’s money supply.

Banking in the Middle Atlantic Region

Pennsylvania

After 1810 or so, many bank charters were granted in New England, but not because of the presumption that the bank would promote the commonweal. Charters were granted for the personal gain of the promoter and the shareholders and in proportion to the personal, political and economic influence of the bank’s founders. No New England state took a significant financial stake in its banks. In both respects, New England differed markedly from states in other regions. From the beginning of state-chartered commercial banking in Pennsylvania, the state took a direct interest in the operations and profits of its banks. The Bank of North America was the obvious case: chartered to provide support to the colonial belligerents and the fledgling nation. Because the bank was popularly perceived to be dominated by Philadelphia’s Federalist merchants, who rarely loaned to outsiders, support for the bank waned.[5] After a pitched political battle in which the Bank of North America’s charter was revoked and reinstated, the legislature chartered the Bank of Pennsylvania in 1793. As its name implies, this bank became the financial arm of the state. Pennsylvania subscribed $1 million of the bank’s capital, giving it the right to appoint six of thirteen directors and a $500,000 line of credit. The bank benefited by becoming the state’s fiscal agent, which guaranteed a constant inflow of deposits from regular treasury operations as well as western land sales.

By 1803 the demand for loans outstripped the existing banks’ supply and a plan for a new bank, the Philadelphia Bank, was hatched and its promoters petitioned the legislature for a charter. The existing banks lobbied against the charter, and nearly sank the new bank’s chances until it established a precedent that lasted throughout the antebellum era. Its promoters bribed the legislature with a payment of $135,000 in return for the charter, handed over one-sixth of its shares, and opened a line of credit for the state.

Between 1803 and 1814, the only other bank chartered in Pennsylvania was the Farmers and Mechanics Bank of Philadelphia, which established a second substantive precedent that persisted throughout the era. Existing banks followed a strict real-bills lending policy, restricting lending to merchants at very short terms of 30 to 90 days.[6] Their adherence to a real-bills philosophy left a growing community of artisans, manufacturers and farmers on the outside looking in. The Farmers and Mechanics Bank was chartered to serve these excluded groups. At least seven of its thirteen directors had to be farmers, artisans or manufacturers, and the bank was required to lend the equivalent of 10 percent of its capital to farmers on mortgage for at least one year. In later years, banks were established to provide services to even more narrowly defined groups. Within a decade or two, most substantial port cities had banks with names like Merchants Bank, Planters Bank, Farmers Bank, and Mechanics Bank. By 1860 it was common to find banks with names like Leather Manufacturers Bank, Grocers Bank, Drovers Bank, and Importers Bank. Indeed, the Emigrant Savings Bank in New York City served Irish immigrants almost exclusively. In other instances, it is not known how much of a bank’s lending was directed toward the occupational group included in its name. The adoption of such names may have been marketing ploys as much as mission statements. Only further research will reveal the answer.

New York

State-chartered banking in New York arrived less auspiciously than it had in Philadelphia or Boston. The Bank of New York opened in 1784, but operated without a charter and in open violation of state law until 1791, when the legislature finally sanctioned it. The city’s second bank obtained its charter surreptitiously. Alexander Hamilton was one of the driving forces behind the Bank of New York, and his long-time nemesis, Aaron Burr, was determined to establish a competing bank. Unable to get a charter from a Federalist legislature, Burr and his colleagues petitioned to incorporate a company to supply fresh water to the inhabitants of Manhattan Island. Burr tucked a clause into the charter of the Manhattan Company (the predecessor to today’s Chase Manhattan Bank) granting the water company the right to employ any excess capital in financial transactions. Once chartered, the company’s directors announced that $500,000 of its capital would be invested in banking.[7] Thereafter, banking grew more quickly in New York than in Philadelphia, so that by 1812 New York had seven banks compared to the three operating in Philadelphia.

Deposit Insurance

Despite its inauspicious banking beginnings, New York introduced two innovations that influenced American banking down to the present. The Safety Fund system, introduced in 1829, was the nation’s first experiment in bank liability insurance (similar to that provided by the Federal Deposit Insurance Corporation today). The 1829 act authorized the appointment of bank regulators charged with regular inspections of member banks. An equally novel aspect was that it established an insurance fund insuring holders of banknotes and deposits against loss from bank failure. Ultimately, the insurance fund proved insufficient to protect all bank creditors from loss during the panic of 1837, when eleven failures in rapid succession all but bankrupted the fund and delayed noteholder and depositor recoveries for months, even years. Even though the Safety Fund failed to provide its promised protections, it was an important episode in the subsequent evolution of American banking. Several Midwestern states instituted deposit insurance in the early twentieth century, and the federal government adopted it after the banking panics of the 1930s resulted in the failure of thousands of banks in which millions of depositors lost money.

“Free Banking”

Although the Safety Fund was nearly bankrupted in the late 1830s, it continued to insure a number of banks up to the mid 1860s when it was finally closed. No new banks joined the Safety Fund system after 1838 with the introduction of free banking — New York’s second significant banking innovation. Free banking represented a compromise between those most concerned with the underlying safety and stability of the currency and those most concerned with competition and freeing the country’s entrepreneurs from unduly harsh and anticompetitive restraints. Under free banking, a prospective banker could start a bank anywhere he saw fit, provided he met a few regulatory requirements. Each free bank’s capital was invested in state or federal bonds that were turned over to the state’s treasurer. If a bank failed to redeem even a single note into specie, the treasurer initiated bankruptcy proceedings and banknote holders were reimbursed from the sale of the bonds.

Actually, Michigan preempted New York’s claim to be the first free-banking state, but Michigan’s 1837 law was modeled closely after a bill then under debate in New York’s legislature. Ultimately, New York’s influence was profound in this as well, because free banking became one of the century’s most widely copied financial innovations. By 1860 eighteen states had adopted free banking laws closely resembling New York’s law. Three other states introduced watered-down variants. Eventually, the post-Civil War system of national banking adopted many of the substantive provisions of New York’s 1838 act.

Both the Safety Fund system and free banking were attempts to protect society from losses resulting from bank failures and to entice people to hold financial assets. Banks and bank-supplied currency were novel developments in the hinterlands in the early nineteenth century and many rural inhabitants were skeptical about the value of small pieces of paper. They were more familiar with gold and silver. Getting them to exchange one for the other was a slow process, and one that relied heavily on trust. But trust was built slowly and destroyed quickly. The failure of a single bank could, in a week, destroy the confidence in a system built up over a decade. New York’s experiments were designed to mitigate, if not eliminate, the negative consequences of bank failures. New York’s Safety Fund, then, differed in the details but not in intent, from New England’s Suffolk system. Bankers and legislators in each region grappled with the difficult issue of protecting a fragile but vital sector of the economy. Each region responded to the problem differently. The South and West settled on yet another solution.

Banking in the South and West

One distinguishing characteristic of southern and western banks was their extensive branch networks. Pennsylvania provided for branch banking in the early nineteenth century, and two banks jointly opened about ten branches. In both instances, however, the branches became a net liability. The Philadelphia Bank opened four branches in 1809 and by 1811 was forced to pass on its semi-annual dividends because losses at the branches offset profits at the Philadelphia office. At bottom, branch losses resulted from a combination of ineffective central office oversight and unrealistic expectations about the scale and scope of hinterland lending. Philadelphia’s bank directors instructed branch managers to invest in high-grade commercial paper or real bills. Rural banks found a limited number of such lending opportunities and quickly turned to mortgage-based lending. Many of these loans fell into arrears and were ultimately written off when land sales faltered.

Branch Banking

Unlike Pennsylvania, where branch banking failed, branch banks throughout the South and West thrived. The Bank of Virginia, founded in 1804, was the first state-chartered branch bank, and up to the Civil War branch banks served the state’s financial needs. Several small, independent banks were chartered in the 1850s, but they never threatened the dominance of Virginia’s “Big Six” banks. Virginia’s branch banks, unlike Pennsylvania’s, were profitable. In 1821, for example, the net return to capital at the Farmers Bank of Virginia’s home office in Richmond was 5.4 percent. Returns at its branches ranged from a low of 3 percent at Norfolk (which was consistently the low-profit branch) to 9 percent in Winchester. In 1835, the last year the bank reported separate branch statistics, net returns to capital at the Farmers Bank’s branches ranged from 2.9 to 11.7 percent, with an average of 7.9 percent.

The low profits at the Norfolk branch represent a net subsidy from the state’s banking sector to the political system, which was not immune to the same kind of infrastructure boosterism that erupted in New York, Pennsylvania, Maryland and elsewhere. In the immediate post-Revolutionary era, the value of exports shipped from Virginia’s ports (Norfolk and Alexandria) slightly exceeded the value shipped from Baltimore. In the 1790s the numbers turned sharply in Baltimore’s favor and Virginia entered the internal-improvements craze and the battle for western shipments. Banks represented the first phase of the state’s internal improvements plan in that many believed that Baltimore’s new-found advantage resulted from easier credit supplied by the city’s banks. If Norfolk, with one of the best natural harbors on the North American Atlantic coast, was to compete with other port cities, it needed banks, and the state required three of the state’s Big Six branch banks to operate branches there. Despite its natural advantages, Norfolk never became an important entrepot and it probably had more bank capital than it required. This pattern was repeated elsewhere. Other states required their branch banks to serve markets such as Memphis, Louisville, Natchez and Mobile that might, with the proper infrastructure, grow into important ports.

State Involvement and Intervention in Banking

The second distinguishing characteristic of southern and western banking was sweeping state involvement and intervention. Virginia, for example, interjected the state into the banking system by taking significant stakes in its first chartered banks (providing an implicit subsidy) and by requiring them, once they established themselves, to subsidize the state’s continuing internal improvements programs of the 1820s and 1830s. Indiana followed such a strategy. So, too, did Kentucky, Louisiana, Mississippi, Illinois, Tennessee and Georgia in different degrees. South Carolina followed a wholly different strategy. On one hand, it chartered several banks in which it took no financial interest. On the other, it chartered the Bank of the State of South Carolina, a bank wholly owned by the state and designed to lend to planters and farmers who complained constantly that the state’s existing banks served only the urban mercantile community. The state-owned bank eventually divided its lending between merchants, farmers and artisans and dominated South Carolina’s financial sector.

The 1820s and 1830s witnessed a deluge of new banks in the South and West, with a corresponding increase in state involvement. No state matched Louisiana’s breadth of involvement in the 1830s when it chartered three distinct types of banks: commercial banks that served merchants and manufacturers; improvement banks that financed various internal improvements projects; and property banks that extended long-term mortgage credit to planters and other property holders. Louisiana’s improvement banks included the New Orleans Canal and Banking Company that built a canal connecting Lake Pontchartrain to the Mississippi River. The Exchange and Banking Company and the New Orleans Improvement and Banking Company were required to build and operate hotels. The New Orleans Gas Light and Banking Company constructed and operated gas streetlights in New Orleans and five other cities. Finally, the Carrollton Railroad and Banking Company and the Atchafalaya Railroad and Banking Company were rail construction companies whose bank subsidiaries subsidized railroad construction.

“Commonwealth Ideal” and Inflationary Banking

Louisiana’s 1830s banking exuberance reflected what some historians label the “commonwealth ideal” of banking; that is, the promotion of the general welfare through the promotion of banks. Legislatures in the South and West, however, never demonstrated a greater commitment to the commonwealth ideal than during the tough times of the early 1820s. With the collapse of the post-war land boom in 1819, a political coalition of debt-strapped landowners lobbied legislatures throughout the region for relief, and banking became the focus of their efforts. Relief advocates lobbied for inflationary banking that would reduce the real burden of debts taken on during prior flush times.

Several western states responded to these calls and chartered state-subsidized and state-managed banks designed to reinflate their embattled economies. Chartered in 1821, the Bank of the Commonwealth of Kentucky loaned on mortgages at longer than customary periods and all Kentucky landowners were eligible for $1,000 loans. The loans allowed landowners to discharge their existing debts without being forced to liquidate their property at ruinously low prices. Although the bank’s notes were not redeemable into specie, they were given currency in two ways. First, they were accepted at the state treasury in tax payments. Second, the state passed a law that forced creditors to accept the notes in payment of existing debts or agree to delay collection for two years.

The commonwealth ideal was not unique to Kentucky. During the depression of the 1820s, Tennessee chartered the State Bank of Tennessee, Illinois chartered the State Bank of Illinois and Louisiana chartered the Louisiana State Bank. Although they took slightly different forms, they all had the same intent; namely, to relieve distressed and embarrassed farmers, planters and land owners. What all these banks shared in common was the notion that the state should promote the general welfare and economic growth. In this instance, and again during the depression of the 1840s, state-owned banks were organized to minimize the transfer of property when economic conditions demanded wholesale liquidation. Such liquidation would have been inefficient and imposed unnecessary hardship on a large fraction of the population. To the extent that hastily chartered relief banks forestalled inefficient liquidation, they served their purpose. Although most of these banks eventually became insolvent, requiring taxpayer bailouts, we cannot label them unsuccessful. They reinflated economies and allowed for an orderly disposal of property. Determining if the net benefits were positive or negative requires more research, but for the moment we are forced to accept the possibility that the region’s state-owned banks of the 1820s and 1840s advanced the commonweal.

Conclusion: Banks and Economic Growth

Despite notable differences in the specific form and structure of each region’s banking system, they were all aimed squarely at a common goal; namely, realizing that region’s economic potential. Banks helped achieve the goal in two ways. First, banks monetized economies, which reduced the costs of transacting and helped smooth consumption and production across time. It was no longer necessary for every farm family to inventory their entire harvest. They could sell most of it, and expend the proceeds on consumption goods as the need arose until the next harvest brought a new cash infusion. Crop and livestock inventories are prone to substantial losses and an increased use of money reduced them significantly. Second, banks provided credit, which unleashed entrepreneurial spirits and talents. A complete appreciation of early American banking recognizes the banks’ contribution to antebellum America’s economic growth.

Bibliographic Essay

Because of the large number of sources used to construct this essay, it is more readable and less cluttered to provide a brief bibliographic essay here. A full bibliography is included at the end.

Good general histories of antebellum banking include Dewey (1910), Fenstermaker (1965), Gouge (1833), Hammond (1957), Knox (1903), Redlich (1949), and Trescott (1963). If only one book is read on antebellum banking, Hammond’s (1957) Pulitzer-Prize winning book remains the best choice.

The literature on New England banking is not particularly large, and the more important historical interpretations of state-wide systems include Chadbourne (1936), Hasse (1946, 1957), Simonton (1971), Spencer (1949), and Stokes (1902). Gras (1937) does an excellent job of placing the history of a single bank within the larger regional and national context. In a recent book and a number of articles Lamoreaux (1994 and sources therein) provides a compelling and eminently readable reinterpretation of the region’s banking structure. Nathan Appleton (1831, 1856) provides a contemporary observer’s interpretation, while Walker (1857) provides an entertaining if perverse and satirical history of a fictional New England bank. Martin (1969) provides details of bank share prices and dividend payments from the establishment of the first banks in Boston through the end of the nineteenth century. Less technical studies of the Suffolk system include Lake (1947), Trivoli (1979) and Whitney (1878); more technical interpretations include Calomiris and Kahn (1996), Mullineaux (1987), and Rolnick, Smith and Weber (1998).

The literature on Middle Atlantic banking is huge, but the better state-level histories include Bryan (1899), Daniels (1976), and Holdsworth (1928). The better studies of individual banks include Adams (1978), Lewis (1882), Nevins (1934), and Wainwright (1953). Chaddock (1910) provides a general history of the Safety Fund system. Golembe (1960) places it in the context of modern deposit insurance, while Bodenhorn (1996) and Calomiris (1989) provide modern analyses. A recent revival of interest in free banking has brought about a veritable explosion in the number of studies on the subject, but the better introductory ones remain Rockoff (1974, 1985), Rolnick and Weber (1982, 1983), and Dwyer (1996).

The literature on southern and western banking is large and of highly variable quality, but I have found the following to be the most readable and useful general sources: Caldwell (1935), Duke (1895), Esary (1912), Golembe (1978), Huntington (1915), Green (1972), Lesesne (1970), Royalty (1979), Schweikart (1987) and Starnes (1931).

References and Further Reading

Adams, Donald R., Jr. Finance and Enterprise in Early America: A Study of Stephen Girard’s Bank, 1812-1831. Philadelphia: University of Pennsylvania Press, 1978.

Alter, George, Claudia Goldin and Elyce Rotella. “The Savings of Ordinary Americans: The Philadelphia Saving Fund Society in the Mid-Nineteenth-Century.” Journal of Economic History 54, no. 4 (December 1994): 735-67.

Appleton, Nathan. A Defence of Country Banks: Being a Reply to a Pamphlet Entitled ‘An Examination of the Banking System of Massachusetts, in Reference to the Renewal of the Bank Charters.’ Boston: Stimpson & Clapp, 1831.

Appleton, Nathan. Bank Bills or Paper Currency and the Banking System of Massachusetts with Remarks on Present High Prices. Boston: Little, Brown and Company, 1856.

Berry, Thomas Senior. Revised Annual Estimates of American Gross National Product: Preliminary Estimates of Four Major Components of Demand, 1789-1889. Richmond: University of Richmond Bostwick Paper No. 3, 1978.

Bodenhorn, Howard. “Zombie Banks and the Demise of New York’s Safety Fund.” Eastern Economic Journal 22, no. 1 (1996): 21-34.

Bodenhorn, Howard. “Private Banking in Antebellum Virginia: Thomas Branch & Sons of Petersburg.” Business History Review 71, no. 4 (1997): 513-42.

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. Cambridge and New York: Cambridge University Press, 2000.

Bodenhorn, Howard. State Banking in Early America: A New Economic History. New York: Oxford University Press, 2002.

Bryan, Alfred C. A History of State Banking in Maryland. Baltimore: Johns Hopkins University Press, 1899.

Caldwell, Stephen A. A Banking History of Louisiana. Baton Rouge: Louisiana State University Press, 1935.

Calomiris, Charles W. “Deposit Insurance: Lessons from the Record.” Federal Reserve Bank of Chicago Economic Perspectives 13 (1989): 10-30.

Calomiris, Charles W., and Charles Kahn. “The Efficiency of Self-Regulated Payments Systems: Learnings from the Suffolk System.” Journal of Money, Credit, and Banking 28, no. 4 (1996): 766-97.

Chadbourne, Walter W. A History of Banking in Maine, 1799-1930. Orono: University of Maine Press, 1936.

Chaddock, Robert E. The Safety Fund Banking System in New York, 1829-1866. Washington, D.C.: Government Printing Office, 1910.

Daniels, Belden L. Pennsylvania: Birthplace of Banking in America. Harrisburg: Pennsylvania Bankers Association, 1976.

Davis, Lance, and Robert E. Gallman. “Capital Formation in the United States during the Nineteenth Century.” In Cambridge Economic History of Europe (Vol. 7, Part 2), edited by Peter Mathias and M.M. Postan, 1-69. Cambridge: Cambridge University Press, 1978.

Davis, Lance, and Robert E. Gallman. “Savings, Investment, and Economic Growth: The United States in the Nineteenth Century.” In Capitalism in Context: Essays on Economic Development and Cultural Change in Honor of R.M. Hartwell, edited by John A. James and Mark Thomas, 202-29. Chicago: University of Chicago Press, 1994.

Dewey, Davis R. State Banking before the Civil War. Washington, D.C.: Government Printing Office, 1910.

Duke, Basil W. History of the Bank of Kentucky, 1792-1895. Louisville: J.P. Morton, 1895.

Dwyer, Gerald P., Jr. “Wildcat Banking, Banking Panics, and Free Banking in the United States.” Federal Reserve Bank of Atlanta Economic Review 81, no. 3 (1996): 1-20.

Engerman, Stanley L., and Robert E. Gallman. “U.S. Economic Growth, 1783-1860.” Research in Economic History 8 (1983): 1-46.

Esary, Logan. State Banking in Indiana, 1814-1873. Indiana University Studies No. 15. Bloomington: Indiana University Press, 1912.

Fenstermaker, J. Van. The Development of American Commercial Banking, 1782-1837. Kent, Ohio: Kent State University, 1965.

Fenstermaker, J. Van, and John E. Filer. “Impact of the First and Second Banks of the United States and the Suffolk System on New England Bank Money, 1791-1837.” Journal of Money, Credit, and Banking 18, no. 1 (1986): 28-40.

Friedman, Milton, and Anna J. Schwartz. “Has the Government Any Role in Money?” Journal of Monetary Economics 17, no. 1 (1986): 37-62.

Gallman, Robert E. “American Economic Growth before the Civil War: The Testimony of the Capital Stock Estimates.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 79-115. Chicago: University of Chicago Press, 1992.

Goldsmith, Raymond. Financial Structure and Development. New Haven: Yale University Press, 1969.

Golembe, Carter H. “The Deposit Insurance Legislation of 1933: An Examination of its Antecedents and Purposes.” Political Science Quarterly 76, no. 2 (1960): 181-200.

Golembe, Carter H. State Banks and the Economic Development of the West. New York: Arno Press, 1978.

Gouge, William M. A Short History of Paper Money and Banking in the United States. Philadelphia: T.W. Ustick, 1833.

Gras, N.S.B. The Massachusetts First National Bank of Boston, 1784-1934. Cambridge, MA: Harvard University Press, 1937.

Green, George D. Finance and Economic Development in the Old South: Louisiana Banking, 1804-1861. Stanford: Stanford University Press, 1972.

Hammond, Bray. Banks and Politics in America from the Revolution to the Civil War. Princeton: Princeton University Press, 1957.

Hasse, William F., Jr. A History of Banking in New Haven, Connecticut. New Haven: privately printed, 1946.

Hasse, William F., Jr. A History of Money and Banking in Connecticut. New Haven: privately printed, 1957.

Holdsworth, John Thom. Financing an Empire: History of Banking in Pennsylvania. Chicago: S.J. Clarke Publishing Company, 1928.

Huntington, Charles Clifford. A History of Banking and Currency in Ohio before the Civil War. Columbus: F. J. Herr Printing Company, 1915.

Knox, John Jay. A History of Banking in the United States. New York: Bradford Rhodes & Company, 1903.

Kuznets, Simon. “Foreword.” In Financial Intermediaries in the American Economy, by Raymond W. Goldsmith. Princeton: Princeton University Press, 1958.

Lake, Wilfred. “The End of the Suffolk System.” Journal of Economic History 7, no. 4 (1947): 183-207.

Lamoreaux, Naomi R. Insider Lending: Banks, Personal Connections, and Economic Development in Industrial New England. Cambridge: Cambridge University Press, 1994.

Lesesne, J. Mauldin. The Bank of the State of South Carolina. Columbia: University of South Carolina Press, 1970.

Lewis, Lawrence, Jr. A History of the Bank of North America: The First Bank Chartered in the United States. Philadelphia: J.B. Lippincott & Company, 1882.

Lockard, Paul A. Banks, Insider Lending and Industries of the Connecticut River Valley of Massachusetts, 1813-1860. Unpublished Ph.D. thesis, University of Massachusetts, 2000.

Martin, Joseph G. A Century of Finance. New York: Greenwood Press, 1969.

Moulton, H.G. “Commercial Banking and Capital Formation.” Journal of Political Economy 26 (1918): 484-508, 638-63, 705-31, 849-81.

Mullineaux, Donald J. “Competitive Monies and the Suffolk Banking System: A Contractual Perspective.” Southern Economic Journal 53 (1987): 884-98.

Nevins, Allan. History of the Bank of New York and Trust Company, 1784 to 1934. New York: privately printed, 1934.

New York. Bank Commissioners. “Annual Report of the Bank Commissioners.” New York General Assembly Document No. 74. Albany, 1835.

North, Douglass. “Institutional Change in American Economic History.” In American Economic Development in Historical Perspective, edited by Thomas Weiss and Donald Schaefer, 87-98. Stanford: Stanford University Press, 1994.

Rappaport, George David. Stability and Change in Revolutionary Pennsylvania: Banking, Politics, and Social Structure. University Park, PA: The Pennsylvania State University Press, 1996.

Redlich, Fritz. The Molding of American Banking: Men and Ideas. New York: Hafner Publishing Company, 1947.

Rockoff, Hugh. “The Free Banking Era: A Reexamination.” Journal of Money, Credit, and Banking 6, no. 2 (1974): 141-67.

Rockoff, Hugh. “New Evidence on the Free Banking Era in the United States.” American Economic Review 75, no. 4 (1985): 886-89.

Rolnick, Arthur J., and Warren E. Weber. “Free Banking, Wildcat Banking, and Shinplasters.” Federal Reserve Bank of Minneapolis Quarterly Review 6 (1982): 10-19.

Rolnick, Arthur J., and Warren E. Weber. “New Evidence on the Free Banking Era.” American Economic Review 73, no. 5 (1983): 1080-91.

Rolnick, Arthur J., Bruce D. Smith, and Warren E. Weber. “Lessons from a Laissez-Faire Payments System: The Suffolk Banking System (1825-58).” Federal Reserve Bank of Minneapolis Quarterly Review 22, no. 3 (1998): 11-21.

Royalty, Dale. “Banking and the Commonwealth Ideal in Kentucky, 1806-1822.” Register of the Kentucky Historical Society 77 (1979): 91-107.

Schumpeter, Joseph A. The Theory of Economic Development: An Inquiry into Profit, Capital, Credit, Interest, and the Business Cycle. Cambridge, MA: Harvard University Press, 1934.

Schweikart, Larry. Banking in the American South from the Age of Jackson to Reconstruction. Baton Rouge: Louisiana State University Press, 1987.

Simonton, William G. Maine and the Panic of 1837. Unpublished master’s thesis: University of Maine, 1971.

Sokoloff, Kenneth L. “Productivity Growth in Manufacturing during Early Industrialization.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman. Chicago: University of Chicago Press, 1986.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Spencer, Charles, Jr. The First Bank of Boston, 1784-1949. New York: Newcomen Society, 1949.

Starnes, George T. Sixty Years of Branch Banking in Virginia. New York: Macmillan Company, 1931.

Stokes, Howard Kemble. Chartered Banking in Rhode Island, 1791-1900. Providence: Preston & Rounds Company, 1902.

Sylla, Richard. “Forgotten Men of Money: Private Bankers in Early U.S. History.” Journal of Economic History 36, no. 2 (1976):

Temin, Peter. The Jacksonian Economy. New York: W. W. Norton & Company, 1969.

Trescott, Paul B. Financing American Enterprise: The Story of Commercial Banking. New York: Harper & Row, 1963.

Trivoli, George. The Suffolk Bank: A Study of a Free-Enterprise Clearing System. London: The Adam Smith Institute, 1979.

U.S. Comptroller of the Currency. Annual Report of the Comptroller of the Currency. Washington, D.C.: Government Printing Office, 1931.

Wainwright, Nicholas B. History of the Philadelphia National Bank. Philadelphia: William F. Fell Company, 1953.

Walker, Amasa. History of the Wickaboag Bank. Boston: Crosby, Nichols & Company, 1857.

Wallis, John Joseph. “What Caused the Panic of 1839?” Unpublished working paper, University of Maryland, October 2000.

Weiss, Thomas. “U.S. Labor Force Estimates and Economic Growth, 1800-1860.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 19-75. Chicago: University of Chicago Press, 1992.

Whitney, David R. The Suffolk Bank. Cambridge, MA: Riverside Press, 1878.

Wright, Robert E. “Artisans, Banks, Credit, and the Election of 1800.” The Pennsylvania Magazine of History and Biography 122, no. 3 (July 1998): 211-39.

Wright, Robert E. “Bank Ownership and Lending Patterns in New York and Pennsylvania, 1781-1831.” Business History Review 73, no. 1 (Spring 1999): 40-60.

1 Banknotes were small denomination IOUs printed by banks and circulated as currency. Modern U.S. money is simply banknotes issued by the Federal Reserve Bank, which has a monopoly privilege in the issue of legal tender currency. In antebellum America, when a bank made a loan, the borrower was typically handed banknotes with a face value equal to the dollar value of the loan. The borrower then spent these banknotes in purchasing goods and services, putting them into circulation. Contemporary law held that banks were required to redeem banknotes into gold and silver legal tender on demand. Banks found it profitable to issue notes because they typically held about 30 percent of the total value of banknotes in circulation as reserves. Thus, banks were able to leverage $30 in gold and silver into $100 in loans that returned about 7 percent interest on average.
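The fractional-reserve arithmetic in this footnote can be sketched as a back-of-the-envelope calculation. This is only an illustration of the figures given above (a 30 percent reserve ratio and a 7 percent average loan rate); the function names are invented for the example.

```python
def note_issue_capacity(specie_reserves, reserve_ratio=0.30):
    """Face value of banknotes a bank can keep in circulation
    while holding specie equal to `reserve_ratio` of that value."""
    return specie_reserves / reserve_ratio

def annual_interest_income(notes_outstanding, rate=0.07):
    """Interest earned per year if the notes are put out as loans
    at the average antebellum rate."""
    return notes_outstanding * rate

notes = note_issue_capacity(30)          # $30 in specie supports $100 in notes
income = annual_interest_income(notes)   # about $7 per year at 7 percent
print(round(notes, 2), round(income, 2))  # prints: 100.0 7.0
```

The point of the calculation is the leverage: the bank earns interest on the full $100 of loans while tying up only $30 of its own specie in reserve.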

2 Paul Lockard (2000) challenges Lamoreaux’s interpretation. In a study of 4 banks in the Connecticut River valley, Lockard finds that insiders did not dominate these banks’ resources. As provocative as Lockard’s findings are, he draws conclusions from a small and unrepresentative sample. Two of his four sample banks were savings banks: quasi-charitable organizations designed to encourage savings by the working classes and provide small loans. Thus, Lockard’s sample is effectively reduced to two banks. At these two banks, he identifies about 10 percent of loans as insider loans, but readily admits that he cannot always distinguish between insiders and outsiders. For a recent study of how early Americans used savings banks, see Alter, Goldin and Rotella (1994). The literature on savings banks is so large that it cannot be given its due here.

3 Interbank clearing involves the settling of balances between banks. Modern banks cash checks drawn on other banks and credit the funds to the depositor. The Federal Reserve system provides clearing services between banks. The accepting bank sends the checks to the Federal Reserve, who credits the sending bank’s accounts and sends the checks back to the bank on which they were drawn for reimbursement. In the antebellum era, interbank clearing involved sending banknotes back to issuing banks. Because New England had so many small and scattered banks, the costs of returning banknotes to their issuers were large and sometimes avoided by recirculating notes of distant banks rather than returning them. Regular clearings and redemptions served an important purpose, however, because they kept banks in touch with the current market conditions. A massive redemption of notes was indicative of a declining demand for money and credit. Because the bank’s reserves were drawn down with the redemptions, it was forced to reduce its volume of loans in accord with changing demand conditions.

4 The law held that banknotes were redeemable on demand into gold or silver coin or bullion. If a bank refused to redeem even a single $1 banknote, the banknote holder could have the bank closed and liquidated to recover his or her claim against it.

5 Rappaport (1996) found that the bank’s loans were about equally divided between insiders (shareholders and shareholders’ family and business associates) and outsiders, but nonshareholders received loans about 30 percent smaller than shareholders. The question of whether this was an “insider” bank remains open, and depends largely on one’s definition. Any modern bank that made half of its loans to shareholders and their families would be viewed as an “insider” bank. It is less clear where the line can be usefully drawn for antebellum banks.

6 Real-bills lending followed from a nineteenth-century banking philosophy, which held that bank lending should be used to finance the warehousing or wholesaling of already-produced goods. Loans made on these bases were thought to be self-liquidating in that the loan was made against readily sold collateral actually in the hands of a merchant. Under the real-bills doctrine, the banks’ proper functions were to bridge the gap between production and retail sale of goods. A strict adherence to real-bills tenets excluded loans on property (mortgages), loans on goods in process (trade credit), or loans to start-up firms (venture capital). Thus, real-bills lending prescribed a limited role for banks and bank credit. Few banks were strict adherents to the doctrine, but many followed it in large part.

7 Robert E. Wright (1998) offers a different interpretation, but notes that Burr pushed the bill through at the end of a busy legislative session so that many legislators voted on the bill without having read it thoroughly.

Citation: Bodenhorn, Howard. “Antebellum Banking in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/antebellum-banking-in-the-united-states/

The Economic History of Australia from 1788: An Introduction

Bernard Attard, University of Leicester

Introduction

The economic benefits of establishing a British colony in Australia in 1788 were not immediately obvious. The Government’s motives have been debated but the settlement’s early character and prospects were dominated by its original function as a jail. Colonization nevertheless began a radical change in the pattern of human activity and resource use in that part of the world, and by the 1890s a highly successful settler economy had been established on the basis of a favorable climate in large parts of the southeast (including Tasmania) and the southwest corner; the suitability of land for European pastoralism and agriculture; an abundance of mineral wealth; and the ease with which these resources were appropriated from the indigenous population. This article will focus on the creation of a colonial economy from 1788 and its structural change during the twentieth century. To simplify, it will divide Australian economic history into four periods, two of which overlap. These are defined by the foundation of the ‘bridgehead economy’ before 1820; the growth of a colonial economy between 1820 and 1930; the rise of manufacturing and the protectionist state between 1891 and 1973; and the experience of liberalization and structural change since 1973. The article will conclude by suggesting briefly some of the similarities between Australia and other comparable settler economies, as well as the ways in which it has differed from them.

The Bridgehead Economy, 1788-1820

The description ‘bridgehead economy’ was used by one of Australia’s foremost economic historians, N. G. Butlin, to refer to the earliest decades of British occupation when the colony was essentially a penal institution. The main settlements were at Port Jackson (modern Sydney, 1788) in New South Wales and Hobart (1804) in what was then Van Diemen’s Land (modern Tasmania). The colony barely survived its first years and was largely neglected for much of the following quarter-century while the British government was preoccupied by the war with France. An important beginning was nevertheless made in the creation of a private economy to support the penal regime. Above all, agriculture was established on the basis of land grants to senior officials and emancipated convicts, and limited freedoms were allowed to convicts to supply a range of goods and services. Although economic life depended heavily on the government Commissariat as a supplier of goods, money and foreign exchange, individual rights in property and labor were recognized, and private markets for both started to function. In 1808, the recall of the New South Wales Corps, whose officers had benefited most from access to land and imported goods (thus hopelessly entangling public and private interests), coupled with the appointment of a new governor, Lachlan Macquarie, in the following year, brought about a greater separation of the private economy from the activities and interests of the colonial government. With a significant increase in the numbers transported after 1810, New South Wales’ future became more secure. As laborers, craftsmen, clerks and tradesmen, many convicts possessed the skills required in the new settlements. As their terms expired, they also added permanently to the free population. Over time, this would inevitably change the colony’s character.

Natural Resources and the Colonial Economy, 1820-1930

Pastoral and Rural Expansion

For Butlin, the developments around 1810 were a turning point in the creation of a ‘colonial’ economy. Many historians have preferred to view those during the 1820s as more significant. From that decade, economic growth was based increasingly upon the production of fine wool and other rural commodities for markets in Britain and the industrializing economies of northwestern Europe. This growth was interrupted by two major depressions during the 1840s and 1890s and stimulated in complex ways by the rich gold discoveries in Victoria in 1851, but the underlying dynamics were essentially unchanged. At different times, the extraction of natural resources, whether maritime before the 1840s or later gold and other minerals, was also important. Agriculture, local manufacturing and construction industries expanded to meet the immediate needs of growing populations, which concentrated increasingly in the main urban centers. The colonial economy’s structure, growth of population and significance of urbanization are illustrated in tables 1 and 2. The opportunities for large profits in pastoralism and mining attracted considerable amounts of British capital, while expansion generally was supported by enormous government outlays for transport, communication and urban infrastructures, which also depended heavily on British finance. As the economy expanded, large-scale immigration became necessary to satisfy the growing demand for workers, especially after the end of convict transportation to the eastern mainland in 1840. The costs of immigration were subsidized by colonial governments, with settlers coming predominantly from the United Kingdom and bringing skills that contributed enormously to the economy’s growth. All this provided the foundation for the establishment of free colonial societies. 
In turn, the institutions associated with these — including the rule of law, secure property rights, and stable and democratic political systems — created conditions that, on balance, fostered growth. In addition to New South Wales, four other British colonies were established on the mainland: Western Australia (1829), South Australia (1836), Victoria (1851) and Queensland (1859). Van Diemen’s Land (Tasmania after 1856) became a separate colony in 1825. From the 1850s, these colonies acquired responsible government. In 1901, they federated, creating the Commonwealth of Australia.

Table 1
The Colonial Economy: Percentage Shares of GDP, 1891 Prices, 1861-1911

Year    Pastoral  Other rural  Mining  Manuf.  Building  Services  Rent
1861       9.3       13.0       17.5    14.2      8.4      28.8     8.6
1891      16.1       12.4        6.7    16.6      8.5      29.2    10.3
1911      14.8       16.7        9.0    17.1      5.3      28.7     8.3

Source: Haig (2001), Table A1. Totals do not sum to 100 because of rounding.

Table 2
Colonial Populations (thousands), 1851-1911

Year    Australia    NSW    Victoria    Sydney    Melbourne
1851        257      100        46         54         29
1861        669      198       328         96        125
1891      1,704      608       598        400        473
1911      2,313      858       656        648        593

Source: McCarty (1974), p. 21; Vamplew (1987), POP 26-34.

The process of colonial growth began with two related developments. First, in 1820, Macquarie responded to land pressure in the districts immediately surrounding Sydney by relaxing restrictions on settlement. Soon the outward movement of herdsmen seeking new pastures became uncontrollable. From the 1820s, the British authorities also encouraged private enterprise by the wholesale assignment of convicts to private employers and easy access to land. In 1831, the principles of systematic colonization popularized by Edward Gibbon Wakefield (1796-1862) were put into practice in New South Wales with the substitution of land sales for grants in order to finance immigration. This, however, did not affect the continued outward movement of pastoralists who simply occupied land where they could find it beyond the official limits of settlement. By 1840, they had claimed a vast swathe of territory two hundred miles in depth running from Moreton Bay in the north (the site of modern Brisbane) through the Port Phillip District (the future colony of Victoria, whose capital Melbourne was marked out in 1837) to Adelaide in South Australia. The absence of any legal title meant that these intruders became known as ‘squatters’ and the terms of their tenure were not finally settled until 1846 after a prolonged political struggle with the Governor of New South Wales, Sir George Gipps.

The impact of the original penal settlements on the indigenous population had been enormous. The consequences of squatting after 1820 were equally devastating as the land and natural resources upon which indigenous hunter-gathering activities and environmental management depended were appropriated on a massive scale. Aboriginal populations collapsed in the face of disease, violence and forced removal until they survived only on the margins of the new pastoral economy, on government reserves, or in the arid parts of the continent least touched by white settlement. The process would be repeated again in northern Australia during the second half of the century.

For the colonists this could happen because Australia was considered terra nullius, vacant land freely available for occupation and exploitation. The encouragement of private enterprise, the reception of Wakefieldian ideas, and the wholesale spread of white settlement were all part of a profound transformation in official and private perceptions of Australia’s prospects and economic value as a British colony. Millennia of fire-stick management to assist hunter-gathering had created inland grasslands in the southeast that were ideally suited to the production of fine wool. Both the physical environment and the official incentives just described raised expectations of considerable profits to be made in pastoral enterprise and attracted a growing stream of British capital in the form of organizations like the Australian Agricultural Company (1824); new corporate settlements in Western Australia (1829) and South Australia (1836); and, from the 1830s, British banks and mortgage companies formed to operate in the colonies. By the 1830s, wool had overtaken whale oil as the colony’s most important export, and by 1850 New South Wales had displaced Germany as the main overseas supplier to British industry (see table 3). Allowing for the colonial economy’s growing complexity, the cycle of growth based upon land settlement, exports and British capital would be repeated twice. The first pastoral boom ended in a depression which was at its worst during 1842-43. Although output continued to grow during the 1840s, the best land had been occupied in the absence of substantial investment in fencing and water supplies. Without further geographical expansion, opportunities for high profits were reduced and the flow of British capital dried up, contributing to a wider downturn caused by drought and mercantile failure.

Table 3
Imports of Wool into Britain (thousands of bales), 1830-50

German Australian
1830 74.5 8.0
1840 63.3 41.0
1850 30.5 137.2

Source: Sinclair (1976), p. 46

When pastoral growth revived during the 1860s, borrowed funds were used to fence properties and secure access to water. This in turn allowed a further extension of pastoral production into the more environmentally fragile semi-arid interior districts of New South Wales, particularly during the 1880s. As the mobs of sheep moved further inland, colonial governments increased the scale of their railway construction programs, some competing to capture the freight to ports. Technical innovation and government sponsorship of land settlement brought greater diversity to the rural economy (see table 4). Exports of South Australian wheat started in the 1870s. The development of drought resistant grain varieties from the turn of the century led to an enormous expansion of sown acreage in both the southeast and southwest. From the 1880s, sugar production increased in Queensland, although mainly for the domestic market. From the 1890s, refrigeration made it possible to export meat, dairy products and fruit.

Table 4
Australian Exports (percentages of total value of exports), 1881-1928/29

Period           Wool  Minerals  Wheat, flour  Butter  Meat  Fruit
1881-90          54.1    27.2        5.3         0.1    1.2   0.2
1891-1900        43.5    33.1        2.9         2.4    4.1   0.3
1901-13          34.3    35.4        9.7         4.1    5.1   0.5
1920/21-1928/29  42.9     8.8       20.5         5.6    4.6   2.2

Source: Sinclair (1976), p. 166

Gold and Its Consequences

Alongside rural growth and diversification, the remarkable gold discoveries in central Victoria in 1851 brought increased complexity to the process of economic development. The news sparked an immediate surge of gold seekers into the colony, which was soon reinforced by a flood of overseas migrants. Until the 1870s, gold displaced wool as Australia’s most valuable export. Rural industries either expanded output (wheat in South Australia) or, in the case of pastoralists, switched production to meat and tallow, to supply a much larger domestic market. Minerals had been extracted since earliest settlement and, while yields on the Victorian gold fields soon declined, rich mineral deposits continued to be found. During the 1880s alone these included silver, lead and zinc at Broken Hill in New South Wales; copper at Mount Lyell in Tasmania; and gold at Charters Towers and Mount Morgan in Queensland. From 1893, what eventually became the richest goldfields in Australia were discovered at Coolgardie in Western Australia. The mining industry’s overall contribution to output and exports is illustrated in tables 1 and 4.

In Victoria, the deposits of easily extracted alluvial gold were soon exhausted and mining was taken over by companies that could command the financial and organizational resources needed to work the deep lodes. But the enormous permanent addition to the colonial population caused by the gold rush had profound effects throughout eastern Australia, dramatically accelerating the growth of the local market and workforce, and deeply disturbing the social balance that had emerged during the decade before. Between 1851 and 1861, the Australian population more than doubled. In Victoria it increased sevenfold; Melbourne outgrew Sydney, Chicago and San Francisco (see table 2). Significantly enlarged populations required social infrastructure, political representation, employment and land; and the new colonial legislatures were compelled to respond. The way this was played out varied between colonies but the common outcomes were the introduction of manhood suffrage, access to land through ‘free selection’ of small holdings, and, in the Victorian case, the introduction of a protectionist tariff in 1865. The particular age structure of the migrants of the 1850s also had long-term effects on the building cycle, notably in Victoria. The demand for housing accelerated during the 1880s, as the children of the gold generation matured and established their own households. With pastoral expansion and public investment also nearing their peaks, the colony experienced a speculative boom which added to the imbalances already being caused by falling export prices and rising overseas debt. The boom ended with the wholesale collapse of building companies, mortgage banks and other financial institutions during 1891-92 and the stoppage of much of the banking system during 1893.

The depression of the 1890s was worst in Victoria. Its impact on employment was softened by the Western Australian gold discoveries, which drew population away, but the colonial economy had grown to such an extent since the 1850s that the stimulus provided by the earlier gold finds could not be repeated. Severe drought in eastern Australia from the mid-1890s until 1903 caused the pastoral industry to contract. Yet, as we have seen, technological innovation also created opportunities for other rural producers, who were now heavily supported by government with little direct involvement by foreign investors. The final phase of rural expansion, with its associated public investment in rural (and increasingly urban) infrastructure continued until the end of the 1920s. Yields declined, however, as farmers moved onto the most marginal land. The terms of trade also deteriorated with the oversupply of several commodities in world markets after the First World War. As a result, the burden of servicing foreign debt rose once again. Australia’s position as a capital importer and exporter of natural resources meant that the Great Depression arrived early. From late 1929, the closure of overseas capital markets and collapse of export prices forced the Federal Government to take drastic measures to protect the balance of payments. The falls in investment and income transmitted the contraction to the rest of the economy. By 1932, average monthly unemployment amongst trade union members was over 22 percent. Although natural resource industries continued to have enduring importance as earners of foreign exchange, the Depression finally ended the long period in which land settlement and technical innovation had together provided a secure foundation for economic growth.

Manufacturing and the Protected Economy, 1891-1973

The ‘Australian Settlement’

There is considerable chronological overlap between the previous section, which surveyed the nineteenth-century growth of a colonial economy based on the exploitation of natural resources, and this one. The overlap is a convenient way of approaching the two most important developments in Australian economic history between Federation and the 1970s: the enormous increase in government regulation after 1901 and, closely linked to this, the expansion of domestic manufacturing, which from the Second World War became the most dynamic part of the Australian economy.

The creation of the Commonwealth of Australia on 1 January 1901 broadened the opportunities for public intervention in private markets. The new Federal Government was given clearly-defined but limited powers over obviously ‘national’ matters like customs duties. The rest, including many affecting economic development and social welfare, remained with the states. The most immediate economic consequence was the abolition of inter-colonial tariffs and the establishment of a single Australian market. But the Commonwealth also soon set about transferring to the national level several institutions with which different colonies had experimented during the 1890s. These included arrangements for the compulsory arbitration of industrial disputes by government tribunals, which also had the power to fix wages, and a discriminatory ‘white Australia’ immigration policy designed to exclude non-Europeans from the labor market. Both were partly responses to organized labor’s electoral success during the 1890s. Urban business and professional interests had always been represented in colonial legislatures; during the 1910s, rural producers also formed their own political parties. Subsequently, state and federal governments were typically formed by either the Australian Labor Party or coalitions of urban conservatives and the Country Party. The constituencies they each represented were thus able to influence the regulatory structure to protect themselves against the full impact of market outcomes, whether in the form of import competition, volatile commodity prices or uncertain employment conditions. The institutional arrangements they created have been described as the ‘Australian settlement’ because they balanced competing producer interests and arguably provided a stable framework for economic development until the 1970s, despite the inevitable costs.

The Growth of Manufacturing

An important part of the ‘Australian settlement’ was the imposition of a uniform federal tariff and its eventual elaboration into a system of ‘protection all round’. The original intended beneficiaries were manufacturers and their employees; indeed, when the first protectionist tariff was introduced in 1907, its operation was linked to the requirement that employers pay their workers ‘fair and reasonable wages’. Manufacturing’s actual contribution to economic growth before Federation has been controversial. The population influx of the 1850s widened opportunities for import-substitution but the best evidence suggests that manufacturing grew slowly as the industrial workforce increased (see table 1). Production was small-scale and confined largely to the processing of rural products and raw materials; assembly and repair-work; or the manufacture of goods for immediate consumption (e.g. soap and candle-making, brewing and distilling). Clothing and textile output was limited to a few lines. For all manufacturing, growth was restrained by the market’s small size and the limited opportunities for technical change it afforded.

After Federation, production was stimulated by several factors: rural expansion, the increasing use of agricultural machinery and refrigeration equipment, and the growing propensity of farm incomes to be spent locally. The removal of inter-colonial tariffs may also have helped. The statistical evidence indicates that between 1901 and the outbreak of the First World War manufacturing grew faster than the economy as a whole, while output per worker increased. But manufacturers also aspired mainly to supply the domestic market and expended increasing energy on retaining privileged access. Tariffs rose considerably between the two world wars. Some sectors became more capital intensive, particularly with the establishment of a local steel industry, the beginnings of automobile manufacture, and the greater use of electricity. But, except during the first half of the 1920s, there was little increase in labor productivity and the inter-war expansion of textile manufacturing reflected the heavy bias towards import substitution. Not until the Second World War and after did manufacturing growth accelerate and extend to those sectors most characteristic of an advanced industrial economy (table 5). Amongst these were automobiles, chemicals, electrical and electronic equipment, and iron-and-steel. Growth was sustained during the 1950s by similar factors to those operating in other countries during the ‘long boom’, including a growing stream of American direct investment, access to new and better technology, and stable conditions of full employment.

Table 5
Manufacturing and the Australian Economy, 1913-1949

1938-39 prices

Year     Manufacturing share of GDP, %   Manufacturing growth, % p.a.   GDP growth, % p.a.
1913/14             21.9                              —                         —
1928/29             23.6                             2.6                       2.1
1948/49             29.8                             3.4                       2.2

Calculated from Haig (2001), Table A2. Rates of change are average annual rates since the preceding year shown in the first column.

Manufacturing peaked in the mid-1960s at about 28 percent of national output (measured in 1968-69 prices) but natural resource industries remained the most important suppliers of exports. Since the 1920s, over-supply in world markets and the need to compensate farmers for manufacturing protection had meant that virtually all rural industries, with the exception of wool, had been drawn into a complicated system of subsidies, price controls and market interventions at both federal and state levels. The post-war boom in the world economy increased demand for commodities, benefiting rural producers but also creating new opportunities for Australian miners. Most important of all, the first surge of breakneck growth in East Asia opened a vast new market for iron ore, coal and other mining products. Britain’s significance as a trading partner had declined markedly since the 1950s. By the end of the 1960s, Japan had overtaken it as Australia’s largest customer, while the United States was now the main provider of imports.

The mining bonanza contributed to the boom conditions experienced generally after 1950. The Federal Government played its part by using the full range of macroeconomic policies that were also increasingly familiar in similar western countries to secure stability and full employment. It encouraged high immigration, relaxing the entry criteria to allow in large numbers of southern Europeans, who added directly to the workforce, but also brought knowledge and experience. With state governments, the Commonwealth increased expenditure on education significantly, effectively entering the field for the first time after 1945. Access to secondary education was widened with the abandonment of fees in government schools and federal finance secured an enormous expansion of university places, especially after 1960. Some weaknesses remained. Enrolment rates after primary school were below those in many industrial countries and funding for technical education was poor. Despite this, the Australian population’s rising levels of education and skill continued to be important additional sources of growth. Finally, although government advisers expressed misgivings, industry policy remained determinedly interventionist. While state governments competed to attract manufacturing investment with tax and other incentives, by the 1960s protection had reached its highest level, with Australia playing virtually no part in the General Agreement on Tariffs and Trade (GATT), despite being an original signatory. The effects of rising tariffs since 1900 were evident in the considerable decline in Australia’s openness to trade (Table 6). Yet, as the post-war boom approached its end, the country still relied upon commodity exports and foreign investment to purchase the manufactures it was unable to produce itself. The impossibility of sustaining growth in this way was already becoming clear, even though the full implications would only be felt during the decades to come.

Table 6
Trade (Exports Plus Imports)
as a Share of GDP, Current Prices, %

1900/1 44.9
1928/29 36.9
1938/39 32.7
1964/65 33.3
1972/73 29.5

Calculated from Vamplew (1987), ANA 119-129.

Liberalization and Structural Change, 1973-2005

From the beginning of the 1970s, instability in the world economy and weakness at home ended Australia’s experience of the post-war boom. During the following decades, manufacturing’s share in output (table 7) and employment fell, while the long-term relative decline of commodity prices meant that natural resources could no longer be relied on to cover the cost of imports, let alone the long-standing deficits in payments for services, migrant remittances and interest on foreign debt. Until the early 1990s, Australia also suffered from persistent inflation and rising unemployment (which remained permanently higher; see Figure 1). As a consequence, per capita incomes fluctuated during the 1970s, and the economy contracted in absolute terms during 1982-83 and 1990-91.

Even before the 1970s, new sources of growth and rising living standards had been needed, but the opportunities for economic change were restricted by the elaborate regulatory structure that had evolved since Federation. During that decade itself, policy and outlook were essentially defensive and backward looking, despite calls for reform and some willingness to alter the tariff. Governments sought to protect employment in established industries, while dependence on mineral exports actually increased as a result of the commodity booms at the decade’s beginning and end. By the 1980s, however, it was clear that the country’s existing institutions were failing and fundamental reform was required.

Table 7
The Australian Economy, 1974-2004

A. Percentage shares of value-added, constant prices

1974 1984 1994 2002
Agriculture 4.4 4.3 3.0 2.7
Manufacturing 18.1 15.2 13.3 11.8
Other industry, inc. mining 14.2 14.0 14.6 14.4
Services 63.4 66.4 69.1 71.1

B. Per capita GDP, annual average rate of growth %, constant prices

1973-84 1.2
1984-94 1.7
1994-2004 2.5

Calculated from World Bank, World Development Indicators (Sept. 2005).

Figure 1
Unemployment, 1971-2005, percent

Source: Reserve Bank of Australia (1988); Reserve Bank of Australia, G07Hist.xls. Survey data at August. The method of data collection changed in 1978.

The catalyst was the resumption of the relative fall of commodity prices since the Second World War, which meant that the cost of purchasing manufactured goods inexorably rose for primary producers. The decline had been temporarily reversed by the oil shocks of the 1970s but, from the 1980/81 financial year until the decade’s end, the value of Australia’s merchandise imports exceeded that of merchandise exports in every year but two. The overall deficit on current account, measured as a proportion of GDP, also became permanently higher, averaging around 4.7 percent. During the 1930s, deflation had been followed by the further closing of the Australian economy. There was no longer much scope for this. Manufacturing had stagnated since the 1960s, suffering especially from the inflation of wage and other costs during the 1970s. It was particularly badly affected by the recession of 1982-83, when unemployment rose to almost ten percent, its highest level since the Great Depression. In 1983, a new federal Labor Government led by Bob Hawke sought to engineer a recovery through an ‘Accord’ with the trade union movement which aimed at creating employment by holding down real wages. But under Hawke and his Treasurer, Paul Keating — who warned colorfully that otherwise the country risked becoming a ‘banana republic’ — Labor also started to introduce broader reforms to increase the efficiency of Australian firms by improving their access to foreign finance and exposing them to greater competition. Costs would fall and exports of more profitable manufactures increase, reducing the economy’s dependence on commodities. During the 1980s and 1990s, the reforms deepened and widened, extending to state governments and continuing with the election of a conservative Liberal-National Party government under John Howard in 1996, as each act of deregulation invited further measures to consolidate them and increase their effectiveness.
Key reforms included the floating of the Australian dollar and the deregulation of the financial system; the progressive removal of protection of most manufacturing and agriculture; the dismantling of the centralized system of wage-fixing; taxation reform; and the promotion of greater competition and better resource use through privatization and the restructuring of publicly-owned corporations, the elimination of government monopolies, and the deregulation of sectors like transport and telecommunications. In contrast with the 1930s, the prospects of further domestic reform were improved by an increasingly favorable international climate. Australia contributed by joining other nations in the Cairns Group to negotiate reductions of agricultural protection during the Uruguay round of GATT negotiations and by promoting regional liberalization through the Asia Pacific Economic Cooperation (APEC) forum.

Table 8
Exports and Openness, 1983-2004

                       Shares of total exports, %                Exports + imports
Year    Rural  Resource  Manuf.  Other goods  Services         as share of GDP, %
1983      30      34        9         3           24                   26
1989      23      37       11         5           24                   27
1999      20      34       17         4           24                   37
2004      18      33       19         6           23                   39

Calculated from: Reserve Bank of Australia, G10Hist.xls and H03Hist.xls; World Bank, World Development Indicators (Sept. 2005). Chain volume measures, except shares of GDP, 1983, which are at current prices.

The extent to which institutional reform had successfully brought about long-term structural change was still not clear at the end of the century. Recovery from the 1982-83 recession was based upon a strong revival of employment. By contrast, the uninterrupted growth experienced since 1992 arose from increases in the combined productivity of workers and capital. If this persisted, it marked a historic change in the sources of growth, from reliance on the accumulation of capital and the increase of the workforce to improvements in the efficiency of both. From the 1990s, the Australian economy also became more open (table 8). Manufactured goods increased their share of exports, while rural products continued to decline. Yet, although growth was more broadly-based, rapid and sustained (table 7), the country continued to experience large trade and current account deficits, which were augmented by the considerable increase of foreign debt after financial deregulation during the 1980s. Unemployment also failed to return to its pre-1974 level of around 2 percent, although much of the permanent rise occurred during the mid to late 1970s. In 2005, it remained at 5 percent (Figure 1). Institutional reform clearly contributed to these changes in economic structure and performance but they were also influenced by other factors, including falling transport costs, the communications and information revolutions, the greater openness of the international economy, and the remarkable burst of economic growth during the century’s final decades in southeast and east Asia, above all China. Reform was also complemented by policies to provide the skills needed in a technologically-sophisticated, increasingly service-oriented economy. Retention rates in the last years of secondary education doubled during the 1980s, followed by a sharp increase of enrolments in technical colleges and universities.
By 2002, total expenditure on education as a proportion of national income had caught up with the average of member countries of the OECD (Table 9). Shortages were nevertheless beginning to be experienced in the engineering and other skilled trades, raising questions about some priorities and the diminishing relative financial contribution of government to tertiary education.

Table 9
Tertiary Enrolments and Education Expenditure, 2002

                 Tertiary enrolments,   Education expenditure as a
                 gross percent          proportion of GDP, percent
Australia             63.22                       6.0
OECD                  61.68                       5.8
United States         70.67                       7.2

Source: World Bank, World Development Indicators (Sept. 2005); OECD (2005). Gross enrolments are total enrolments, regardless of age, as a proportion of the population in the relevant official age group. OECD enrolments are for fifteen high-income members only.

Summing Up: The Australian Economy in a Wider Context

Virtually since the beginning of European occupation, the Australian economy had provided the original British colonizers, generations of migrants, and the descendants of both with a remarkably high standard of living. Towards the end of the nineteenth century, this was by all measures the highest in the world (see table 10). After 1900, national income per member of the population slipped behind that of several countries, but continued to compare favorably with most. In 2004, Australia was ranked behind only Norway and Sweden in the United Nation’s Human Development Index. Economic historians have differed over the sources of growth that made this possible. Butlin emphasized the significance of local factors like the unusually high rate of urbanization and the expansion of domestic manufacturing. In important respects, however, Australia was subject to the same forces as other European settler societies in New Zealand and Latin America, and its development bore striking similarities to theirs. From the 1820s, its economy grew as one frontier of an expanding western capitalism. With its close institutional ties to, and complementarities with, the most dynamic parts of the world economy, it drew capital and migrants from them, supplied them with commodities, and shared the benefits of their growth. Like other settler societies, it sought population growth as an end in itself and, from the turn of the nineteenth century, aspired to the creation of a national manufacturing base. Finally, when openness to the world economy appeared to threaten growth and living standards, governments intervened to regulate and protect with broader social objectives in mind. But there were also striking contrasts with other settler economies, notably those in Latin America like Argentina, with which it has been frequently compared. 
In particular, Australia responded to successive challenges to growth by finding new opportunities for wealth creation with a minimum of political disturbance, social conflict or economic instability, while sharing a rising national income as widely as possible.

Table 10
Per capita GDP in Australia, United States and Argentina
(1990 international dollars)

Year    Australia    United States    Argentina
1870      3,641          2,457          1,311
1890      4,433          3,396          2,152
1950      7,493          9,561          4,987
1998     20,390         27,331          9,219

Sources: Australia: GDP, Haig (2001) as converted in Maddison (2003); all other data from Maddison (1995) and (2001).

From the mid-twentieth century, Australia’s experience also resembled that of many advanced western countries. This included the post-war willingness to use macroeconomic policy to maintain growth and full employment; and, after the 1970s, the abandonment of much government intervention in private markets while at the same time retaining strong social services and seeking to improve education and training. Australia also experienced a similar relative decline of manufacturing, permanent rise of unemployment, and transition to a more service-based economy typical of high income countries. By the beginning of the new millennium, services accounted for over 70 percent of national income (table 7). Australia remained vulnerable as an exporter of commodities and importer of capital but its endowment of natural resources and the skills of its population were also creating opportunities. The country was again favorably positioned to take advantage of growth in the most dynamic parts of the world economy, particularly China. With the final abandonment of the White Australia policy during the 1970s, it had also started to integrate more closely with its region. This was further evidence of the capacity to change that allowed Australians to face the future with confidence.

References:

Anderson, Kym. “Australia in the International Economy.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 33-49. Cambridge: Cambridge University Press, 2001.

Blainey, Geoffrey. The Rush that Never Ended: A History of Australian Mining, fourth edition. Melbourne: Melbourne University Press, 1993.

Borland, Jeff. “Unemployment.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 207-228. Cambridge: Cambridge University Press, 2001.

Butlin, N. G. Australian Domestic Product, Investment and Foreign Borrowing 1861-1938/39. Cambridge: Cambridge University Press, 1962.

Butlin, N.G. Economics and the Dreamtime: A Hypothetical History. Cambridge: Cambridge University Press, 1993.

Butlin, N.G. Forming a Colonial Economy: Australia, 1810-1850. Cambridge: Cambridge University Press, 1994.

Butlin, N.G. Investment in Australian Economic Development, 1861-1900. Cambridge: Cambridge University Press, 1964.

Butlin, N. G., A. Barnard and J. J. Pincus. Government and Capitalism: Public and Private Choice in Twentieth Century Australia. Sydney: George Allen and Unwin, 1982.

Butlin, S. J. Foundations of the Australian Monetary System, 1788-1851. Sydney: Sydney University Press, 1968.

Chapman, Bruce, and Glenn Withers. “Human Capital Accumulation: Education and Immigration.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 242-267. Cambridge: Cambridge University Press, 2001.

Dowrick, Steve. “Productivity Boom: Miracle or Mirage?” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 19-32. Cambridge: Cambridge University Press, 2001.

Economist. “Has He Got the Ticker? A Survey of Australia.” 7 May 2005.

Haig, B. D. “Australian Economic Growth and Structural Change in the 1950s: An International Comparison.” Australian Economic History Review 18, no. 1 (1978): 29-45.

Haig, B.D. “Manufacturing Output and Productivity 1910 to 1948/49.” Australian Economic History Review 15, no. 2 (1975): 136-61.

Haig, B.D. “New Estimates of Australian GDP: 1861-1948/49.” Australian Economic History Review 41, no. 1 (2001): 1-34.

Haig, B. D., and N. G. Cain. “Industrialization and Productivity: Australian Manufacturing in the 1920s and 1950s.” Explorations in Economic History 20, no. 2 (1983): 183-98.

Jackson, R. V. Australian Economic Development in the Nineteenth Century. Canberra: Australian National University Press, 1977.

Jackson, R.V. “The Colonial Economies: An Introduction.” Australian Economic History Review 38, no. 1 (1998): 1-15.

Kelly, Paul. The End of Certainty: The Story of the 1980s. Sydney: Allen and Unwin, 1992.

Macintyre, Stuart. A Concise History of Australia. Cambridge: Cambridge University Press, 1999.

McCarthy, J. W. “Australian Capital Cities in the Nineteenth Century.” In Urbanization in Australia; The Nineteenth Century, edited by J. W. McCarthy and C. B. Schedvin, 9-39. Sydney: Sydney University Press, 1974.

McLean, I.W. “Australian Economic Growth in Historical Perspective.” The Economic Record 80, no. 250 (2004): 330-45.

Maddison, Angus. Monitoring the World Economy 1820-1992. Paris: OECD, 1995.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Maddison, Angus. The World Economy: Historical Statistics. Paris: OECD, 2003.

Meredith, David, and Barrie Dyster. Australia in the Global Economy: Continuity and Change. Cambridge: Cambridge University Press, 1999.

Nicholas, Stephen, editor. Convict Workers: Reinterpreting Australia’s Past. Cambridge: Cambridge University Press, 1988.

OECD. Education at a Glance 2005: Tables. Paris: OECD, 2005 [cited 9 February 2006]. Available from http://www.oecd.org/document/11/0,2340,en_2825_495609_35321099_1_1_1_1,00.html.

Pope, David, and Glenn Withers. “The Role of Human Capital in Australia’s Long-Term Economic Growth.” Paper presented to 24th Conference of Economists, Adelaide, 1995.

Reserve Bank of Australia. “Australian Economic Statistics: 1949-50 to 1986-7: I Tables.” Occasional Paper No. 8A (1988).

Reserve Bank of Australia. Current Account – Balance of Payments – H1 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/H01bhist.xls.

Reserve Bank of Australia. Gross Domestic Product – G10 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/G10hist.xls.

Reserve Bank of Australia. Unemployment – Labour Force – G1 [cited 2 February 2006]. Available from http://www.rba.gov.au/Statistics/Bulletin/G07hist.xls.

Schedvin, C. B. Australia and the Great Depression: A Study of Economic Development and Policy in the 1920s and 1930s. Sydney: Sydney University Press, 1970.

Schedvin, C.B. “Midas and the Merino: A Perspective on Australian Economic History.” Economic History Review 32, no. 4 (1979): 542-56.

Sinclair, W. A. The Process of Economic Development in Australia. Melbourne: Longman Cheshire, 1976.

United Nations Development Programme. Human Development Index [cited 29 November 2005]. Available from http://hdr.undp.org/statistics/data/indicators.cfm?x=1&y=1&z=1.

Vamplew, Wray, ed. Australians: Historical Statistics. Australians: A Historical Library, edited by Alan D. Gilbert and K. S. Inglis. Sydney: Fairfax, Syme and Weldon Associates, 1987.

White, Colin. Mastering Risk: Environment, Markets and Politics in Australian Economic History. Melbourne: Oxford University Press, 1992.

World Bank. World Development Indicators ESDS International, University of Manchester, September 2005 [cited 29 November 2005]. Available from http://www.esds.ac.uk/International/Introduction.asp.

Citation: Attard, Bernard. “The Economic History of Australia from 1788: An Introduction.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-australia-from-1788-an-introduction/

Advertising Bans in the United States

Jon P. Nelson, Pennsylvania State University

Freedom of expression has always ranked high on the American scale of values and fundamental rights. This essay addresses regulation of “commercial speech,” which is defined as speech or messages that propose a commercial transaction. Regulation of commercial advertising occurs in several forms, but it is often controversial. In 1938, the Federal Trade Commission (FTC) was given the authority to regulate “unfair or deceptive” advertising. Congressional hearings were first held in 1939 on proposals to ban radio advertising of alcohol beverages (Russell 1940; U.S. Congress 1939, 1952). Actions by the FTC during 1964-69 led to the 1971 ban of radio and television advertising of cigarettes. In 1997, the distilled spirits industry reversed a six-decade-old policy and began using cable television advertising. Numerous groups immediately called for removal of the ads, and Rep. Joseph Kennedy II (D, MA) introduced a “Just Say No” bill that would have banned all alcohol advertisements from the airwaves. In 1998, the Master Settlement Agreement between the state attorneys general and the tobacco industry put an end to billboard advertising of cigarettes. Do these regulations make any difference for the demand for alcohol or cigarettes? When will an advertising ban increase consumer welfare? What legal standards apply to commercial speech, and how do they affect the extent and manner in which governments can restrict advertising?

For many years, the Supreme Court held that the broad powers of government to regulate commerce included the “lesser power” to restrict commercial speech.1 In Valentine (1942), the Court held that the First Amendment does not protect “purely commercial advertising.” This view was applied when the courts upheld the ban of broadcast advertising of cigarettes, 333 F. Supp 582 (1971), affirmed per curiam, 405 U.S. 1000 (1972). However, in the mid-1970s this view began to change as the Court invalidated several state regulations affecting advertising of services and products such as abortion providers and pharmaceutical drugs. In Virginia State Board of Pharmacy (1976), the Court struck down a Virginia law that prohibited the advertising of prices for prescription drugs, and held that the First Amendment protects the right to receive information as well as the right to speak. Responding to the claim that advertising bans improved the public image of pharmacists, Justice Blackmun wrote that “an alternative [exists] to this highly paternalistic approach . . . people will perceive their own best interests if only they are well enough informed, and the best means to that end is to open the channels of communication rather than to close them” (425 U.S. 748, at 770). In support of its change in direction, the Court asserted two main arguments: (1) truthful advertising conveys information that consumers need to make informed choices in a free enterprise economy; and (2) such information is indispensable to public decisions about how the economic system should be regulated or governed. In Central Hudson Gas & Electric (1980), the Court refined its approach and laid out a four-prong test for “intermediate” scrutiny of restrictions on commercial speech. First, the message content cannot be misleading and must be concerned with a lawful activity or product. Second, the government’s interest in regulating the speech in question must be substantial.
Third, the regulation must directly and materially advance that interest. Fourth, the regulation must be no more extensive than necessary to achieve its goal. That is, there must be a “reasonable fit” between means and ends, with the means narrowly tailored to achieve the desired objective. Applying the third and fourth prongs, in 44 Liquormart (1996) the Court struck down a Rhode Island law that banned retail price advertising of beverage alcohol. In doing so, the Court made clear that the state’s power to ban alcohol entirely did not include the lesser power to restrict advertising. More recently, in Lorillard Tobacco (2001) the Supreme Court invalidated a state regulation on placement of outdoor and in-store tobacco displays. In summary, Central Hudson requires the use of a “balancing” test to examine censorship of commercial speech. The test weighs the government’s obligations toward freedom of expression against its interest in limiting the content of some advertisements. Reasonable constraints on time, place, and manner are tolerated, and false advertising remains illegal.

This article provides a brief economic history of advertising bans, and uses the basic framework contained in the Central Hudson decision. The first section discusses the economics of advertising and addresses the economic effects that might be expected from regulations that prohibit or restrict advertising. Applying the Central Hudson test, the second section reviews the history and empirical evidence on advertising bans for alcohol beverages. The third section reviews bans of cigarette advertising and discusses the regulatory powers that reside with the Federal Trade Commission as the main government agency with the authority to regulate unfair or deceptive advertising claims.

The Economics of Advertising

Judged by the magnitude of exposures and expenditures, advertising is an economically important activity. A rule of thumb in the advertising industry is that the average American is exposed to more than 1,000 advertising messages every day, but actively notices fewer than 80 ads. According to Advertising Age (http://www.adage.com), advertising expenditures in 2002 in all media totaled $237 billion, including $115 billion in 13 measured media. Ads in newspapers accounted for 19.2% of measured spending, followed by network TV (17.3%), magazines (15.6%), spot TV (14.0%), yellow pages (11.9%), and cable/syndicated TV (11.9%). Internet advertising now accounts for about 5.0% of spending. By product category, automobile producers were the largest advertisers ($16 billion of measured media), followed by retailing ($13.5 billion), movies and media ($6 billion), and food, beverages, and candies ($6 billion). Beverage alcohol producers ranked 17th ($1.7 billion) and tobacco producers ranked 23rd ($284 million). Among the top 100 advertisers, Anheuser-Busch occupied the 38th spot and Altria Group (which includes Philip Morris) ranked 17th. Total advertising expenditures in 2002 were about 2.3% of U.S. gross domestic product (GDP). Ad spending tends to vary directly with general economic activity, as illustrated by spending reductions during the 2000-2001 recession (Wall Street Journal, Aug. 14, 2001; Nov. 28, 2001; Dec. 12, 2001; Apr. 25, 2002). This pro-cyclical feature is contrary to Galbraith’s view that business firms use advertising to control or manage aggregate consumer demand.

National advertising of branded products developed in the early 1900s as increased urbanization and improvements in communication, transportation, and packaging permitted the development of mass markets for branded products (Chandler 1977). In 1900, the advertising-to-GDP ratio was about 3.1% (Simon 1970). The ratio stayed around 3% until 1929, but declined to 2% during the 1930s and has fluctuated around that value since then. The growth of major national industries was associated with increased promotion, although other economic changes often preceded the use of mass media advertising. For example, refrigeration of railroad cars in the late 1870s resulted in national advertising by meat packers in the 1890s (Pope 1983). Around the turn of the century, Sears Roebuck and Montgomery Ward utilized low-cost transportation and mail-order catalogs to develop efficient systems of national distribution of necessities. By 1920 more Americans were living in urban areas than in rural areas. The location of retailers began to change, with a shift first to downtown shopping districts and later to suburban shopping malls. Commercial radio began in 1922, and advertising expenditures grew from $113 million in 1935 to $625 million in 1952. Commercial television was introduced in 1941, but wartime delayed the diffusion of television. By 1954, half of the households in the U.S. had at least one television set. Expenditures on TV advertising grew rapidly from $454 million in 1952 to $2.5 billion in 1965 (Backman 1968). These changes affected the development of markets; for instance, new products could be introduced more rapidly and the available range of products was enhanced (Borden 1942).

Market Failure: Incomplete and Asymmetric Information

Because it is costly to acquire and process, the information held by buyers and sellers is necessarily incomplete and possibly unequal as well. However, full or “perfect” information is one of the analytical requirements for the proper functioning of competitive markets — so what happens when information is imperfect or unequal? Suppose, for example, that firms charge different prices for identical products, but some consumers (tourists) are ignorant of the dispersion of prices available in the marketplace. For many years, this question was largely ignored by economists, but two contributions sparked a revolution in economic thinking. Stigler (1961) showed that because information is costly to acquire, consumer search for lower prices will be less than complete. As a result, a dispersion of prices can persist and the “law of one price” is violated. The dispersion will be less if the product represents a large expenditure (e.g., autos), since more individual search is supported and suppliers have an extra incentive to promote the product. Because information has public good characteristics, imperfect information provides a rationale for government intervention, but profit-seeking firms also have reasons to reduce search costs through advertising and brand names. Akerlof (1970) took the analysis a step further by focusing on material aspects of a product that are known to the seller, but not by potential buyers. In Akerlof’s “lemons model,” the seller of a used car has private knowledge of defects, but potential buyers have difficulty distinguishing between good used cars (“creampuffs”) and bad used cars (“lemons”). Under these circumstances, Akerlof showed that a market may not exist or only lower-quality products are offered for sale. Hence, asymmetric information can result in market failure, but a reputation for quality can reduce the uncertainty that consumers face due to hidden defects (Akerlof 1970; Richardson 2000; Stigler 1961).

Under some conditions, branding and advertising of products, including targeting of customer groups, can help reduce market imperfections. Because advertising has several purposes or functions, there is always uncertainty regarding its effects. First, advertising may help inform consumers of the existence of products and brands, better inform them about price and quality dimensions, or better match customers and brands (Nelson 1975). Indeed, the basic message in many advertisements is simply that the brand is available. Consumer valuations can reflect a joint product, which is the product itself and the information about it. However, advertising tends to focus on only the positive aspects of a product, and ignores the negatives. In various ways, advertisers sometimes inform consumers that their brand is “less bad” (Calfee 1997b). An advertisement that announces a particular automobile is more crash resistant also is a reminder that all cars are less than perfectly safe. Second, persuasive or “combative” advertising can serve to differentiate one firm’s brand from those of its rivals. As a consequence, a successful advertiser may gain some discretion over the price it charges (“market power”). Furthermore, reactions by rivals may drive industry advertising to excessive levels or beyond the point where net social benefits of advertising are maximized. In other words, excessive advertising may result from the inability of each firm to reduce advertising without similar reductions by its rivals. Because it illustrates a breakdown of desirable coordination, this outcome is an example of the “prisoners’ dilemma game.” Third, the costs of advertising and promotion by existing or incumbent firms can make it more difficult for new firms to enter a market and compete successfully due to an advertising-cost barrier to entry. Investments in customer loyalty or intangible brand equity are largely sunk costs. 
Smaller incumbents also may be at a disadvantage relative to their larger rivals, and consequently face a “barrier to mobility” within the industry. However, banning advertising can have much the same effect by making it more difficult for smaller firms and entrants to inform customers of the existence of their brands and products. For example, Russian cigarette producers successfully lobbied for a ban on television advertising, which handicapped their new western rivals. Given multiple effects, systematic empirical evidence is needed to help resolve the uncertainties regarding the effects of advertising (Bagwell 2005).
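The “prisoners’ dilemma” logic described above can be made concrete with a stylized two-firm game. The payoff numbers below are purely hypothetical, chosen only to illustrate the structure: advertising is each firm’s dominant strategy, yet both firms would earn more if neither advertised.

```python
# Hypothetical profits for a symmetric two-firm advertising game.
# Key: (firm A action, firm B action) -> (A profit, B profit)
payoffs = {
    ("ad", "ad"):       (40, 40),
    ("ad", "no_ad"):    (70, 20),
    ("no_ad", "ad"):    (20, 70),
    ("no_ad", "no_ad"): (60, 60),
}

def best_reply(rival_action):
    # Firm A's profit-maximizing action given B's action (game is symmetric).
    return max(["ad", "no_ad"], key=lambda a: payoffs[(a, rival_action)][0])

# Whatever the rival does, advertising pays more, so both firms advertise.
print(best_reply("ad"), best_reply("no_ad"))
```

In equilibrium both firms advertise and earn 40 each, even though 60 each was available with mutual restraint; this is the sense in which industry advertising can be “excessive” absent coordination.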

Substantial empirical evidence demonstrates that advertising of prices increases competition and lowers both the average market price and the variance of prices. Conversely, banning price advertising can have the opposite effect, although consumers might derive information from other sources, such as direct observation and word-of-mouth, or firms can compete more on quality (Kwoka 1984). Bans of price advertising also affect product quality indirectly by making it difficult to inform consumers of price-quality tradeoffs. Products for which empirical evidence demonstrates that advertising reduces the average price include toys, drugs, eyeglasses, optometric services, gasoline, and grocery products. Thus, for relatively homogeneous goods, banning price advertising is expected to increase average prices and make entry more difficult. A partial offset occurs where advertising costs are themselves large enough to raise product prices, since a ban removes those costs.

The effects of a ban of persuasive advertising also are uncertain. In a differentiated product industry, it is possible that advertising expenditures are so large that an advertising ban reduces costs and product prices, thereby offsetting or defeating the purpose of the ban. For products that are well known to consumers (“mature” products), the presumption is that advertising primarily affects brand shares and has little impact on primary demand (Dekimpe and Hanssens 1995; Scherer and Ross 1990). Advertising bans tend to solidify market shares. Furthermore, most advertising bans are less than complete, such as the ban of broadcast advertising of cigarettes. Producers can substitute other media or use other forms of promotion, such as discount coupons, articles of apparel, and event sponsorship. Thus, government limitations on commercial speech for one product or medium often lead to additional efforts to limit other promotions. This “slippery slope” effect is illustrated by the Federal Communications Commission’s fairness doctrine for advertising of cigarettes (discussed below).

The Industry Advertising-Sales Response Function

The effect of a given ban on market demand depends importantly on the nature of the relationship between advertising expenditures and aggregate sales. This relationship is referred to as the industry advertising-sales response function. Two questions regarding this function have been debated. First, it is not clear that a well-defined function exists at the industry level, since persuasive advertising primarily affects brand shares. The issue is the spillover, if any, from brand advertising to aggregate (primary) market demand. Two studies of successful brand advertising in the alcohol industry failed to reveal a spillover effect on market demand (Gius 1996; Nelson 2001). Second, if an industry-level response function exists, it should be subject to diminishing marginal returns, but it is unclear where diminishing returns begin (the inflection point) or the magnitude of this effect. Some analysts argue that diminishing returns only begin at high levels of industry advertising, and sharply increasing returns exist at moderate to low levels (Saffer 1993). According to this view, comprehensive bans of advertising will reduce market demand importantly. However, this argument is at odds with empirical evidence for a variety of mature products, which demonstrates diminishing returns over a broad range of outlays (Assmus et al. 1984; Tellis 2004). Simon and Arndt (1980) found that diminishing returns began immediately for a majority of 100-plus products. Furthermore, average advertising elasticities for most mature products are only about 0.1 in magnitude (Sethuraman and Tellis 1991). As a result, limited bans of advertising will not reduce sales of mature products or the effect is likely to be extremely small in magnitude. It is unlikely that elasticities this small could support the third prong of the Central Hudson test.
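To see why elasticities of this size matter, consider the arithmetic of a small change in industry advertising. The sketch below simply restates the first-order elasticity approximation; the 0.1 figure is the typical mature-product average cited above (Sethuraman and Tellis 1991), and the function name is ours.

```python
def approx_sales_change(elasticity, ad_change_pct):
    """Approximate % change in sales from a % change in advertising,
    using the first-order relation dS/S ~= elasticity * dA/A."""
    return elasticity * ad_change_pct

# With an advertising elasticity of roughly 0.1, a 10% cut in industry
# advertising implies only about a 1% fall in sales.
print(approx_sales_change(0.1, -10.0))
```

Even a far larger cut in advertising would, on this approximation, move sales by only a few percent, which is why limited bans are unlikely to satisfy the “directly and materially advance” prong.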

Suppose that advertising for a particular product convinces some consumers to use Brand X, and this results in more sales of the brand at a higher price. Are consumers better or worse off as a consequence? A shift in consumer preferences toward a fortified brand of breakfast cereal might be described as either a “shift in tastes,” an increase in demand for nutrition, or an increase in joint demand for the cereal and information. Because it concerns individual utility, it is not clear whether a “shift in tastes” reduces or increases consumer satisfaction. Social commentators usually respond that consumers just think they are better off or the demand effect is spurious in nature. Much of the social criticism of advertising is concerned with its pernicious effect on consumer beliefs, tastes, and desires. Vance Packard’s The Hidden Persuaders (1957) was an early, but possibly misguided, effort along these lines (Rogers 1992). Packard wrote that advertisers can “channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences.” Of course, once a “hidden secret” is revealed, such manipulation is less effective in the marketplace for products due to cynicism toward advertisers or outright rejection of the advertising claims.

Dixit and Norman (1978) argued that because profit-maximizing firms tend to over-advertise, small decreases in advertising will raise consumer welfare. In their analysis, this result holds regardless of the change in tastes or what product features are being advertised. Becker and Murphy (1993) responded that advertising is usually a complement to products, so it is unclear that equilibrium prices will always be higher as advertising increases. Further, it does not follow that social welfare is higher without any advertising. Targeting by advertisers also helps to increase the efficiency of advertising and reduces the tendency to waste advertising dollars on uninterested consumers through redundant ads. Nevertheless, this common practice also is criticized by social commentators and regulatory agencies. In summary, the evaluation of advertising bans requires empirical evidence. Much of the evidence on advertising bans is econometric and most of it concerns two products, alcohol beverages and cigarettes.

Advertising Bans: Beverage Alcohol

In an interesting way, the history of alcohol consumption follows the laws of supply and demand. The consumption of ethyl alcohol as a beverage began some 10,000 years ago. Due to the uncertainties of contaminated water supplies in the West, alcohol is believed to have been the most popular and safe daily beverage for centuries (Valle 1998). In the East, boiled water in the form of teas solved the problem of potable beverages. Throughout the Middle Ages, beer and ale were drunk by common folk and wine by the affluent. Following the decline of the Roman Empire, the Catholic Church entered the profitable production of wines. Distillation of alcohol was developed in the Arab world in 700 A.D. and gradually spread to Europe, where distilled spirits were used ineffectively as a cure for plague in the 14th century. During the 17th century, several non-alcohol beverages became popular, including coffee, tea, and cocoa. In the late eighteenth century, religious sentiment turned against alcohol and temperance activity figured prominently in the concerns of the Baptist, Friends, Methodist, Mormon, Presbyterian, and Unitarian churches. It was not until the late nineteenth century that filtration and treatment made safe drinking water supplies more widely available.

During the colonial period, retail alcohol sellers were licensed by states, local courts, or town councils (Byse 1940). Some colonies fixed the number of licenses or bonded the retailer. Fixing of maximum prices by legislatures and the courts encouraged adulteration and misbranding by retailers. In 1829, the state of Maine passed the first local option law and in 1844, the territory of Oregon enacted a general prohibition law. Experimentation with statewide monopoly of the retail sale of alcohol began in 1893 in South Carolina. As early as 1897, federal regulation of labeling was enacted through the Bottling in Bond Act. Following the repeal of Prohibition in 1933, the Federal Alcohol Control Administration was created by executive order (O’Neill 1940). The Administration immediately set about creating “fair trade codes” that governed false and misleading advertising, unfair trade practices, and prices that were “oppressively high or destructively low.” These codes discouraged price and advertising competition, and encouraged shipping expansion by the major midwestern brewers (McGahan 1991). The Administration ceased to function in 1935 when the National Industrial Recovery Act was declared unconstitutional. The passage of the Federal Alcohol Administration Act in 1935 created the Federal Alcohol Administration (FAA) within the Treasury Department, which regulated trade practices and enforced the producer permit system required by the Act. In 1939, the FAA was abolished and its duties were transferred to the Alcohol Tax Unit of the Internal Revenue Service (later named the Bureau of Alcohol, Tobacco, and Firearms). The ATF presently administers a broad range of provisions regarding the formulation, labeling, and advertising of alcohol beverages.

Alcohol Advertising: Analytical Methods

Three types of econometric studies examine the effects of advertising on the market demand for beverage alcohol. First, time-series studies examine the relationship between alcohol consumption and annual or quarterly advertising expenditures. Recent examples of such studies include Calfee and Scheraga (1994), Coulson et al. (2001), Duffy (1995, 2001), Lariviere et al. (2000), Lee and Tremblay (1992), and Nelson (1999). All of these studies find that advertising has no effect on total alcohol consumption and small or nonexistent effects on beverage demand (Nelson 2001). This result is not affected by disaggregating advertising to account for different effects by media (Nelson 1999). Second, cross-sectional and panel studies examine the relationship between alcohol consumption and state regulations, such as state bans of billboards. Panel studies combine cross-sectional (e.g., all 50 states) and time-series information (50 states for the period 1980-2000), which increases the variation available in the data. Third, cross-national studies examine the relationship between alcohol consumption and advertising bans for a panel of countries. This essay discusses results obtained in the second and third types of studies.
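As a minimal illustration of the second approach, regressing consumption on a constant and a ban dummy reduces to comparing mean consumption between ban and no-ban states. The numbers below are entirely synthetic; actual studies also control for prices, income, demographics, and state and year effects in a panel framework.

```python
# Hypothetical per capita ethanol consumption (gallons) in two groups of states.
ban_states = [2.4, 2.5, 2.3, 2.4]     # states with a billboard ban (synthetic)
no_ban_states = [2.3, 2.2, 2.4, 2.3]  # states without a ban (synthetic)

def mean(xs):
    return sum(xs) / len(xs)

# With only an intercept and a ban dummy, the OLS coefficient on the dummy
# equals the difference in group means.
ban_effect = mean(ban_states) - mean(no_ban_states)
print(round(ban_effect, 2))
```

In this synthetic example the raw difference is positive, but without the additional controls such a comparison would be badly confounded, which is why the published studies rely on multivariate panel estimates rather than simple group means.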

Background: State Regulation of Billboard Advertising

In the United States, the distribution and retail sale of alcohol beverages is regulated by the individual states. The Twenty-First Amendment, passed in 1933, repealed Prohibition and granted the states legal powers over the sale of alcohol, thereby resolving the conflicting interests of “wets” and “drys” (Goff and Anderson 1994; Munger and Schaller 1997; Shipman 1940; Strumpf and Oberholzer-Gee 2000). As a result, alcohol laws vary importantly by state, and these differences represent a natural experiment with regard to the economic effects of regulation. Long-standing differences in state laws potentially affect the organization of the industry and alcohol demand, reflecting incentives that alter or shape individual behaviors. State laws also differ by beverage, suggesting that substitution among beverages is one possible consequence of regulation. For example, state laws for distilled spirits typically are more stringent than similar laws applied to beer and wine. While each state has adopted its own unique regulatory system, several broad categories can be identified. Following repeal, eighteen states adopted public monopoly control of the distribution of distilled spirits. Thirteen of these states operate off-premise retail stores for the sale of spirits, and two states also control retail sales of table wine. In five states, only the wholesale distribution of distilled spirits is controlled. No state has monopolized beer sales, but laws in three states provide for restrictions on private beer sales by alcohol content. In the private license states, an Alcohol Beverage Control (ABC) agency determines the number and type of retail licenses, subject to local wet-dry options. Because monopoly states have broad authority to restrict the marketing of alcohol, the presumption is that total alcohol consumption will be lower in the control states compared to the license states. 
Monopoly control also raises search costs by restricting outlet numbers, hours of operation, and product variety. Because beer and wine are substitutes or complements for spirits, state monopoly control can increase or decrease total alcohol use, or the net effect may be zero (Benson et al. 1997; Nelson 1990, 2003a).

A second broad experiment includes state regulations banning advertising of alcohol beverages or which restrict the advertising of prices. Following repeal, fourteen states banned billboard advertising of distilled spirits, including seven of the license states. Because the bans have been in existence for many years and change infrequently over time, these regulations provide evidence on the long-term effectiveness of advertising bans. It is often argued that billboards have an important effect on youth behaviors, and this belief has been a basis for municipal ordinances banning billboard advertising of tobacco and alcohol. Given long-standing bans, it might be expected that youth alcohol behaviors will show up as cross-state differences in adult per capita consumption. Indeed, these two variables are highly correlated (Cook and Moore 2000, 2001). Further, fifteen states banned price advertising by retailers using billboards, newspapers, and visible store displays. In general, a ban of price advertising reduces retail competition and increases search costs of consumers. However, these regulations were not intended to advance temperance, but rather were anti-competitive measures obtained by alcohol retailers (McGahan 1995). For example, in 44 Liquormart (1996) the lower court noted that Rhode Island’s ban of price advertising was designed to protect smaller retailers from in-state and out-of-state competition, and was closely monitored by the liquor retailers association. A price advertising ban could reduce alcohol consumption by elevating full prices (search costs plus monetary prices). Because many states banned only price advertising of spirits, substitution among beverages also is a possible outcome.

Table 1 illustrates historical changes since 1935 in alcohol consumption in the United States and three individual states, together with nominal and real U.S. advertising expenditures. After peaking in the early 1980s, per capita alcohol consumption is now at roughly the level experienced in the early 1960s. Nationally, the decline in alcohol consumption from 1980 to 2000 was 21.0%. This decline occurred despite continued high levels of advertising and promotion. At the state level, the percentage changes in consumption are Illinois, -25.3%; Ohio, -15.5%; and Pennsylvania, -20.5%. Pennsylvania is a state monopoly for spirits and wines and, prior to 1997, also banned price advertising of alcohol, including beer. Nevertheless, the change in per capita consumption in Pennsylvania parallels what has occurred nationally.
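
The 1980-2000 percentage declines quoted here can be reproduced directly from the Table 1 entries; the short Python check below does so (the dictionary layout is purely illustrative):

```python
# Percent change in per capita ethanol consumption, 1980 to 2000,
# using the (1980, 2000) pairs from Table 1 (gallons per capita, ages 14+).
consumption = {
    "Illinois":     (3.00, 2.24),
    "Ohio":         (2.33, 1.97),
    "Pennsylvania": (2.39, 1.90),
    "U.S.":         (2.76, 2.18),
}

for region, (v1980, v2000) in consumption.items():
    pct = 100.0 * (v2000 - v1980) / v1980
    print(f"{region}: {pct:+.1f}%")
```

Running this reproduces the figures in the text: -25.3% (Illinois), -15.5% (Ohio), -20.5% (Pennsylvania), and -21.0% (U.S.).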

Econometric Results: State-Level Studies of Billboard Bans

Seven econometric studies estimate the relationship between state billboard bans and alcohol consumption: Hoadley et al. (1984), Nelson (1990, 2003a), Ornstein and Hanssens (1985), Schweitzer et al. (1983), and Wilkinson (1985, 1987). Two studies used a single year, but the other five employed panel data covering five to twenty-five years. Two studies estimated demand functions for beer or distilled spirits only, which ignores substitution. None of the studies obtained a statistically significant reduction in total alcohol consumption due to bans of billboards. In several studies, billboard bans increased spirits consumption significantly. A positive effect of a ban is contrary to general expectations, but consistent with various forms of substitution. The study by Nelson (2003a) covered 45 states for the time period 1982-1997. In contrast to earlier studies, Nelson (2003a) focused on substitution among alcohol beverages and the resulting net effect on total ethanol consumption. Several subsamples were examined, including all 45 states, ABC-license states, and two time periods, 1982-1988 and 1989-1997. A number of other variables also were considered, including prices, income, tourism, age demographics, and the minimum drinking age. During both time periods, state billboard bans increased consumption of wine and spirits, and reduced consumption of beer. The net effect on total ethanol consumption was significantly positive during 1982-1988, and insignificant thereafter. During both time periods, bans of price advertising of spirits were associated with lower consumption of spirits, higher consumption of beer, and no effect on wine or total alcohol consumption. The results in this study demonstrate that advertising regulations have different effects by beverage, indicating the importance of substitution. Public policy statements that suggest that limited bans have a singular effect are ignoring market realities. The empirical results in Nelson (2003a) and other studies are consistent with the historic use of billboard bans as a device to suppress competition, with little or no effect on temperance.

Econometric Results: Cross-National Studies of Broadcast Bans

Many Western nations have restrictions on radio and television advertising of alcohol beverages, especially distilled spirits. These controls range from time-of-day restrictions and content guidelines to outright bans of broadcast advertising of all alcohol beverages. Until quite recently, the trend in most countries has been toward stricter rather than more lenient controls. Following repeal, U.S. producers of distilled spirits adopted a voluntary Code of Good Practice that barred radio advertising after 1936 and television advertising after 1948. When this voluntary agreement ended in late 1996, cable television stations began carrying ads for distilled spirits, although the major TV networks continued to refuse such commercials. Voluntary or self-regulatory codes also have existed in a number of other countries, including Australia, Belgium, Germany, Italy, and the Netherlands. By the end of the 1980s, a number of countries had banned broadcast advertising of spirits, including Austria, Canada, Denmark, Finland, France, Ireland, Norway, Spain, Sweden, and the United Kingdom (Brewers Association of Canada 1997).

Table 1
Advertising and Alcohol Consumption (gallons of ethanol per capita, 14+ yrs)

         Illinois      Ohio    Pennsylvania    U.S.    Alcohol Ads   Real Ads    Real Ads    Percent
Year   (gal. p.c.) (gal. p.c.) (gal. p.c.) (gal. p.c.)  (mil. $)   (mil. 96$)  per capita  Broadcast
1935        --          --          --         1.20        --          --          --          --
1940        --          --          --         1.56        --          --          --          --
1945        --          --          --         2.25        --          --          --          --
1950        --          --          --         2.04        --          --          --          --
1955        --          --          --         2.00        --          --          --          --
1960        --          --          --         2.07        --          --          --          --
1965        --          --          --         2.27      242.2      1018.5        7.50        38.7
1970 2.82 2.22 2.28 2.52 278.4 958.0 6.41 34.7
1975 2.99 2.21 2.35 2.69 395.6 979.9 5.99 44.0
1980 3.00 2.33 2.39 2.76 906.9 1580.5 8.83 55.1
1981 2.91 2.25 2.37 2.76 1014.9 1618.7 8.91 56.6
1982 2.83 2.28 2.36 2.72 1108.7 1667.0 9.07 58.1
1983 2.80 2.22 2.29 2.69 1182.9 1708.4 9.18 62.0
1984 2.77 2.26 2.25 2.65 1284.4 1788.9 9.50 66.0
1985 2.72 2.20 2.22 2.62 1293.0 1746.1 9.16 68.2
1986 2.68 2.17 2.23 2.58 1400.2 1850.6 9.61 73.5
1987 2.66 2.17 2.20 2.54 1374.7 1766.1 9.09 73.5
1988 2.64 2.11 2.11 2.48 1319.4 1639.8 8.37 74.4
1989 2.56 2.07 2.10 2.42 1200.4 1436.6 7.27 68.2
1990 2.62 2.09 2.15 2.45 1050.4 1209.7 6.10 64.8
1991 2.48 2.03 2.05 2.30 1119.5 1247.2 6.22 66.4
1992 2.43 1.98 1.99 2.30 1074.7 1172.0 5.78 68.5
1993 2.38 1.95 1.96 2.23 970.7 1030.9 5.04 70.4
1994 2.35 1.85 1.93 2.18 1000.9 1041.1 5.03 69.4
1995 2.29 1.90 1.86 2.15 1027.5 1046.4 5.00 68.2
1996 2.30 1.93 1.86 2.16 1008.8 1008.8 4.77 68.5
1997 2.26 1.91 1.84 2.14 1087.0 1069.2 5.01 66.5
1998 2.25 1.97 1.86 2.14 1187.6 1154.6 5.36 66.3
1999 2.27 2.00 1.87 2.16 1242.2 1189.5 5.45 64.2
2000 2.24 1.97 1.90 2.18 1422.6 1330.8 5.89 62.8

Sources: 1965-70 ad data from Adams-Jobson Handbooks; 1975-91 data from Impact; and 1992-2000 data from LNA/Competitive Media. Nominal data deflated by the GDP implicit price deflator (1996 = 100). Alcohol data from National Institute on Alcohol Abuse and Alcoholism, U.S. Apparent Consumption of Alcoholic Beverages (1997) and 2003 supplement. Real advertising per capita is for ages 14+ based on NIAAA and author’s population estimates.

The possible effects of broadcast bans are examined in four studies: Nelson and Young (2001), Saffer (1991), Saffer and Dave (2002), and Young (1993). Because alcohol behavior or “cultural sentiment” varies by country, it is important that the social setting be considered. In particular, alcohol consumption in the wine-drinking countries of France, Italy, Luxembourg, Portugal, and Spain is about one-third greater than the average (Nelson and Young 2001). Further, 20 to 25% of consumption in the Scandinavian countries is systematically under-reported due to cross-border purchases, smuggling, and home production. In contrast to other studies, Nelson and Young (2001) accounted for these differences. The study examined alcohol demand and related behaviors in a sample of 17 OECD countries (western Europe, Canada, and the U.S.) for the period 1977 to 1995. Control variables included prices, income, tourism, age demographics, unemployment, and drinking sentiment. The results indicated that bans of broadcast advertising of spirits did not decrease per capita alcohol consumption. During the sample period, five countries (Denmark, Finland, France, Norway, Sweden) adopted broadcast bans covering all alcohol beverage advertisements apart from light beer; the regression estimates for these complete bans were positive but insignificant. Thus, countries that banned broadcast advertising experienced no reduction in alcohol consumption relative to countries that did not. For the U.S., the cross-country results are consistent with studies of successful brands, studies of billboard bans, and studies of advertising expenditures (Nelson 2001). The results are inconsistent with an advertising-response function with a well-defined inflection point.

Advertising Bans: Cigarettes

Prior to 1920, consumption of tobacco in the U.S. was mainly in the form of cigars, pipe tobacco, chewing tobacco, and snuff. It was not until 1923 that cigarette consumption by weight surpassed that of cigars (Forey et al. 2002). Several early developments contributed to the rise of the cigarette (Borden 1942). First, the Bonsack cigarette-making machine was patented in 1880 and put to commercial use in 1884 by James Duke. Second, the federal excise tax on cigarettes, instituted to help pay for the Civil War, was reduced in 1883 from $1.75 to 50 cents per thousand pieces. Third, during World War I, cigarette consumption by soldiers was encouraged by ease of use and low cost. Fourth, the taboo against public smoking by women began to wane, although participation by women remained substantially below that of men; by 1935, about 50% of men smoked compared to only 20% of women. Fifth, advertising has been credited with expanding the market for lighter blends of tobacco, although evidence in support of this claim is lacking (Tennant 1950). Some early advertising claims were linked to health, such as a 1928 ad for Lucky Strike that emphasized, “No Throat Irritation — No Cough.” During this time, the FTC banned numerous health claims by de-nicotine products and devices, e.g., 10 FTC 465 (1925).

Cigarette advertising has been especially controversial since the early 1950s, reflecting known health risks associated with smoking and the belief that advertising is a causal factor in smoking behaviors. Warning labels on cigarette packages were first proposed in 1955, following new health reports by the American Cancer Society, the British Medical Research Council, and Reader’s Digest (1952). Regulation of cigarette advertising and marketing, especially by the FTC, increased over the years to include content restrictions (1942, 1950-52); advertising guidelines (1955, 1960, 1966); package warning labels (1965, 1970, 1984); product testing and labeling (1967, 1970); public reporting on advertising trends (1964, 1967, 1981); warning messages in advertisements (1970); and advertising bans (1971, 1998). The history of these regulations is discussed below.

Background: Cigarette Prohibition and Early Health Reports

Beginning in the seventeenth century, several of the northern colonies and cities restricted public smoking. In 1638, the Plymouth colony passed a law forbidding smoking in the streets and, in 1798, Boston banned the carrying of a lighted pipe or cigar in public. Beginning around 1850, a number of anti-tobacco groups were formed (U.S. Surgeon General 2000), including the American Anti-Tobacco Society (1849), the American Health and Temperance Association (1878), the Department of Narcotics of the Women’s Christian Temperance Union (1883), the Anti-Cigarette League (1899), and the Non-Smokers Protective League (1911). The WCTU was a force behind the cigarette prohibition movement in Canada and the U.S. During the Progressive Era, fifteen states passed laws prohibiting the sale of cigarettes to adults and another twenty-one states considered such laws (Alston et al. 2002). North Dakota and Iowa were the first states to adopt such bans, in 1896 and 1897, respectively. In West Virginia, cigarettes were taxed so heavily that they were de facto prohibited. In 1920, Lucy Page Gaston of the WCTU made a bid for the Republican nomination for president on an anti-tobacco platform. However, the movement waned as the laws were largely unenforceable, and by 1928 cigarettes were again legal for sale to adults in every state.

As the popularity of cigarette smoking spread, so too did concerns about its health consequences. As a result, the hazards of smoking have long been common knowledge. A number of physicians took early notice of a tobacco-cancer relationship in their patients. In 1912, Isaac Adler published a book on lung cancer that implicated smoking. In 1928, adverse health effects of smoking were reported in the New England Journal of Medicine. A Scientific American report in 1933 tentatively linked cigarette “tars” to lung cancer. Writing in Science in 1938, Raymond Pearl of Johns Hopkins University demonstrated a statistical relationship between smoking and longevity (Pearl 1938). The addictive properties of nicotine were reported in 1942 in the British medical journal, The Lancet. These and other reports attracted little attention from the popular press, although Reader’s Digest (1924, 1941) was an early crusader against smoking. In 1950, three classic scientific papers appeared that linked smoking and lung cancer. Shortly thereafter, major prospective studies began to appear in 1953-54. At this time, the research findings were more widely reported in the popular press (e.g., Time 1953). In 1957, the Public Health Service accepted a causal relationship between smoking and lung cancer (Burney 1959; Joint Report 1957). Between 1950 and 1963, researchers published more than 3,000 articles on the health effects of smoking.

Cigarette Advertising: Analytical Methods

Given the rising concern about the health effects of smoking, it is not surprising that cigarette advertising came under fire. The ability of advertising to stimulate primary demand is taken for granted by public health officials, in whose eyes cigarette advertising is inherently deceptive. The econometric evidence is much less clear. Three methods are used to assess the relationship between cigarette consumption and advertising. First, time-series studies examine the relationship between cigarette consumption and annual or quarterly advertising expenditures. These studies have been reviewed several times, including comprehensive surveys by Cameron (1998), Duffy (1996), Lancaster and Lancaster (2003), and Simonich (1991). Most time-series studies find little or no effect of advertising on primary demand for cigarettes. For example, Duffy (1996) concluded that “advertising restrictions (including bans) have had little or no effect upon aggregate consumption of cigarettes.” A meta-analysis by Andrews and Franke (1991) found that the average elasticity of cigarette consumption with respect to advertising expenditure was only 0.142 during 1964-1970, and declined to -0.007 thereafter. Second, cross-national studies examine the relationship between per capita cigarette consumption and advertising bans for a panel of countries. Third, several time-series studies examine the effects of health scares and the 1971 ban of broadcast advertising. This essay discusses results obtained in the second and third types of econometric studies.
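
To see what elasticities of this magnitude imply, a constant-elasticity approximation can be sketched in a few lines of Python (the function name is illustrative, and the linear percent approximation holds only for small changes):

```python
# Constant-elasticity reading of the meta-analysis averages cited above:
# percent change in consumption ~= elasticity * percent change in advertising.
def demand_response(elasticity: float, pct_change_ads: float) -> float:
    """Approximate percent change in cigarette consumption implied by a
    given percent change in advertising expenditure."""
    return elasticity * pct_change_ads

# Andrews and Franke (1991) averages: 0.142 for 1964-1970, -0.007 thereafter.
print(demand_response(0.142, 10.0))   # a 10% rise in advertising: about +1.4%
print(demand_response(-0.007, 10.0))  # later period: about -0.07%, essentially zero
```

Even the larger 1964-1970 estimate implies that a 10% increase in advertising raises consumption by well under 2%, which is why most reviewers characterize the primary-demand effect as small.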

Econometric Results: Cross-National Studies of Broadcast Bans

Systematic tests of the effect of advertising bans are provided by four cross-national panel studies that examine annual per capita cigarette consumption among OECD countries: Laugesen and Meads (1991); Stewart (1993); Saffer and Chaloupka (2000); and Nelson (2003b). Results in the first three studies are less than convincing for several reasons. First, advertising bans might be endogenously determined together with cigarette consumption, but earlier studies treated advertising bans as exogenous. In order to avoid the potential bias associated with endogenous regressors, Nelson (2003b) estimated a structural equation for the enabling legislation that restricts advertising. Second, annual data on cigarette consumption contain pronounced negative trends, and the data series in levels are unlikely to be stationary. Nelson (2003b) tested for unit roots and used consumption growth rates (log first-differences) to obtain stationary data series for a sample of 20 OECD countries. Third, the study also tested for structural change in the smoking-advertising relationship. The motivation was based on the following observations: by the mid-1960s the risks associated with smoking were well known and cigarette consumption began to decline in most countries. For example, per capita consumption in the United States rose to an all-time high in 1963 and then declined modestly until about 1978. Between 1978 and 1995, cigarette consumption in the U.S. declined on average by 2.44% per year. Further, the decline in consumption was accompanied by reductions in smoking prevalence. In the U.S., male smoking prevalence declined from 52% of the population in 1965 to 33% in 1985 and 27% in 1995 (Forey et al. 2002). Smoking also is increasingly concentrated among individuals with lower incomes or lower levels of education (U.S. Public Health Service 1994). Changes in prevalence suggest that the sample of smokers will not be homogeneous over time, which implies that empirical estimates may not be robust across different time periods.
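
The growth-rate transformation used to deal with nonstationary levels can be sketched as follows; the level series below is purely illustrative, not actual OECD data:

```python
import math

# Log first-differences convert a trending level series into growth rates,
# the stationarity transformation described for Nelson (2003b).
# Illustrative values only (e.g., cigarettes per capita over five years).
levels = [2000.0, 1950.0, 1880.0, 1840.0, 1790.0]

growth = [math.log(curr) - math.log(prev) for prev, curr in zip(levels, levels[1:])]

# Each log-difference approximates the period's proportional change:
for g in growth:
    print(f"{100.0 * g:+.2f}%")
```

For small changes, log(x_t) - log(x_{t-1}) is close to the percent change divided by 100, so a steadily declining series becomes a roughly level series of negative growth rates suitable for regression.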

Nelson (2003b) focused on total cigarettes, defined as the sum of manufactured and hand-rolled cigarettes for 1970-1995. Data on cigarette and tobacco consumption were obtained from International Smoking Statistics (Forey et al. 2002). This comprehensive source includes estimates of sales in OECD countries for manufactured cigarettes, hand-rolled cigarettes, and total consumption by weight of all tobacco products. The data series begin around 1948 and extend to 1995. Regulatory information on advertising bans and health warnings was obtained from Health New Zealand’s International Tobacco Control Database and the World Health Organization’s International Digest of Health Legislation. For each country and year, HNZ reports the media in which cigarette advertising is banned. Nine media are covered, including television, radio, cinema, outdoor, newspapers, magazines, shop ads, sponsorships, and indirect advertising such as brand names on non-tobacco products. Based on these data, three dummy variables were defined: TV-RADIO (= 1 if only television and radio are banned, zero otherwise); MODERATE (= 1 if 3 or 4 media are banned); and STRONG (= 1 if 5 or more media are banned). On average, 4 to 5 media were banned in the 1990s compared to only 1 or 2 in the 1970s. Except for Austria, Japan and Spain, all OECD countries by 1995 had enacted moderate or strong bans of cigarette advertising. In 1995, there were 9 countries in the strong category compared to 5 in 1990, 4 in 1985, and only 3 countries in 1980 and earlier. Additional control variables in the study included prices, income, warning labels, unemployment rates, percent filter cigarettes, and demographics.
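
The coding rule for the three ban dummies can be sketched as follows; the function name and set-of-media representation are illustrative assumptions, not the study's actual data construction:

```python
# Coding rule for the TV-RADIO, MODERATE, and STRONG dummies described above.
def ban_dummies(banned_media):
    """Return the three ban indicators for a country-year, given the set
    of media in which cigarette advertising is banned (out of nine media)."""
    n = len(banned_media)
    return {
        "TV_RADIO": int(set(banned_media) == {"television", "radio"}),  # only TV and radio
        "MODERATE": int(n in (3, 4)),   # exactly 3 or 4 media banned
        "STRONG":   int(n >= 5),        # 5 or more media banned
    }

print(ban_dummies({"television", "radio"}))
print(ban_dummies({"television", "radio", "cinema", "outdoor"}))
print(ban_dummies({"television", "radio", "cinema", "outdoor", "magazines", "sponsorships"}))
```

Under this rule the three categories are mutually exclusive, so each country-year falls into at most one of them, with the omitted category (fewer than three media banned, other than the TV-radio pair) serving as the regression baseline.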

The results in Nelson (2003b) indicate that cigarette consumption is determined importantly by prices, income, and exogenous country-specific factors. The dummy variables for advertising bans were never significantly negative. The income elasticity was significantly positive and the price elasticity was significantly negative; the price elasticity estimate of -0.39 is nearly identical to the consensus estimate of -0.4 for aggregate data (Chaloupka and Warner 2000). Beginning about 1985, the decline in smoking prevalence resulted in a shift in price and income elasticities. There also was a change in the political climate favoring additional restrictions on advertising, but these restrictions followed rather than caused the reductions in smoking and smoking prevalence, which is “reverse causality.” Thus, advertising bans had no demonstrated influence on cigarette demand in the OECD countries, including the U.S. The advertising-response model that motivates past studies is not supported by these results. The data and estimation procedures used in the three previous studies are picking up the substantial declines in consumption that began in the late 1970s, declines that were unrelated to major changes in advertising restrictions.

Background: Regulation of Cigarettes by the Federal Trade Commission

At the urging of President Wilson, the Federal Trade Commission (FTC) was created by Congress in 1914. The Commission was given the broad mandate to prevent “unfair methods of competition.” From the very beginning, this mandate was interpreted to include false and deceptive advertising, even though advertising per se was not an antitrust issue. Indeed, the first cease-and-desist order issued by the FTC concerned false advertising, 1 FTC 13 (1916). It was the age of the patent medicines and health-claims devices. As early as 1925, FTC orders against false and misleading advertising constituted 75 percent of all orders issued each year. However, in Raladam (1931) the Supreme Court held that false advertising could be prevented only in situations where injury to a competitor could be demonstrated. The Wheeler-Lea Act of 1938 added a prohibition of “unfair or deceptive acts or practices” in or affecting commerce. This amendment broadened Section 5 of the FTC Act to include consumer interests as well as business concerns. The FTC could thereafter proceed against unfair and deceptive methods without regard to alleged effects on competitors.

As an independent regulatory agency, the FTC has rulemaking and adjudicatory authorities (Fritschler and Hoefler 1996). Its rulemaking powers are quasi-legislative, including the authority to hold hearings and trade practice conferences, subpoena witnesses, conduct investigations, and issue industry guidelines and proposals for legislation. Its adjudicatory powers are quasi-judicial, including the authority to issue cease-and-desist orders, consent decrees, injunctions, trade regulation rules, affirmative disclosure and substantiation orders, corrective advertising orders, and advisory opinions. Administrative complaints are adjudicated before an administrative law judge in trial-like proceedings. Rulemaking by the FTC is characterized by broad applicability to all firms in an industry, whereas judicial policy is based on a single case and affects directly only those named in the suit. Of course, once a precedent is established, it may affect other firms in the same situation. Lacking a well-defined constituency, except possibly small business, the FTC’s use of its manifest powers has always been controversial (Clarkson and Muris 1981; Hasin 1987; Miller 1989; Posner 1969, 1973; Stone 1977).

Beginning in 1938, the FTC used its authority to issue “unfair and deceptive” advertising complaints against the major cigarette companies. These actions, known collectively as the “health claims cases,” resulted in consent decrees or cease-and-desist orders involving several major brands during the 1940s and early 1950s. As several cases neared the final judgment phase, in September 1954 the FTC sent a letter to all companies proposing a seven-point list of advertising standards in light of “scientific developments with regard to the [health] effects of cigarette smoking.” A year later, the FTC issued its Cigarette Advertising Guides, which forbade any reference to physical effects of smoking and representations that a brand of cigarette is low in nicotine or tars that “has not been established by a competent scientific proof.” Following several articles in Reader’s Digest, cigarette advertising in 1957-1959 shifted to emphasis on tar and nicotine reduction during the “tar derby.” The FTC initially tolerated these ads if based on tests conducted by Reader’s Digest or Consumer Reports. In 1958, the FTC hosted a two-day conference on tar and nicotine testing, and in 1960 it negotiated a trade practice agreement that “all representations of low or reduced tar or nicotine, whether by filtration or otherwise, will be construed as health claims.” This action was blamed for halting a trend toward increased consumption of lower-tar cigarettes (Calfee 1997a; Neuberger 1963). The FTC vacated this agreement in 1966 when it informed the companies that it would no longer consider advertising that contained “a factual statement of tar and nicotine content” a violation of its Advertising Guides.

On January 11, 1964, the Surgeon General’s Advisory Committee on Smoking and Health issued its famous report on Smoking and Health (U.S. Surgeon General 1964). One week after the report’s release, the FTC initiated proceedings “for promulgation of trade regulation rules regarding unfair and deceptive acts or practices in the advertising and labeling of cigarettes” (notice, 29 Fed Reg 530, January 22, 1964; final rule, 29 Fed Reg 8325, July 2, 1964). The proposed Rule required that all cigarette packages and advertisements disclose prominently the statement, “Caution: Cigarette smoking is dangerous to health [and] may cause death from cancer and other diseases.” Failure to include the warning would be regarded as a violation of the FTC Act. The industry challenged the Rule on grounds that the FTC lacked the statutory authority to issue industry-wide trade rules, absent congressional guidance. The major companies also established their own Cigarette Advertising Code, which prohibited advertising aimed at minors, health-related claims, and celebrity endorsements.

The FTC’s Rule resulted in several congressional bills that culminated in the Federal Cigarette Labeling and Advertising Act of 1965 (P.L. 89-92, effective Jan. 1, 1966). The Labeling Act required each cigarette package to contain the statement, “Caution: Cigarette Smoking May Be Hazardous to Your Health.” According to the Act’s declaration of policy, the warnings were required so that “the public may be adequately informed that cigarette smoking may be hazardous to the health.” The Act also required the FTC to report annually to Congress concerning (a) the effectiveness of cigarette labeling, (b) current practices and methods of cigarette advertising and promotion, and (c) such recommendations for legislation as it may deem appropriate. Beginning in 1967, the FTC commenced its annual reporting to Congress on advertising of cigarettes. It recommended that the health warning be extended to advertising and strengthened to conform to its original proposal, and it called for research on less-hazardous cigarettes. These recommendations were repeated in 1968 and 1969, together with an added recommendation that advertising on television and radio be banned.

Several other important regulatory actions also took place in 1967-1970. First, the FTC established a laboratory to conduct standardized testing of tar and nicotine content for each brand. In November 1967, the FTC commenced public reporting of tar and nicotine levels by brand, together with reports of overall trends in smoking behaviors. Second, in June of 1967, the Federal Communications Commission (FCC) ruled that the “fairness doctrine” was applicable to cigarette advertising, which resulted in numerous free anti-smoking commercials by the American Cancer Society and other groups during July 1967 to December 1970.2 Third, in early 1969 the FCC issued a notice of proposed rulemaking to ban broadcast advertising of cigarettes (34 Fed Reg 1959, Feb. 11, 1969). The proposal was endorsed by the Television Code Review Board of the National Association of Broadcasters, and its enactment was anticipated by some industry observers. Following the FCC’s proposal, the FTC issued a notice of proposed rulemaking (34 Fed Reg 7917, May 20, 1969) to require more forceful statements on packages and extend the warnings to all advertising as a modification of its 1964 Rule in the “absence of contrary congressional direction.” Congress again superseded the FTC’s actions, and passed the Public Health Smoking Act of 1969 (P.L. 91-222, effective Nov. 1, 1970), which banned broadcast advertising after January 1, 1971 and modified the package label to read, “Warning: The Surgeon General Has Determined that Cigarette Smoking Is Dangerous to Your Health.” In 1970, the FTC negotiated agreements with the major companies to (1) disclose tar and nicotine levels in cigarette advertising using the FTC Test Method, and (2) include the health warning in advertising. By 1972, the FTC believed that it had achieved the recommendations in its initial reports to Congress.3

In summary, the FTC has engaged in continuous surveillance of cigarette advertising and marketing practices. Industry-wide regulation began in the early 1940s. As a result, the advertising of cigarettes in the U.S. is more restricted than other lawful consumer products. Some regulations are primarily informational (warning labels), while others affect advertising levels directly (broadcast ban). During a six-decade period, the FTC regulated the overall direction of cigarette marketing, including advertising content and placement, warning labels, and product development. Through its testing program, it has influenced the types of cigarettes produced and consumed. The FTC engaged in continuous monitoring of cigarette advertising practices and prepared in-depth reports on these practices; it held hearings on cigarette testing, advertising, and labeling; and it issued consumer advisories on smoking. Directly or indirectly, the FTC has initiated or influenced promotional and product developments in the cigarette industry. However, it remains to be shown that these actions had an important or noticeable effect on cigarette consumption and/or industry advertising expenditures. Is there empirical evidence that federal regulation has affected aggregate cigarette consumption or advertising? If the answer is negative or the effects are limited in magnitude, it suggests that the Congressional and FTC actions after 1964 did not add materially to information already in the marketplace or these actions were otherwise misguided.4

Table 2 displays information on smoking prevalence, cigarette consumption, and advertising. Smoking prevalence has declined considerably compared to the 1950s and 1960s. Consumption per capita reached an all-time high in 1963 (4,345 cigarettes per capita) and began a steep decline around 1978. By 1985, consumption was below the level experienced in 1947. Cigarette promotion has changed greatly over the years as producers substituted away from traditional advertising media. As reported by the FTC, the category of non-price promotions includes expenditures on point-of-sale displays, promotional allowances, samples, specialty items, public entertainment, direct mail, endorsements and testimonials, internet, and audio-visual ads. The shift away from media advertising reflects the broadcast and billboard bans as well as the controversies that surround advertising of cigarettes. As a result, spending on traditional media now amounts to only $356 million, or about 7% of the total marketing outlay of $5.0 billion. Clearly, regulation has affected the type of promotion, but not the overall expenditure.

Econometric Results: U.S. Time-Series Studies of the 1971 Advertising Ban

Several econometric studies examine the effects of the 1971 broadcast ban on cigarette demand, including Franke (1994), Gallet (1999), Ippolito et al. (1979), Kao and Tremblay (1988), and Simonich (1991). None of these studies found that the 1971 broadcast ban had a noticeable effect on cigarette demand. The studies by Franke and Simonich employed quarterly data on cigarette sales. The study by Ippolito et al. covered an extended time period from 1926 to 1975. The studies by Gallet and Kao and Tremblay employed simultaneous-equations methods, but each study concluded that the broadcast advertising ban did not have a significant effect on cigarette demand. Although health reports in 1953 and 1964 may have reduced the demand for tobacco, the results do not support a negative effect of the 1971 Congressional broadcast ban. By 1964 or earlier, the adverse effects of smoking appear to have been incorporated in consumers’ decisions regarding smoking. Hence, the advertising restrictions did not contribute to consumer information and therefore did not affect cigarette consumption.

Conclusions

The First Amendment protects commercial speech, although the degree of protection afforded is less than that given to political speech. Commercial speech jurisprudence has changed profoundly since Congress passed a flat ban on broadcast advertising of cigarettes in 1971. The courts have recognized the vital need for consumers to be informed about market conditions, an environment that is conducive to the operation of competitive markets. The Central Hudson test requires the courts and agencies to balance the benefits and costs of censorship. The third prong of the test requires that censorship directly and materially advance a substantial goal. This essay has discussed the difficulty of establishing a material effect of limited and comprehensive bans of alcohol and cigarette advertisements.

Table 2
Advertising and Cigarette Consumption

         Prevalence        Cigarette Sales         Ad Spending   Promotion    Real Total    Real Total
         Male    Female    Total     per capita    5-Media       Non-Price                  per capita
Year     (%)     (%)       (bil.)    (ages 18+)    (mil. $)      (mil. $)     (mil. 96$)    (ages 18+)
1920                       44.6      665
1925                       79.8      1,085
1930                       119.3     1,485         26.0                       213.1
1935     53      18        134.4     1,564         29.2                       286.3
1940                       181.9     1,976         25.3                       245.6
1947                       345.4     3,416         44.1                       269.7         2.70
1950     54      33        369.8     3,552         65.5                       375.4         3.61
1955     50      24        396.4     3,597         104.6                      528.8         4.83
1960     47      27        484.4     4,171         193.1                      870.2         7.53
1965     52      34        528.8     4,258         249.9                      1,050.9       8.49
1970     44      31        536.5     3,985         296.6         64.4         1,242.3       9.26
1975     39      29        607.2     4,122         330.8         160.5        1,227.3       8.28
1980     38      29        631.5     3,849         790.1         452.2        2,177.9       13.29
1985     33      28        594.0     3,370         932.0         1,544.4      3,360.6       19.09
1986                       583.8     3,274         796.3         1,586.1      3,163.5       17.78
1987     32      27        575.0     3,197         719.2         1,861.3      3,326.2       18.49
1988     31      26        562.5     3,096         824.5         1,576.3      2,993.1       16.44
1989                       540.0     2,926         868.3         1,788.7      3,190.8       17.35
1990     28      23        525.0     2,817         835.2         1,973.0      3,246.1       17.52
1991     28      24        510.0     2,713         772.6         2,054.6      3,153.2       16.86
1992     28      25        500.0     2,640         621.5         2,435.0      3,328.1       17.62
1993     28      23        485.0     2,539         542.1         2,933.9      3,695.9       19.38
1994     28      23        486.0     2,524         545.1         3,039.5      3,733.6       19.41
1995     27      23        487.0     2,505         564.2         2,982.6      3,615.5       18.62
1996                       487.0     2,482         578.2         3,220.8      3,799.0       19.37
1997     28      22        480.0     2,423         575.7         3,561.4      4,058.0       20.47
1998     26      22        465.0     2,320         645.6         3,908.0      4,412.4       22.03
1999     26      22        435.0     2,136         487.7         4,659.0      4,918.0       24.29
2000     26      21        430.0     2,092         355.8         5,015.0      5,043.0       24.53
Sources: Smoking prevalence and cigarette sales from Forey et al. (2002) and U.S. Public Health Service (1994). Data on advertising compiled by the author from FTC Reports to Congress (various issues); 1930-1940 data derived from Borden (1942). Nominal data deflated by the GDP implicit price deflator (1996=100). Advertising expenditures include TV, radio, newspaper, magazine, outdoor, and transit ads. Promotions exclude price promotions using discount coupons and retail value-added offers (“buy one, get one free”). Real total includes advertising and non-price promotions.
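The real-dollar columns in Table 2 follow a simple recipe: sum nominal 5-media advertising and non-price promotion, deflate by the GDP implicit price deflator (1996 = 100), and divide by the adult (18+) population. A minimal sketch of that arithmetic follows; the deflator value and adult-population figure used below are illustrative assumptions chosen to roughly match the published year-2000 row, not values taken from the source.

```python
# Sketch of Table 2's "Real Total" columns: nominal 5-media advertising plus
# non-price promotion, deflated to 1996 dollars, then divided by the adult
# population. The deflator (~106.5) and adult population (~205.6 million)
# are illustrative assumptions, not figures reported in the source.

def real_total_per_capita(media_mil, promo_mil, deflator, adults_mil):
    """Return (real total, mil. 1996$; real total per adult, 1996$)."""
    nominal = media_mil + promo_mil       # mil. current dollars
    real = nominal / (deflator / 100.0)   # deflate to 1996 dollars (1996 = 100)
    return real, real / adults_mil        # spread over persons aged 18+

# Year-2000 row of Table 2: $355.8 mil media + $5,015.0 mil promotion.
# With the assumed deflator and population, this reproduces roughly the
# published $5,043 mil real total and $24.53 per capita.
real, per_cap = real_total_per_capita(355.8, 5015.0, 106.5, 205.6)
print(round(real, 1), round(per_cap, 2))
```

The same function applied to any other row (with that year's deflator and adult population) recovers the corresponding real-total entries.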

Law Cases

44 Liquormart, Inc., et al. v. Rhode Island and Rhode Island Liquor Stores Assoc., 517 U.S. 484 (1996).

Central Hudson Gas & Electric Corp. v. Public Service Commission of New York, 447 U.S. 557 (1980).

Federal Trade Commission v. Raladam Co., 283 U.S. 643 (1931).

Food and Drug Administration, et al. v. Brown & Williamson Tobacco Corp., et al., 529 U.S. 120 (2000).

Lorillard Tobacco Co., et al. v. Thomas F. Reilly, Attorney General of Massachusetts, et al., 533 U.S. 525 (2001).

Red Lion Broadcasting Co. Inc., et al. v. Federal Communications Commission, et al., 395 U.S. 367 (1969).

Valentine, Police Commissioner of the City of New York v. Chrestensen, 316 U.S. 52 (1942).

Virginia State Board of Pharmacy, et al. v. Virginia Citizens Consumer Council, Inc., et al., 425 U.S. 748 (1976).

References

Akerlof, George A. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84 (1970): 488-500.

Alston, Lee J., Ruth Dupre, and Tomas Nonnenmacher. “Social Reformers and Regulation: The Prohibition of Cigarettes in the U.S. and Canada.” Explorations in Economic History 39 (2002): 425-45.

Andrews, Rick L. and George R. Franke. “The Determinants of Cigarette Consumption: A Meta-Analysis.” Journal of Public Policy & Marketing 10 (1991): 81-100.

Assmus, Gert, John U. Farley, and Donald R. Lehmann. “How Advertising Affects Sales: Meta-Analysis of Econometric Results.” Journal of Marketing Research 21 (1984): 65-74.

Backman, Jules. Advertising and Competition. New York: New York University Press, 1967.

Bagwell, Kyle. “The Economic Analysis of Advertising.” In Handbook of Industrial Organization, vol. 3, edited by M. Armstrong and R. Porter. Amsterdam: North-Holland, forthcoming 2005.

Becker, Gary and Kevin Murphy. “A Simple Theory of Advertising as a Good or Bad,” Quarterly Journal of Economics 108 (1993): 941-64.

Benson, Bruce L., David W. Rasmussen, and Paul R. Zimmerman. “Implicit Taxes Collected by State Liquor Monopolies.” Public Choice 115 (2003): 313-31.

Borden, Neil H. The Economic Effects of Advertising. Chicago: Irwin, 1942.

Brewers Association of Canada. Alcoholic Beverage Taxation and Control Policies: International Survey, 9th ed. Ottawa: BAC, 1997.

Burney, Leroy E. “Smoking and Lung Cancer: A Statement of the Public Health Service.” Journal of the American Medical Association 171 (1959): 135-43.

Byse, Clark. “Alcohol Beverage Control Before Repeal.” Law and Contemporary Problems 7 (1940): 544-69.

Calfee, John E. “The Ghost of Cigarette Advertising Past.” Regulation 20 (1997a): 38-45.

Calfee, John E. Fear of Persuasion: A New Perspective on Advertising and Regulation. LaVergne, TN: AEI, 1997b.

Calfee, John E. and Carl Scheraga. “The Influence of Advertising on Alcohol Consumption: A Literature Review and an Econometric Analysis of Four European Nations.” International Journal of Advertising 13 (1994): 287-310.

Cameron, Sam. “Estimation of the Demand for Cigarettes: A Review of the Literature.” Economic Issues 3 (1998): 51-72.

Chaloupka, Frank J. and Kenneth E. Warner. “The Economics of Smoking.” In The Handbook of Health Economics, vol. 1B, edited by A.J. Culyer and J.P. Newhouse, 1539-1627. New York: Elsevier, 2000.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: Belknap Press, 1977.

Clarkson, Kenneth W. and Timothy J. Muris, eds. The Federal Trade Commission since 1970: Economic Regulation and Bureaucratic Behavior. Cambridge: Cambridge University Press, 1981.

Cook, Philip J. and Michael J. Moore. “Alcohol.” In The Handbook of Health Economics, vol. 1B, edited by A.J. Culyer and J.P. Newhouse, 1629-73. Amsterdam: Elsevier, 2000.

Cook, Philip J. and Michael J. Moore. “Environment and Persistence in Youthful Drinking Patterns.” In Risky Behavior Among Youths: An Economic Analysis, edited by J. Gruber, 375-437. Chicago: University of Chicago Press, 2001.

Coulson, N. Edward, John R. Moran, and Jon P. Nelson. “The Long-Run Demand for Alcoholic Beverages and the Advertising Debate: A Cointegration Analysis.” In Advertising and Differentiated Products, vol. 10, edited by M.R. Baye and J.P. Nelson, 31-54. Amsterdam: JAI Press, 2001.

Dekimpe, Marnick G. and Dominique Hanssens. “Empirical Generalizations about Market Evolution and Stationarity.” Marketing Science 14 (1995): G109-21.

Dixit, Avinash and Victor Norman. “Advertising and Welfare.” Bell Journal of Economics 9 (1978): 1-17.

Duffy, Martyn. “Advertising in Demand Systems for Alcoholic Drinks and Tobacco: A Comparative Study.” Journal of Policy Modeling 17 (1995): 557-77.

Duffy, Martyn. “Econometric Studies of Advertising, Advertising Restrictions and Cigarette Demand: A Survey.” International Journal of Advertising 15 (1996): 1-23.

Duffy, Martyn. “Advertising in Consumer Allocation Models: Choice of Functional Form.” Applied Economics 33 (2001): 437-56.

Federal Trade Commission. Staff Report on the Cigarette Advertising Investigation. Washington, DC: FTC, 1981.

Forey, Barbara, et al., eds. International Smoking Statistics, 2nd ed. London: Oxford University Press, 2002.

Franke, George R. “U.S. Cigarette Demand, 1961-1990: Econometric Issues, Evidence, and Implications.” Journal of Business Research 30 (1994): 33-41.

Fritschler, A. Lee and James M. Hoefler. Smoking and Politics: Policy Making and the Federal Bureaucracy, 5th ed. Upper Saddle River, NJ: Prentice-Hall, 1996.

Gallet, Craig A. “The Effect of the 1971 Advertising Ban on Behavior in the Cigarette Industry.” Managerial and Decision Economics 20 (1999): 299-303.

Gius, Mark P. “Using Panel Data to Determine the Effect of Advertising on Brand-Level Distilled Spirits Sales.” Journal of Studies on Alcohol 57 (1996): 73-76.

Goff, Brian and Gary Anderson. “The Political Economy of Prohibition in the United States, 1919-1933.” Social Science Quarterly 75 (1994): 270-83.

Hasin, Bernice R. Consumers, Commissions, and Congress: Law, Theory and the Federal Trade Commission, 1968-1985. New Brunswick, NJ: Transaction Books, 1987.

Hazlett, Thomas W. “The Fairness Doctrine and the First Amendment.” The Public Interest 96 (1989): 103-16.

Hoadley, John F., Beth C. Fuchs, and Harold D. Holder. “The Effect of Alcohol Beverage Restrictions on Consumption: A 25-year Longitudinal Analysis.” American Journal of Drug and Alcohol Abuse 10 (1984): 375-401.

Ippolito, Richard A., R. Dennis Murphy, and Donald Sant. Staff Report on Consumer Responses to Cigarette Health Information. Washington, DC: Federal Trade Commission, 1979.

Joint Report of the Study Group on Smoking and Health. “Smoking and Health.” Science 125 (1957): 1129-33.

Kao, Kai and Victor J. Tremblay. “Cigarette ‘Health Scare,’ Excise Taxes, and Advertising Ban: Comment.” Southern Economic Journal 54 (1988): 770-76.

Kwoka, John E. “Advertising and the Price and Quality of Optometric Services.” American Economic Review 74 (1984): 211-16.

Lancaster, Kent M. and Alyse R. Lancaster. “The Economics of Tobacco Advertising: Spending, Demand, and the Effects of Bans.” International Journal of Advertising 22 (2003): 41-65.

Lariviere, Eric, Bruno Larue, and Jim Chalfant. “Modeling the Demand for Alcoholic Beverages and Advertising Specifications.” Agricultural Economics 22 (2000): 147-62.

Laugesen, Murray and Chris Meads. “Tobacco Advertising Restrictions, Price, Income and Tobacco Consumption in OECD Countries, 1960-1986.” British Journal of Addiction 86 (1991): 1343-54.

Lee, Byunglak and Victor J. Tremblay. “Advertising and the US Market Demand for Beer.” Applied Economics 24 (1992): 69-76.

McGahan, A.M. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-1958.” Business History Review 65 (1991): 229-84.

McGahan, A.M. “Cooperation in Prices and Advertising: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-59.

Miller, James C. The Economist as Reformer: Revamping the FTC, 1981-1985. Washington, DC: American Enterprise Institute, 1989.

Munger, Michael and Thomas Schaller. “The Prohibition-Repeal Amendments: A Natural Experiment in Interest Group Influence.” Public Choice 90 (1997): 139-63.

Nelson, Jon P. “State Monopolies and Alcoholic Beverage Consumption.” Journal of Regulatory Economics 2 (1990): 83-98.

Nelson, Jon P. “Broadcast Advertising and U.S. Demand for Alcoholic Beverages.” Southern Economic Journal 66 (1999): 774-90.

Nelson, Jon P. “Alcohol Advertising and Advertising Bans: A Survey of Research Methods, Results, and Policy Implications.” In Advertising and Differentiated Products, vol. 10, edited by M.R. Baye and J.P. Nelson, 239-95. Amsterdam: JAI Press, 2001.

Nelson, Jon P. “Advertising Bans, Monopoly, and Alcohol Demand: Testing for Substitution Effects Using State Panel Data.” Review of Industrial Organization 22 (2003a): 1-25.

Nelson, Jon P. “Cigarette Demand, Structural Change, and Advertising Bans: International Evidence, 1970-1995.” Contributions to Economic Analysis & Policy 2 (2003b): 1-28. http://www.bepress.com/bejeap/contributions (electronic journal).

Nelson, Jon P. and Douglas J. Young. “Do Advertising Bans Work? An International Comparison.” International Journal of Advertising 20 (2001): 273-96.

Nelson, Phillip. “The Economic Consequences of Advertising.” Journal of Business 48 (1975): 213-41.

Neuberger, Maurine B. Smoke Screen: Tobacco and the Public Welfare. Englewood Cliffs, NJ: Prentice-Hall, 1963.

O’Neill, John E. “Federal Activity in Alcoholic Beverage Control.” Law and Contemporary Problems 7 (1940): 570-99.

Ornstein, Stanley O. and Dominique M. Hanssens. “Alcohol Control Laws and the Consumption of Distilled Spirits and Beer.” Journal of Consumer Research 12 (1985): 200-13.

Packard, Vance O. The Hidden Persuaders. New York: McKay, 1957.

Pearl, Raymond. “Tobacco Smoking and Longevity.” Science 87 (1938): 216-17.

Pope, Daniel. The Making of Modern Advertising. New York: Basic Books, 1983.

Posner, Richard A. “The Federal Trade Commission.” University of Chicago Law Review 37 (1969): 47-89.

Posner, Richard A. Regulation of Advertising by the FTC. Washington, DC: AEI, 1973.

Fisher, I. “Does Tobacco Harm the Human Body?” Reader’s Digest (Nov. 1924): 435.

Tunney, G. “Nicotine Knockout, or the Slow Count.” Reader’s Digest (Dec. 1941): 21.

Norr, R. “Cancer by the Carton.” Reader’s Digest (Dec. 1952): 7.

Richardson, Gary. “Brand Names before the Industrial Revolution.” Unpub. working paper, Department of Economics, University of California at Irvine, 2000.

Rogers, Stuart. “How a Publicity Blitz Created the Myth of Subliminal Advertising.” Public Relations Quarterly 37 (1992): 12-17.

Russell, Wallace A. “Controls Over Labeling and Advertising of Alcoholic Beverages.” Law and Contemporary Problems 7 (1940): 645-64.

Saffer, Henry. “Alcohol Advertising Bans and Alcohol Abuse: An International Perspective.” Journal of Health Economics 10 (1991): 65-79.

Saffer, Henry. “Advertising under the Influence.” In Economics and the Prevention of Alcohol-Related Problems, edited by M.E. Hilton, 125-40. Washington, DC: National Institute on Alcohol Abuse and Alcoholism, 1993.

Saffer, Henry and Frank Chaloupka. “The Effect of Tobacco Advertising Bans on Tobacco Consumption.” Journal of Health Economics 19 (2000): 1117-37.

Saffer, Henry and Dhaval Dave. “Alcohol Consumption and Alcohol Advertising Bans.” Applied Economics 34 (2002): 1325-34.

Scherer, F. M. and David Ross. Industrial Market Structure and Economic Performance. 3rd ed. Boston: Houghton Mifflin, 1990.

Schweitzer, Stuart O., Michael D. Intriligator, and Hossein Salehi. “Alcoholism.” In Economics and Alcohol: Consumption and Controls, edited by M. Grant, M. Plant, and A. Williams, 107-22. New York: Harwood, 1983.

Sethuraman, Raj and Gerard J. Tellis. “An Analysis of the Tradeoff Between Advertising and Price Discounting.” Journal of Marketing Research 28 (1991): 160-74.

Shipman, George A. “State Administrative Machinery for Liquor Control.” Law and Contemporary Problems 7 (1940): 600-20.

Simmons, Steven J. The Fairness Doctrine and the Media. Berkeley, CA: University of California Press, 1978.

Simon, Julian L. Issues in the Economics of Advertising. Urbana, IL: University of Illinois Press, 1970.

Simon, Julian L. and John Arndt. “The Shape of the Advertising Response Function.” Journal of Advertising Research 20 (1980): 11-28.

Simonich, William L. Government Antismoking Policies. New York: Peter Lang, 1991.

Stewart, Michael J. “The Effect on Tobacco Consumption of Advertising Bans in OECD Countries.” International Journal of Advertising 12 (1993): 155-80.

Stigler, George J. “The Economics of Information.” Journal of Political Economy 69 (1961): 213-25.

Stone, Alan. Economic Regulation and the Public Interest: The Federal Trade Commission in Theory and Practice. Ithaca, NY: Cornell University Press, 1977.

Strumpf, Koleman S. and Felix Oberholzer-Gee. “Local Liquor Control from 1934 to 1970.” In Public Choice Interpretations of American Economic History, edited by J.C. Heckelman, J.C. Moorhouse, and R.M. Whaples, 425-45. Boston: Kluwer Academic, 2000.

Tellis, Gerard J. Effective Advertising: Understanding When, How, and Why Advertising Works. Thousand Oaks, CA: Sage, 2004.

Tennant, Richard B. The American Cigarette Industry. New Haven, CT: Yale University Press, 1950.

“Beyond Any Doubt.” Time (Nov. 30, 1953): 60.

U.S. Congress. Senate. To Prohibit the Advertising of Alcoholic Beverages by Radio. Hearings before the Subcommittee on S. 517. 76th Congress, 1st Session. Washington, DC: U.S. Government Printing Office, 1939.

U.S. Congress. Senate. Liquor Advertising Over Radio and Television. Hearings on S. 2444. 82nd Congress, 2nd Session. Washington, DC: U.S. Government Printing Office, 1952.

U.S. Public Health Service. Smoking and Health. Report of the Advisory Committee to the Surgeon General of the Public Health Service. Washington, DC: U.S. Department of Health, Education, and Welfare, 1964.

U.S. Public Health Service. Surveillance for Selected Tobacco-Use Behaviors — United States, 1900-1994. Atlanta: U.S. Department of Health and Human Services, 1994.

U.S. Public Health Service. Reducing Tobacco Use. A Report of the Surgeon General. Atlanta: U.S. Department of Health and Human Services, 2000.

Vallee, Bert L. “Alcohol in the Western World.” Scientific American 278 (1998): 80-85.

Wilkinson, James T. “Alcohol and Accidents: An Economic Approach to Drunk Driving.” Ph.D. diss., Vanderbilt University, 1985.

Wilkinson, James T. “The Effects of Regulation on the Demand for Alcohol.” Unpub. working paper, Department of Economics, University of Missouri, 1987.

Young, Douglas J. “Alcohol Advertising Bans and Alcohol Abuse: Comment.” Journal of Health Economics 12 (1993): 213-28.

Endnotes

1. See, for example, Packer Corp. v. Utah, 285 U.S. 105 (1932); Breard v. Alexandria, 341 U.S. 622 (1951); E.F. Drew v. FTC, 235 F.2d 735 (1956), cert. denied, 352 U.S. 969 (1957).

2. In 1963, the Federal Communications Commission (FCC) notified broadcast stations that they would be required to give “fair coverage” to controversial public issues (40 FCC 571). The Fairness Doctrine ruling was upheld by the Supreme Court in Red Lion Broadcasting (1969). At the request of John Banzhaf, the FCC in 1967 applied the Fairness Doctrine to cigarette advertising (8 FCC 2d 381). The FCC opined that the cigarette advertising was a “unique situation” and extension to other products “would be rare,” but Commissioner Loevinger warned that the FCC would have difficulty distinguishing cigarettes from other products (9 FCC 2d 921). The FCC’s ruling was upheld by the D.C. Circuit Court, which argued that First Amendment rights were not violated because advertising was “marginal speech” (405 F.2d 1082). During the period 1967-70, broadcasters were required to include free antismoking messages as part of their programming. In February 1969, the FCC issued a notice of proposed rulemaking to ban broadcast advertising of cigarettes, absent voluntary action by cigarette producers (16 FCC 2d 284). In December 1969, Congress passed the Smoking Act of 1969, which contained the broadcast ban (effective Jan. 1, 1971). With regard to the Fairness Doctrine, Commissioner Loevinger’s “slippery slope” fears were soon realized. During 1969-1974, the FCC received thousands of petitions for free counter-advertising for diverse products, such as nuclear power, Alaskan oil development, gasoline additives, strip mining, electric power rates, clearcutting of forests, phosphate-based detergents, trash compactors, military recruitment, children’s toys, airbags, snowmobiles, toothpaste tubes, pet food, and the United Way. In 1974, the FCC began an inquiry into the Fairness Doctrine, which concluded that “standard product commercials, such as the old cigarette ads, make no meaningful contribution toward informing the public on any side of an issue . . . 
the precedent is not at all in keeping with the basic purposes of the fairness doctrine” (48 FCC 2d 1, at 24). After numerous inquiries and considerations, the FCC finally announced in 1987 that the Fairness Doctrine had a “chilling effect” on speech generally and could no longer be sustained as an effective public policy (2 FCC Rcd 5043). Thus ended the FCC’s experiment with regulatory enforcement of a “right to be heard” (Hazlett 1989; Simmons 1978).

3. During the remainder of the 1970s, the FTC concentrated on enforcement of its advertising regulations. It issued consent orders for unfair and deceptive advertising to force companies to include health warnings “clearly and conspicuously in all cigarette advertising.” It required 260 newspapers and 40 magazines to submit information on cigarette advertisements, and established a task force with the Department of Health, Education and Welfare to determine if newspaper ads were deceptive. In 1976, the FTC announced that it was again investigating “whether there may be deception and unfairness in the advertising and promotion of cigarettes.” It subpoenaed documents from 28 cigarette manufacturers, advertising agencies, and other organizations, including copy tests, consumer surveys, and marketing plans. Five years later, it submitted to Congress the results of this investigation in its Staff Report on the Cigarette Advertising Investigation (FTC 1981). The report proposed a system of stronger rotating warnings and covered issues that had emerged regarding low-tar cigarettes, including compensatory behaviors by smokers and the adequacy of the FTC’s Test Method for determining tar and nicotine content. In 1984, President Reagan signed the Comprehensive Smoking Education Act (P.L. 98-474, effective Oct. 12, 1985), which required four rotating health warnings for packages and advertising. Also in 1984, the FTC revised its definition of deceptive advertising (103 FTC 110). In 2000, the FTC finally acknowledged the shortcomings of its tar and nicotine test method.

4. The Food and Drug Administration (FDA) has jurisdiction over cigarettes as drugs in cases involving health claims for tobacco, additives, and smoking devices. Under Dr. David Kessler, the FDA in 1996 unsuccessfully attempted to regulate all cigarettes as addictive drugs and impose advertising and other restrictions designed to reduce the appeal and use of tobacco by children (notice, 60 Fed Reg 41313, Aug. 11, 1995; final rule, 61 Fed Reg 44395, Aug. 28, 1996); vacated by FDA v. Brown & Williamson Tobacco Corporation, et al., 529 U.S. 120 (2000).

Citation: Nelson, Jon. “Advertising Bans, US”. EH.Net Encyclopedia, edited by Robert Whaples. May 20, 2004. URL http://eh.net/encyclopedia/nelson-adbans/