EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

The Economic Impact of the Black Death

David Routt, University of Richmond

The Black Death was the largest demographic disaster in European history. From its arrival in Italy in late 1347 through its clockwise movement across the continent to its petering out in the Russian hinterlands in 1353, the magna pestilencia (great pestilence) killed between seventeen and twenty-eight million people. Its gruesome symptoms and deadliness have fixed the Black Death in popular imagination; moreover, uncovering the disease’s cultural, social, and economic impact has engaged generations of scholars. Despite growing understanding of the Black Death’s effects, definitive assessment of its role as historical watershed remains a work in progress.

A Controversy: What Was the Black Death?

In spite of enduring fascination with the Black Death, even the identity of the disease behind the epidemic remains a point of controversy. Aware that fourteenth-century eyewitnesses described a disease more contagious and deadlier than bubonic plague (Yersinia pestis), the bacillus traditionally associated with the Black Death, dissident scholars in the 1970s and 1980s proposed typhus or anthrax or mixes of typhus, anthrax, or bubonic plague as the culprit. The new millennium brought other challenges to the Black Death-bubonic plague link, such as an unknown and probably unidentifiable bacillus, an Ebola-like haemorrhagic fever or, at the pseudoscientific fringes of academia, a disease of interstellar origin.

Proponents of Black Death as bubonic plague have minimized differences between modern bubonic plague and the fourteenth-century plague through painstaking analysis of the Black Death’s movement and behavior and by hypothesizing that the fourteenth-century plague was a hypervirulent strain of bubonic plague, yet bubonic plague nonetheless. DNA analysis of human remains from known Black Death cemeteries was intended to eliminate doubt, but the inability to replicate initially positive results has left uncertainty. The new analytical tools used and new evidence marshaled in this lively controversy have enriched understanding of the Black Death while underscoring the elusiveness of certitude regarding phenomena many centuries past.

The Rate and Structure of Mortality

Whatever bacillus caused it, the Black Death’s socioeconomic impact stemmed from sudden mortality on a staggering scale. Assessment of the plague’s economic significance begins with determining the rate of mortality for the initial onslaught in 1347-53 and its frequent recurrences for the balance of the Middle Ages, then unraveling how the plague chose victims according to age, sex, affluence, and place.

Imperfect evidence unfortunately hampers knowing precisely who and how many perished. Many of the Black Death’s contemporary observers, living in an epoch of famine and political, military, and spiritual turmoil, described the plague apocalyptically. One chronicler famously closed his narrative with blank parchment (empty membranes) should anyone survive to continue it. Others believed as few as one in ten survived. One writer claimed that only fourteen people were spared in London. Although sober eyewitnesses offered more plausible figures, in light of the medieval preference for narrative dramatic force over numerical veracity, chroniclers’ estimates are considered evidence of the Black Death’s battering of the medieval psyche, not an accurate barometer of its demographic toll.

Even non-narrative and presumably dispassionate, systematic evidence — legal and governmental documents, ecclesiastical records, commercial archives — presents challenges. No medieval scribe dragged his quill across parchment for the demographer’s pleasure and convenience. With a paucity of censuses, estimates of population and tracing of demographic trends have often relied on indirect indicators of demographic change (e.g., activity in the land market, levels of rents and wages, size of peasant holdings) or evidence treating only a segment of the population (e.g., assignment of new priests to vacant churches, payments by peasants to take over holdings of the deceased). Even the rare census-like record, like England’s Domesday Book (1086) or the Poll Tax Return (1377), either enumerates only heads of households or excludes slices of the populace or ignores regions or some combination of all these. To compensate for these imperfections, the demographer relies on potentially debatable assumptions about the size of the medieval household, the representativeness of a discrete group of people, the density of settlement in an undocumented region, the level of tax evasion, and so forth.

A bewildering array of estimates for mortality from the plague of 1347-53 is the result. The first outbreak of the Black Death indisputably was the deadliest but the death rate varied widely according to place and social stratum. National estimates of mortality for England, where the evidence is fullest, range from five percent, to 23.6 percent among aristocrats holding land from the king, to forty to forty-five percent of the kingdom’s clergy, to over sixty percent in a recent estimate. The picture for the continent likewise is varied. Regional mortality in Languedoc (France) was forty to fifty percent while sixty to eighty percent of Tuscans (Italy) perished. Urban death rates were mostly higher but no less disparate, e.g., half in Orvieto (Italy), Siena (Italy), and Volterra (Italy), fifty to sixty-six percent in Hamburg (Germany), fifty-eight to sixty-eight percent in Perpignan (France), sixty percent for Barcelona’s (Spain) clerical population, and seventy percent in Bremen (Germany). The Black Death was often highly arbitrary in how it killed in a narrow locale, which no doubt broadened the spectrum of mortality rates. Two of Durham Cathedral Priory’s manors, for instance, had respective death rates of twenty-one and seventy-eight percent (Shrewsbury, 1970; Russell, 1948; Waugh, 1991; Ziegler, 1969; Benedictow, 2004; Le Roy Ladurie, 1976; Bowsky, 1964; Pounds, 1974; Emery, 1967; Gyug, 1983; Aberth, 1995; Lomas, 1989).

Credible death rates between one quarter and three quarters complicate reaching a Europe-wide figure. Neither a casual and unscientific averaging of available estimates to arrive at a probably misleading composite death rate nor a timid placing of mortality somewhere between one and two thirds is especially illuminating. Scholars confronting the problem’s complexity before venturing estimates once favored one third as a reasonable aggregate death rate. Since the early 1970s demographers have found higher levels of mortality plausible and European mortality of one half is considered defensible, a figure not too distant from less fanciful contemporary observations.

While the Black Death of 1347-53 inflicted demographic carnage, had it been an isolated event European population might have recovered to its former level in a generation or two and its economic impact would have been moderate. The disease’s long-term demographic and socioeconomic legacy arose from its recurrence. When both national and local epidemics are taken into account, England endured thirty plague years between 1351 and 1485, a pattern mirrored on the continent, where Perugia was struck nineteen times and Hamburg, Cologne, and Nuremberg at least ten times each in the fifteenth century. Deadliness of outbreaks declined — perhaps ten to twenty percent in the second plague (pestis secunda) of 1361-2, ten to fifteen percent in the third plague (pestis tertia) of 1369, and as low as five and rarely above ten percent thereafter — and outbreaks became more localized; however, the Black Death’s persistence ensured that demographic recovery would be slow and socioeconomic consequences deeper. Europe’s population in 1430 may have been fifty to seventy-five percent lower than in 1290 (Cipolla, 1994; Gottfried, 1983).

Enumeration of corpses does not adequately reflect the Black Death’s demographic impact. Who perished was as significant as how many; in other words, the structure of mortality influenced the time and rate of demographic recovery. The plague’s preference for urbanite over peasant, man over woman, poor over affluent, and, perhaps most significantly, young over mature shaped its demographic toll. Eyewitnesses so universally reported disproportionate death among the young in the plague’s initial recurrence (1361-2) that it became known as the Children’s Plague (pestis puerorum, mortalité des enfants). If this preference for youth reflected natural resistance to the disease among plague survivors, the Black Death may have ultimately resembled a lower-mortality childhood disease, a reality that magnified both its demographic and psychological impact.

The Black Death pushed Europe into a long-term demographic trough. Notwithstanding anecdotal reports of nearly universal pregnancy of women in the wake of the magna pestilencia, demographic stagnancy characterized the rest of the Middle Ages. Population growth recommenced at different times in different places but rarely earlier than the second half of the fifteenth century and in many places not until c. 1550.

The European Economy on the Cusp of the Black Death

Like the plague’s death toll, its socioeconomic impact resists categorical measurement. The Black Death’s timing made a facile labeling of it as a watershed in European economic history nearly inevitable. It arrived near the close of an ebullient high Middle Ages (c. 1000 to c. 1300) in which urban life reemerged, long-distance commerce revived, business and manufacturing innovated, manorial agriculture matured, and population burgeoned, doubling or tripling. The Black Death simultaneously portended an economically stagnant, depressed late Middle Ages (c. 1300 to c. 1500). However, even if this simplistic and somewhat misleading portrait of the medieval economy is accepted, isolating the Black Death’s economic impact from manifold factors at play is a daunting challenge.

Cognizant of a qualitative difference between the high and late Middle Ages, students of medieval economy have offered varied explanations, some mutually exclusive, others not, some favoring the less dramatic, less visible, yet inexorable factor as an agent of change rather than a catastrophic demographic shift. For some, a cooling climate undercut agricultural productivity, a downturn that rippled throughout the predominantly agrarian economy. For others, exploitative political, social, and economic institutions enriched an idle elite and deprived working society of wherewithal and incentive to be innovative and productive. Yet others associate monetary factors with the fourteenth- and fifteenth-century economic doldrums.

The particular concerns of the twentieth century unsurprisingly induced some scholars to view the medieval economy through a Malthusian lens. In this reconstruction of the Middle Ages, population growth pressed against the society’s ability to feed itself by the mid-thirteenth century. Rising impoverishment and contracting holdings compelled the peasant to cultivate inferior, low-fertility land and to convert pasture to arable production, which inevitably reduced the number of livestock and made manure for fertilizer scarcer. The conversion boosted gross productivity in the immediate term yet drove grain yields downward in the longer term, exacerbating the imbalance between population and food supply; a correction of the imbalance became inevitable. This idea’s adherents see signs of demographic correction from the mid-thirteenth century onward, possibly arising in part from marriage practices that reduced fertility. A more potent correction came with subsistence crises. Miserable weather in 1315 destroyed crops and the ensuing Great Famine (1315-22) reduced northern Europe’s population by perhaps ten to fifteen percent. Poor harvests, moreover, bedeviled England and Italy to the eve of the Black Death.

These factors — climate, imperfect institutions, monetary imbalances, overpopulation — diminish the Black Death’s role as a transformative socioeconomic event. In other words, socioeconomic changes already driven by other causes would have occurred anyway, merely more slowly, had the plague never struck Europe. This conviction fosters receptiveness to lower estimates of the Black Death’s deadliness. Recent scrutiny of the Malthusian analysis, especially studies of agriculture in source-rich eastern England, has, however, rehabilitated the Black Death as an agent of socioeconomic change. Growing awareness of the use of “progressive” agricultural techniques and of alternative, non-grain economies less susceptible to a Malthusian population-versus-resources dynamic has undercut the notion of an absolutely overpopulated Europe and has encouraged acceptance of higher rates of mortality from the plague (Campbell, 1983; Bailey, 1989).

The Black Death and the Agrarian Economy

The lion’s share of the Black Death’s effect was felt in the economy’s agricultural sector, unsurprising in a society in which, except in the most urbanized regions, nine of ten people eked out a living from the soil.

A village struck by the plague underwent a profound though brief disordering of the rhythm of daily life. Strong administrative and social structures, the power of custom, and innate human resiliency restored the village’s routine by the following year in most cases: fields were plowed; crops were sown, tended, and harvested; labor services were performed by the peasantry; and the village’s lord collected dues from tenants. Behind this seeming normalcy, however, lord and peasant were adjusting to the Black Death’s principal economic consequence: a much smaller agricultural labor pool. Before the plague, rising population had kept wages low and rents and prices high, an economic reality advantageous to the lord in dealing with the peasant and inclining many a peasant to cleave to demeaning yet secure dependent tenure.

As the Black Death swung the balance in the peasant’s favor, the literate elite bemoaned a disintegrating social and economic order. William of Dene, William Langland, John Gower, and others polemically evoked nostalgia for the peasant who knew his place, worked hard, demanded little, and squelched pride while they condemned their present in which land lay unplowed and only an immediate pang of hunger goaded a lazy, disrespectful, grasping peasant to do a moment’s desultory work (Hatcher, 1994).

Moralizing exaggeration aside, the rural worker indeed demanded and received higher payments in cash (nominal wages) in the plague’s aftermath. Wages in England rose from twelve to twenty-eight percent from the 1340s to the 1350s and twenty to forty percent from the 1340s to the 1360s. Immediate hikes were sometimes more drastic. During the plague year (1348-49) at Fornham All Saints (Suffolk), the lord paid the pre-plague rate of 3d. per acre for more than half of the hired reaping but the rest cost 5d., an increase of 67 percent. The reaper, moreover, enjoyed more and larger tips in cash and perquisites in kind to supplement the wage. At Cuxham (Oxfordshire), a plowman making 2s. weekly before the plague demanded 3s. in 1349 and 10s. in 1350 (Farmer, 1988; Farmer, 1991; West Suffolk Record Office 3/15.7/2.4; Harvey, 1965).

In some instances, the initial hikes in nominal or cash wages subsided in the years further out from the plague and any benefit they conferred on the wage laborer was for a time undercut by another economic change fostered by the plague. Grave mortality ensured that the European supply of currency in gold and silver increased on a per-capita basis, which in turn unleashed substantial inflation in prices that did not subside in England until the mid-1370s and even later in many places on the continent. The inflation reduced the purchasing power (real wage) of the wage laborer so significantly that, even with higher cash wages, his earnings either bought him no more or often substantially less than before the magna pestilencia (Munro, 2003; Aberth, 2001).

The lord, however, was confronted not only by the roving wage laborer on whom he relied for occasional and labor-intensive seasonal tasks but also by the peasant bound to the soil who exchanged customary labor services, rent, and dues for holding land from the lord. A pool of labor services greatly reduced by the Black Death enabled the servile peasant to bargain for less onerous responsibilities and better conditions. At Tivetshall (Norfolk), vacant holdings deprived its lord of sixty percent of his week-work and all his winnowing services by 1350-51. A fifth of winter and summer week-work and a third of reaping services vanished at Redgrave (Suffolk) in 1349-50 due to the magna pestilencia. If a lord did not make concessions, a peasant often gravitated toward any better circumstance beckoning elsewhere. At Redgrave, for instance, the loss of services in 1349-50 directly due to the plague was followed in 1350-51 by an equally damaging wave of holdings abandoned by surviving tenants. For the medieval peasant, never so tightly bound to the manor as once imagined, the Black Death nonetheless fostered far greater rural mobility. Beyond loss of labor services, the deceased or absentee peasant paid no rent or dues and rendered no fees for use of manorial monopolies such as mills and ovens, and the lord’s revenues shrank. The income of English lords contracted by twenty percent from 1347 to 1353 (Norfolk Record Office WAL 1247/288×1; University of Chicago Bacon 335-6; Gottfried, 1983).

Faced with these disorienting circumstances, the lord often ultimately had to decide how or even whether the pre-plague status quo could be reestablished on his estate. Not capitalistic in the sense of maximizing productivity for reinvestment of profits to enjoy yet more lucrative future returns, the medieval lord nonetheless valued stable income sufficient for aristocratic ostentation and consumption. A recalcitrant peasantry, diminished dues and services, and climbing wages undermined the material foundation of the noble lifestyle, jostled the aristocratic sense of proper social hierarchy, and invited a response.

In exceptional circumstances, a lord sometimes kept the peasant bound to the land. Because the nobility in Spanish Catalonia had already tightened control of the peasantry before the Black Death, because underdeveloped commercial agriculture provided the peasantry narrow options, and because the labor-intensive demesne agriculture common elsewhere was largely absent, the Catalan lord through a mix of coercion (physical intimidation, exorbitant fees to purchase freedom) and concession (reduced rents, conversion of servile dues to less humiliating fixed cash payments) kept the Catalan peasant in place. In England and elsewhere on the continent, where labor services were needed to till the demesne, such a conservative approach was less feasible. This, however, did not deter some lords from trying. The lord of Halesowen (Worcestershire) not only commanded the servile tenant to perform the full range of services but also resuscitated labor obligations in abeyance long before the Black Death, tantamount to an unwillingness to acknowledge anything had changed (Freedman, 1991; Razi, 1981).

Europe’s political elite also looked to legal coercion not only to contain rising wages and to limit the peasant’s mobility but also to allay a sense of disquietude and disorientation arising from the Black Death’s buffeting of pre-plague social realities. England’s Ordinance of Laborers (1349) and Statute of Laborers (1351) called for a return to the wages and terms of employment of 1346. Labor legislation was likewise promulgated by the Cortes of Aragon and Castile, the French crown, and cities such as Siena, Orvieto, Pisa, Florence, and Ragusa. The futility of capping wages by legislative fiat is evident in the French crown’s 1351 revision of its 1349 enactment to permit a wage increase of one third. Perhaps only in England, where effective government permitted robust enforcement, did the law slow wage increases for a time (Aberth, 2001; Gottfried, 1983; Hunt and Murray, 1999; Cohn, 2007).

Once knee-jerk conservatism and legislative palliatives failed to revivify pre-plague socioeconomic arrangements, the lord cast about for a modus vivendi in a new world of abundant land and scarce labor. A sober triage of the available sources of labor, whether casual wage labor, a manor’s permanent stipendiary staff (famuli), or the dependent peasant, led to revision of managerial policy. The abbot of Saint Edmund’s, for example, focused on reconstitution of the permanent staff on his manors. Despite mortality and flight, the abbot by and large achieved his goal by the mid-1350s. While labor legislation may have facilitated this, the abbot’s provision of more frequent and lucrative seasonal rewards, coupled with the payment of grain stipends in more valuable and marketable cereals such as wheat, no doubt helped secure the loyalty of the famuli while circumventing statutory limits on higher wages. With this core of labor solidified, the focus turned to preserving the most essential labor services, especially those associated with the labor-intensive harvesting season. Less vital labor services were commuted for cash payments and ad hoc wage labor was then hired to fill gaps. The cultivation of the demesne continued, though not on the pre-plague scale.

For a time in fact circumstances helped the lord continue direct management of the demesne. The general inflation of the quarter-century following the plague as well as poor harvests in the 1350s and 1360s boosted grain prices and partially compensated for more expensive labor. This so-called “Indian summer” of demesne agriculture ended quickly in the mid-1370s in England and subsequently on the continent when the post-plague inflation gave way to deflation and abundant harvests drove prices for commodities downward, where they remained, aside from brief intervals of inflation, for the rest of the Middle Ages. Recurrences of the plague, moreover, placed further stress on new managerial policies. For the lord who successfully persuaded new tenants to take over vacant holdings, such as happened at Chevington (Suffolk) by the late 1350s, the pestis secunda of 1361-62 often inflicted a decisive blow: a second recovery at Chevington never materialized (West Suffolk Records Office 3/15.3/2.9-2.23).

Under unremitting pressure, the traditional cultivation of the demesne ceased to be viable for lord after lord: a centuries-old manorial system gradually unraveled and the nature of agriculture was transformed. The lord’s earliest concession to this new reality was curtailment of cultivated acreage, a trend that accelerated with time. The 590.5 acres sown on average at Great Saxham (Suffolk) in the late 1330s was more than halved (288.67 acres) in the 1360s, for instance (West Suffolk Record Office, 3/15.14/1.1, 1.7, 1.8).

Beyond reducing the demesne to a size commensurate with available labor, the lord could explore types of husbandry less labor-intensive than traditional grain agriculture. Greater domestic manufacture of woolen cloth and growing demand for meat enabled many English lords to reduce arable production in favor of sheep-raising, which required far less labor. Livestock husbandry likewise became more significant on the continent. Suitable climate, soil, and markets made grapes, olives, apples, pears, vegetables, hops, hemp, flax, silk, and dye-stuffs attractive alternatives to grain. In hope of selling these cash crops, rural agriculture became more attuned to urban demand and urban businessmen and investors more intimately involved in what and how much of it was grown in the countryside (Gottfried, 1983; Hunt and Murray, 1999).

The lord also looked to reduce losses from demesne acreage no longer under the plow and from the vacant holdings of onetime tenants. Measures adopted to achieve this end initiated a process that gained momentum with each passing year until the face of the countryside was transformed and manorialism was dead. The English landlord, hopeful for a return to the pre-plague regime, initially granted brief terminal leases of four to six years at fixed rates for bits of demesne and for vacant dependent holdings. Leases over time lengthened to ten, twenty, thirty years, or even a lifetime. In France and Italy, the lord often resorted to métayage or mezzadria leasing, a type of sharecropping in which the lord contributed capital (land, seed, tools, plow teams) to the lessee, who did the work and surrendered a fraction of the harvest to the lord.

Disillusioned by growing obstacles to profitable cultivation of the demesne, the lord, especially in the late fourteenth century and the early fifteenth, adopted a more sweeping type of leasing, the placing of the demesne or even the entire manor “at farm” (ad firmam). A “farmer” (firmarius) paid the lord a fixed annual “farm” (firma) for the right to exploit the lord’s property and take whatever profit he could. The distant or unprofitable manor was usually “farmed” first and other manors followed until a lord’s personal management of his property often ceased entirely. The rising popularity of this expedient made the lord’s direct management of the demesne rare by c. 1425. The lord often became a rentier bound to a fixed income. The tenurial transformation was completed when the lord sold to the peasant his right of lordship, a surrender to the peasant of outright possession of his holding for a fixed cash rent and freedom from dues and services. Manorialism, in effect, collapsed and was gone from western and central Europe by 1500.

The landlord’s discomfort ultimately benefited the peasantry. Lower prices for foodstuffs and greater purchasing power from the last quarter of the fourteenth century onward, progressive disintegration of demesnes, and waning customary land tenure enabled the enterprising, ambitious peasant to lease or purchase property and become a substantial landed proprietor. The average size of the peasant holding grew in the late Middle Ages. Due to the peasant’s generally improved standard of living, the century and a half following the magna pestilencia has been labeled a “golden age” in which the most successful peasant became a “yeoman” or “kulak” within the village community. Freed from labor service, holding a fixed copyhold lease, and enjoying greater disposable income, the peasant exploited his land exclusively for his personal benefit and often pursued leisure and some of the finer things in life. Consumption of meat by England’s humbler social strata rose substantially after the Black Death, a shift in consumer tastes that reduced demand for grain and helped make viable the shift toward pastoralism in the countryside. Late medieval sumptuary legislation, intended to keep the humble from dressing above his station and to retain the distinction between low- and highborn, attests both to the peasant’s greater income and to the desire of the elite to limit disorienting social change (Dyer, 1989; Gottfried, 1983; Hunt and Murray, 1999).

The Black Death, moreover, profoundly altered the contours of settlement in the countryside. Catastrophic loss of population led to abandonment of less attractive fields, contraction of existing settlements, and even wholesale desertion of villages. More than 1300 English villages vanished between 1350 and 1500. French and Dutch villagers abandoned isolated farmsteads and huddled in smaller villages while their Italian counterparts vacated remote settlements and shunned less desirable fields. The German countryside was mottled with abandoned settlements. Two thirds of named villages disappeared in Thuringia, Anhalt, and the eastern Harz mountains, one fifth in southwestern Germany, and one third in the Rhenish palatinate, abandonment far exceeding loss of population and possibly arising from migration from smaller to larger villages (Gottfried, 1983; Pounds, 1974).

The Black Death and the Commercial Economy

As with agriculture, assessment of the Black Death’s impact on the economy’s commercial sector is a complex problem. The vibrancy of the high medieval economy is generally conceded. As the first millennium gave way to the second, urban life revived, trade and manufacturing flourished, merchant and craft gilds emerged, and commercial and financial innovations proliferated (e.g., partnerships, maritime insurance, double-entry bookkeeping, fair letters, letters of credit, bills of exchange, loan contracts, and merchant banking). The integration of the high medieval economy reached its zenith c. 1250 to c. 1325 with the rise of large companies with international interests, such as the Bonsignori of Siena and the Buonaccorsi of Florence, and the emergence of so-called “super companies” such as the Florentine Bardi, Peruzzi, and Acciaiuoli (Hunt and Murray, 1999).

How to characterize the late medieval economy has been more fraught with controversy, however. Historians a century past, uncomprehending of how their modern world could be rooted in a retrograde economy, imagined an entrepreneurially creative and expansive late medieval economy. Succeeding generations of historians darkened this optimistic portrait and fashioned a late Middle Ages of unmitigated decline, an “age of adversity” in which the economy was placed under the rubric “depression of the late Middle Ages.” The historiographical pendulum now swings away from this interpretation and a more nuanced picture has emerged that gives the Black Death’s impact on commerce its full due but emphasizes the variety of the plague’s impact from merchant to merchant, industry to industry, and city to city. Success or failure was equally possible after the Black Death and the game favored adaptability, creativity, nimbleness, opportunism, and foresight.

Once the magna pestilencia had passed, the city had to cope with a labor supply even more severely depleted than the countryside’s, owing to a generally higher urban death rate. The city, however, could reverse some of this damage by attracting, as it had for centuries, new workers from the countryside, a phenomenon that deepened the crisis for the manorial lord and contributed to changes in rural settlement. A resurgence of the slave trade occurred in the Mediterranean, especially in Italy, where the female slave from Asia or Africa entered domestic service in the city and the male slave toiled in the countryside. Finding more labor was not, however, a panacea. A peasant or slave performed an unskilled task adequately but could not necessarily replace a skilled laborer. The gross loss of talent due to the plague caused a decline in per capita productivity by skilled labor remediable only by time and training (Hunt and Murray, 1999; Miskimin, 1975).

Another immediate consequence of the Black Death was dislocation of the demand for goods. A suddenly and sharply smaller population ensured a glut of manufactured and trade goods, whose prices plummeted for a time. The businessman who successfully weathered this short-term imbalance in supply and demand then had to reshape his business’s output to fit a declining or at best stagnant pool of potential customers.

The Black Death transformed the structure of demand as well. While the standard of living of the peasant improved, chronically low prices for grain and other agricultural products from the late fourteenth century may have deprived the peasant of the additional income to purchase enough manufactured or trade items to fill the hole in commercial demand. In the city, however, the plague concentrated wealth, often substantial family fortunes, in fewer and often younger hands, a circumstance that, when coupled with lower prices for grain, left greater per capita disposable income. Moreover, the plague’s psychological impact is believed to have influenced how this windfall was used. Pessimism and the specter of death spurred an individualistic pursuit of pleasure, a hedonism that manifested itself in the purchase of luxuries, especially in Italy. Even with a reduced population, the gross volume of luxury goods manufactured and sold rose, a pattern of consumption that endured even after the extra income had been spent, within a generation or so after the magna pestilencia.

Like the manorial lord, the affluent urban bourgeois sometimes employed structural impediments to block the ambitious parvenu from joining his ranks and becoming a competitor. A tendency toward limiting the status of gild master to the son or son-in-law of a sitting master, evident in the first half of the fourteenth century, gained further impetus after the Black Death. The need for more journeymen after the plague was conceded in the shortening of terms of apprenticeship, but the newly minted journeyman often discovered that his chance of breaking through the glass ceiling and becoming a master was virtually nil without an entrée through kinship. Women, too, were banished from gilds as unwanted competition. The urban wage laborer, by and large controlled by the gilds, was denied membership and had no access to urban structures of power, a potent source of frustration. While these measures may have permitted the bourgeois to hold his ground for a time, the winds of change were blowing in the city as well as the countryside, and gild monopolies and restrictions were fraying by the close of the Middle Ages.

In the new climate created by the Black Death, the individual businessman did retain an advantage: the business judgment and techniques honed during the high Middle Ages. This was crucial in a contracting economy in which gross productivity never attained its high medieval peak and in which the prevailing pattern was boom and bust on a roughly generational basis. A fluctuating economy demanded adaptability, and the most successful post-plague businessman not merely weathered bad times but also located opportunities within adversity and exploited them. The post-plague entrepreneur's preference for short-term rather than long-term ventures, once believed a product of a gloomy despondency caused by the plague and exacerbated by endemic violence, decay of traditional institutions, and nearly continuous warfare, is now viewed as a judicious desire to leave open entrepreneurial options, to manage risk effectively, and to take advantage of whatever better opportunity arose. The successful post-plague businessman observed markets closely and responded to them while exercising strict control over his concern, looking for greater efficiency, and trimming costs (Hunt and Murray, 1999).

The fortunes of the textile industry, a trade singularly susceptible to contracting markets and rising wages, best illustrate the importance of flexibility. Competition among textile manufacturers, already great even before the Black Death due to excess productive capacity, was magnified when England entered the market for low- and medium-quality woolen cloth after the magna pestilencia and was exporting forty thousand pieces annually by 1400. The English took advantage of proximity to raw material, the wool England itself produced, a pattern increasingly common in late medieval business. When English producers were undeterred by a Flemish embargo on English cloth, the Flemish and the Italians, the textile trade's other principal players, were compelled to adapt in order to compete. Flemish producers that emphasized higher-grade, luxury textiles or that purchased, improved, and resold cheaper English cloth prospered, while those that stubbornly competed head-to-head with the English in lower-quality woolens suffered. The Italians not only produced luxury woolens, improved their domestically produced wool, found sources of wool outside England (Spain), and increased production of linen but also manufactured silks and cottons, goods once only imported into Europe from the East (Hunt and Murray, 1999).

The new mentality of the successful post-plague businessman is exemplified by the Florentines Gregorio Dati and Buonaccorso Pitti and especially by the celebrated merchant of Prato, Francesco di Marco Datini. The large companies and super companies, some of which had failed even before the Black Death, were not well suited to the post-plague commercial economy. Datini's family business, with its limited geographical ambitions, exercised better control, was more nimble and flexible as opportunities vanished or materialized, and managed risk more effectively, all keys to success. Through voluminous correspondence with his business associates, subordinates, and agents and through conspicuously careful and regular accounting, Datini grasped the reins of his concern tightly. He insulated himself from undue risk by never committing too heavily to any individual venture, by dividing cargoes among ships or by insuring them, by never lending money to notoriously uncreditworthy princes, and by remaining as apolitical as he could. His energy and drive to complete every business venture likewise served him well and made him an exemplar of commercial success in a challenging era (Origo, 1957; Hunt and Murray, 1999).

The Black Death and Popular Rebellion

The late medieval popular uprising, a phenomenon with undeniable economic ramifications, is often linked with the demographic, cultural, social, and economic reshuffling caused by the Black Death; however, the connection between pestilence and revolt is neither exclusive nor linear. Any single uprising is rarely susceptible to a single-cause analysis, and just as rarely was a single socioeconomic interest group the fomenter of disorder. The outbreak of rebellion in the first half of the fourteenth century (e.g., in urban [1302] and maritime [1325-28] Flanders and in English monastic towns [1326-27]) indicates the existence of socioeconomic and political disgruntlement well before the Black Death.

Some explanations for popular uprising, such as the placing of immediate stresses on the populace and the cumulative effect of centuries of oppression by manorial lords, are now largely dismissed. At the times of greatest stress, the Great Famine and the Black Death, disorder broke out but no large-scale, organized uprising materialized. Manorial oppression likewise is difficult to defend when the peasant in the plague's aftermath was often enjoying better pay, reduced dues and services, broader opportunities, and a higher standard of living. Detailed study of the participants in the revolts most often labeled "peasant" uprisings has revealed the central involvement and apparent common cause of urban and rural tradesmen and craftsmen, not only manorial serfs.

The Black Death may indeed have made its greatest contribution to popular rebellion by expanding the peasant's horizons and fueling a sense of grievance at the pace of change, not at its absence. The plague may also have undercut adherence to the notion of a divinely sanctioned, static social order and buffeted the belief that preservation of manorial socioeconomic arrangements was essential to the survival of all, which in turn may have raised receptiveness to the apocalyptic, socially revolutionary message of preachers like England's John Ball. After the Black Death, change was inevitable and apparent to all.

The reasons for any individual rebellion were complex. Measures in the environs of Paris to check the wage hikes caused by the plague doubtless fanned discontent and contributed to the outbreak of the Jacquerie of 1358, but high taxation to finance the Hundred Years' War, depredation by marauding mercenary bands in the French countryside, and the peasantry's conviction that the nobility had failed them in war also roiled popular discontent. In the related urban revolt led by Étienne Marcel (1355-58), tensions arose from the Parisian bourgeoisie's discontent with the war's progress, the crown's imposition of regressive sales and head taxes, and the devaluation of currency rather than from change attributable to the Black Death.

In the English Peasants’ Rebellion of 1381, continued enforcement of the Statute of Laborers no doubt rankled and perhaps made the peasantry more open to provocative sermonizing, but labor legislation had not halted higher wages or improvement in the peasant's standard of living. Discontent more likely arose from an unsatisfying pace of improvement in the peasant's lot. The regressive Poll Taxes of 1380 and 1381 also contributed to the discontent. It is furthermore noteworthy that the rebellion began in relatively affluent eastern England, not in the poorer west or north.

In the Ciompi revolt in Florence (1378-83), restrictive gild regulations and the denial of political voice to workers raised tensions in the wake of the Black Death; however, Florence's war with the papacy and an economic slump in the 1370s, which resulted in devaluation of the penny in which the worker was paid, were equally if not more important in fomenting unrest. The rebellion in fact subsided once the value of the penny was restored to its former level in 1383.

In sum, the Black Death played some role in each uprising but, as with many medieval phenomena, it is difficult to gauge its importance relative to other causes. Perhaps the plague's greatest contribution to unrest lay in its fostering of a shrinking economy that for a time was less able to absorb socioeconomic tensions than the growing high medieval economy had been. The rebellions in any event achieved little. Promises made to the rebels were invariably broken and brutal reprisals often followed. The lot of the lower socioeconomic strata was improved incrementally by the larger economic changes already at work. Viewed from this perspective, the Black Death may have had more influence in resolving the worker's grievances than in spurring revolt.

Conclusion

The European economy at the close of the Middle Ages (c. 1500) differed fundamentally from the pre-plague economy. In the countryside, a freer peasant derived greater material benefit from his toil. Fixed rents, if not outright ownership of land, had largely displaced customary dues and services and, despite low grain prices, the peasant more readily fed himself and his family from his own land and produced a surplus for the market. Yields improved as reduced population permitted a greater focus on fertile lands and more frequent fallowing, a beneficial phenomenon for the peasant. More pronounced socioeconomic gradations developed among peasants as some, particularly the more prosperous, exploited the changed circumstances, especially the availability of land. The peasant's gain was the lord's loss. As the Middle Ages waned, the lord was commonly a pure rentier whose income was subject to the depredations of inflation.

In trade and manufacturing, the relative ease of success during the high Middle Ages gave way to greater competition, which rewarded better business practices and leaner, meaner, and more efficient concerns. Greater sensitivity to the market and the cutting of costs ultimately rewarded the European consumer with a wider range of goods at better prices.

In the long term, the demographic restructuring caused by the Black Death perhaps fostered the possibility of new economic growth. The pestilence returned Europe's population roughly to its level c. 1100. As one scholar notes, the Black Death, unlike other catastrophes, destroyed people but not property, and the attenuated population was left with the whole of Europe's resources to exploit, resources far more substantial by 1347 than they had been two and a half centuries earlier, when they had been created from the ground up. In this environment, survivors also benefited from the technological and commercial skills developed during the course of the high Middle Ages. Viewed from another perspective, the Black Death was a cataclysmic event and retrenchment was inevitable, but it ultimately diminished economic impediments and opened new opportunity.

References and Further Reading:

Aberth, John. “The Black Death in the Diocese of Ely: The Evidence of the Bishop’s Register.” Journal of Medieval History 21 (1995): 275—87.

Aberth, John. From the Brink of the Apocalypse: Confronting Famine, War, Plague, and Death in the Later Middle Ages. New York: Routledge, 2001.

Aberth, John. The Black Death: The Great Mortality of 1348-1350: A Brief History with Documents. Boston and New York: Bedford/St. Martin’s, 2005.

Aston, T. H. and C. H. E. Philpin, eds. The Brenner Debate: Agrarian Class Structure and Economic Development in Pre—Industrial Europe. Cambridge: Cambridge University Press, 1985.

Bailey, Mark D. “Demographic Decline in Late Medieval England: Some Thoughts on Recent Research.” Economic History Review 49 (1996): 1—19.

Bailey, Mark D. A Marginal Economy? East Anglian Breckland in the Later Middle Ages. Cambridge: Cambridge University Press, 1989.

Benedictow, Ole J. The Black Death, 1346—1353: The Complete History. Woodbridge, Suffolk: Boydell Press, 2004.

Bleukx, Koenraad. “Was the Black Death (1348—49) a Real Plague Epidemic? England as a Case Study.” In Serta Devota in Memoriam Guillelmi Lourdaux. Pars Posterior: Cultura Medievalis, edited by W. Verbeke, M. Haverals, R. de Keyser, and J. Goossens, 64—113. Leuven: Leuven University Press, 1995.

Blockmans, Willem P. “The Social and Economic Effects of Plague in the Low Countries, 1349—1500.” Revue Belge de Philologie et d’Histoire 58 (1980): 833—63.

Bolton, Jim L. “‘The World Upside Down’: Plague as an Agent of Economic and Social Change.” In The Black Death in England, edited by M. Ormrod and P. Lindley. Stamford: Paul Watkins, 1996.

Bowsky, William M. “The Impact of the Black Death upon Sienese Government and Society.” Speculum 38 (1964): 1—34.

Campbell, Bruce M. S. “Agricultural Progress in Medieval England: Some Evidence from Eastern Norfolk.” Economic History Review 36 (1983): 26—46.

Campbell, Bruce M. S., ed. Before the Black Death: Studies in the ‘Crisis’ of the Early Fourteenth Century. Manchester: Manchester University Press, 1991.

Cipolla, Carlo M. Before the Industrial Revolution: European Society and Economy, 1000—1700, Third edition. New York: Norton, 1994.

Cohn, Samuel K. The Black Death Transformed: Disease and Culture in Early Renaissance Europe. London: Edward Arnold, 2002.

Cohn, Samuel K. “After the Black Death: Labour Legislation and Attitudes toward Labour in Late-Medieval Western Europe.” Economic History Review 60 (2007): 457-85.

Davis, David E. “The Scarcity of Rats and the Black Death.” Journal of Interdisciplinary History 16 (1986): 455—70.

Davis, R. A. “The Effect of the Black Death on the Parish Priests of the Medieval Diocese of Coventry and Lichfield.” Bulletin of the Institute of Historical Research 62 (1989): 85—90.

Drancourt, Michel, Gerard Aboudharam, Michel Signoli, Olivier Detour, and Didier Raoult. “Detection of 400-Year-Old Yersinia Pestis DNA in Human Dental Pulp: An Approach to the Diagnosis of Ancient Septicemia.” Proceedings of the National Academy of Sciences of the United States of America 95 (1998): 12637-40.

Dyer, Christopher. Standards of Living in the Middle Ages: Social Change in England, c. 1200—1520. Cambridge: Cambridge University Press, 1989.

Emery, Richard W. “The Black Death of 1348 in Perpignan.” Speculum 42 (1967): 611—23.

Farmer, David L. “Prices and Wages.” In The Agrarian History of England and Wales, Vol. II, edited by H. E. Hallam, 715-817. Cambridge: Cambridge University Press, 1988.

Farmer, D. L. “Prices and Wages, 1350-1500.” In The Agrarian History of England and Wales, Vol. III, edited by E. Miller, 431-94. Cambridge: Cambridge University Press, 1991.

Flinn, Michael W. “Plague in Europe and the Mediterranean Countries.” Journal of European Economic History 8 (1979): 131—48.

Freedman, Paul. The Origins of Peasant Servitude in Medieval Catalonia. New York: Cambridge University Press, 1991.

Gottfried, Robert. The Black Death: Natural and Human Disaster in Medieval Europe. New York: Free Press, 1983.

Gyug, Richard. “The Effects and Extent of the Black Death of 1348: New Evidence for Clerical Mortality in Barcelona.” Mediæval Studies 45 (1983): 385—98.

Harvey, Barbara F. “The Population Trend in England between 1300 and 1348.” Transactions of the Royal Historical Society 4th ser. 16 (1966): 23—42.

Harvey, P. D. A. A Medieval Oxfordshire Village: Cuxham, 1240—1400. London: Oxford University Press, 1965.

Hatcher, John. “England in the Aftermath of the Black Death.” Past and Present 144 (1994): 3—35.

Hatcher, John and Mark Bailey. Modelling the Middle Ages: The History and Theory of England’s Economic Development. Oxford: Oxford University Press, 2001.

Hatcher, John. Plague, Population, and the English Economy 1348—1530. London and Basingstoke: MacMillan Press Ltd., 1977.

Herlihy, David. The Black Death and the Transformation of the West, edited by S. K. Cohn. Cambridge and London: Cambridge University Press, 1997.

Horrox, Rosemary, transl. and ed. The Black Death. Manchester: Manchester University Press, 1994.

Hunt, Edwin S. and James M. Murray. A History of Business in Medieval Europe, 1200-1550. Cambridge: Cambridge University Press, 1999.

Jordan, William C. The Great Famine: Northern Europe in the Early Fourteenth Century. Princeton: Princeton University Press, 1996.

Lehfeldt, Elizabeth, ed. The Black Death. Boston: Houghton and Mifflin, 2005.

Lerner, Robert E. The Age of Adversity: The Fourteenth Century. Ithaca: Cornell University Press, 1968.

Le Roy Ladurie, Emmanuel. The Peasants of Languedoc, transl. J. Day. Urbana: University of Illinois Press, 1976.

Lomas, Richard A. “The Black Death in County Durham.” Journal of Medieval History 15 (1989): 127—40.

McNeill, William H. Plagues and Peoples. Garden City, New York: Anchor Books, 1976.

Miskimin, Harry A. The Economy of the Early Renaissance, 1300—1460. Cambridge: Cambridge University Press, 1975.

Morris, Christopher. “The Plague in Britain.” Historical Journal 14 (1971): 205-15.

Munro, John H. “The Symbiosis of Towns and Textiles: Urban Institutions and the Changing Fortunes of Cloth Manufacturing in the Low Countries and England, 1270—1570.” Journal of Early Modern History 3 (1999): 1—74.

Munro, John H. “Wage—Stickiness, Monetary Changes, and the Real Incomes in Late—Medieval England and the Low Countries, 1300—1500: Did Money Matter?” Research in Economic History 21 (2003): 185—297.

Origo, Iris. The Merchant of Prato: Francesco di Marco Datini, 1335-1410. Boston: David R. Godine, 1957, 1986.

Platt, Colin. King Death: The Black Death and its Aftermath in Late—Medieval England. Toronto: University of Toronto Press, 1996.

Poos, Lawrence R. A Rural Society after the Black Death: Essex 1350—1575. Cambridge: Cambridge University Press, 1991.

Postan, Michael M. The Medieval Economy and Society: An Economic History of Britain in the Middle Ages. Harmondsworth, Middlesex: Penguin, 1975.

Pounds, Norman J. D. An Economic History of Europe. London: Longman, 1974.

Raoult, Didier, Gerard Aboudharam, Eric Crubézy, Georges Larrouy, Bertrand Ludes, and Michel Drancourt. “Molecular Identification by ‘Suicide PCR’ of Yersinia Pestis as the Agent of Medieval Black Death.” Proceedings of the National Academy of Sciences of the United States of America 97 (7 Nov. 2000): 12800—3.

Razi, Zvi. “Family, Land, and the Village Community in Later Medieval England.” Past and Present 93 (1981): 3-36.

Russell, Josiah C. British Medieval Population. Albuquerque: University of New Mexico Press, 1948.

Scott, Susan and Christopher J. Duncan. Return of the Black Death: The World’s Deadliest Serial Killer. Chichester, West Sussex and Hoboken, NJ: Wiley, 2004.

Shrewsbury, John F. D. A History of Bubonic Plague in the British Isles. Cambridge: Cambridge University Press, 1970.

Twigg, Graham. The Black Death: A Biological Reappraisal. London: Batsford Academic and Educational, 1984.

Waugh, Scott L. England in the Reign of Edward III. Cambridge: Cambridge University Press, 1991.

Ziegler, Philip. The Black Death. London: Penguin, 1969, 1987.

Citation: Routt, David. “The Economic Impact of the Black Death”. EH.Net Encyclopedia, edited by Robert Whaples. July 20, 2008. URL http://eh.net/encyclopedia/the-economic-impact-of-the-black-death/

US Banking History, Civil War to World War II

Richard S. Grossman, Wesleyan University

The National Banking Era Begins, 1863

The National Banking Acts of 1863 and 1864

The National Banking era was ushered in by the passage of the National Currency (later renamed the National Banking) Acts of 1863 and 1864. The Acts marked a decisive change in the monetary system, confirmed a quarter-century-old trend in bank chartering arrangements, and also played a role in financing the Civil War.

Provision of a Uniform National Currency

As its original title suggests, one of the main objectives of the legislation was to provide a uniform national currency. Prior to the establishment of the national banking system, the national currency supply consisted of a confusing patchwork of bank notes issued under a variety of rules by banks chartered under different state laws. Notes of sound banks circulated side-by-side with notes of banks in financial trouble, as well as those of banks that had failed (not to mention forgeries). In fact, bank notes frequently traded at a discount, so that a one-dollar note of a smaller, less well-known bank (or, for that matter, of a bank at some distance) would likely have been valued at less than one dollar by someone receiving it in a transaction. The confusion was such as to lead to the publication of magazines that specialized in printing pictures, descriptions, and prices of various bank notes, along with information on whether or not the issuing bank was still in existence.

Under the legislation, newly created national banks were empowered to issue national bank notes backed by a deposit of US Treasury securities with their chartering agency, the Department of the Treasury’s Comptroller of the Currency. The legislation also placed a tax on notes issued by state banks, effectively driving them out of circulation. Bank notes were of uniform design and, in fact, were printed by the government. The amount of bank notes a national bank was allowed to issue depended upon the bank’s capital (which was also regulated by the act) and the amount of bonds it deposited with the Comptroller. The relationship between bank capital, bonds held, and note issue was changed by laws in 1874, 1882, and 1900 (Cagan 1963, James 1976, and Krooss 1969).

Federal Chartering of Banks

A second element of the Act was the introduction of bank charters issued by the federal government. From the earliest days of the Republic, banking had been considered primarily the province of state governments.[1] Originally, individuals who wished to obtain banking charters had to approach the state legislature, which then decided if the applicant was of sufficient moral standing to warrant a charter and if the region in question needed an additional bank. These decisions may well have been influenced by bribes and political pressure, both from the prospective banker and from established bankers who may have hoped to block the entry of new competitors.

An important shift in state banking practice had begun with the introduction of free banking laws in the 1830s. Beginning with laws passed in Michigan (1837) and New York (1838), free banking laws changed the way banks obtained charters. Rather than apply to the state legislature and receive a decision on a case-by-case basis, individuals could obtain a charter by filling out some paperwork and depositing a prescribed amount of specified bonds with the state authorities. By 1860, over one half of the states had enacted some type of free banking law (Rockoff 1975). By regularizing and removing legislative discretion from chartering decisions, the National Banking Acts spread free banking on a national level.

Financing the Civil War

A third important element of the National Banking Acts was that they helped the Union government pay for the war. Adopted in the midst of the Civil War, the requirement for banks to deposit US bonds with the Comptroller maintained the demand for Union securities and helped finance the war effort.[2]

Development and Competition with State Banks

The National Banking system grew rapidly at first (Table 1). Much of the increase came at the expense of the state-chartered banking systems, which contracted over the same period, largely because they were no longer able to issue notes. The expansion of the new system did not lead to the extinction of the old: the growth of deposit-taking, combined with less stringent capital requirements, convinced many state bankers that they could do without either the ability to issue banknotes or a federal charter, and led to a resurgence of state banking in the 1880s and 1890s. Under the original acts, the minimum capital requirement for national banks was $50,000 for banks in towns with a population of 6000 or less, $100,000 for banks in cities with a population ranging from 6000 to 50,000, and $200,000 for banks in cities with populations exceeding 50,000. By contrast, the minimum capital requirement for a state bank was often as low as $10,000. The difference in capital requirements may have been an important factor in the resurgence of state banking: in 1877 only about one-fifth of state banks had a capital of less than $50,000; by 1899 the proportion was over three-fifths. Recognizing this competition, the Gold Standard Act of 1900 reduced the minimum capital necessary for national banks. It is questionable whether regulatory competition (both between states and between states and the federal government) kept regulators on their toes or encouraged a “race to the bottom,” that is, lower and looser standards.
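The tiered minimum capital requirements just described amount to a simple schedule. As an illustrative sketch (the function name is ours, and the 1900 reduction is not modeled):

```python
def national_bank_min_capital(population: int) -> int:
    """Minimum capital (dollars) for a national bank under the
    original National Banking Acts, by town or city population."""
    if population <= 6000:
        return 50_000       # towns of 6000 or less
    elif population <= 50_000:
        return 100_000      # cities of 6000 to 50,000
    else:
        return 200_000      # cities over 50,000

# A state charter, by contrast, could require as little as $10,000.
print(national_bank_min_capital(4_500))   # 50000
print(national_bank_min_capital(75_000))  # 200000
```

The gap between the $10,000 a state might demand and the $50,000 national floor for even the smallest town is the margin that, as noted above, helped drive the resurgence of state banking.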

Table 1: Numbers and Assets of National and State Banks, 1863-1913

Year    National Banks (number)    State Banks (number)    National Banks (assets, $millions)    State Banks (assets, $millions)
1863 66 1466 16.8 1185.4
1864 467 1089 252.2 725.9
1865 1294 349 1126.5 165.8
1866 1634 297 1476.3 154.8
1867 1636 272 1494.5 151.9
1868 1640 247 1572.1 154.6
1869 1619 259 1564.1 156.0
1870 1612 325 1565.7 201.5
1871 1723 452 1703.4 259.6
1872 1853 566 1770.8 264.5
1873 1968 277 1851.2 178.9
1874 1983 368 1851.8 237.4
1875 2076 586 1913.2 395.2
1876 2091 671 1825.7 405.9
1877 2078 631 1774.3 506.9
1878 2056 510 1770.4 388.8
1879 2048 648 2019.8 427.6
1880 2076 650 2035.4 481.8
1881 2115 683 2325.8 575.5
1882 2239 704 2344.3 633.8
1883 2417 788 2364.8 724.5
1884 2625 852 2282.5 760.9
1885 2689 1015 2421.8 802.0
1886 2809 891 2474.5 807.0
1887 3014 1471 2636.2 1003.0
1888 3120 1523 2731.4 1055.0
1889 3239 1791 2937.9 1237.3
1890 3484 2250 3061.7 1374.6
1891 3652 2743 3113.4 1442.0
1892 3759 3359 3493.7 1640.0
1893 3807 3807 3213.2 1857.0
1894 3770 3810 3422.0 1782.0
1895 3715 4016 3470.5 1954.0
1896 3689 3968 3353.7 1962.0
1897 3610 4108 3563.4 1981.0
1898 3582 4211 3977.6 2298.0
1899 3583 4451 4708.8 2707.0
1900 3732 4659 4944.1 3090.0
1901 4165 5317 5675.9 3776.0
1902 4535 5814 6008.7 4292.0
1903 4939 6493 6286.9 4790.0
1904 5331 7508 6655.9 5244.0
1905 5668 8477 7327.8 6056.0
1906 6053 9604 7784.2 6636.0
1907 6429 10761 8476.5 7190.0
1908 6824 12062 8714.0 6898.0
1909 6926 12398 9471.7 7407.0
1910 7145 13257 9896.6 7911.0
1911 7277 14115 10383 8412.0
1912 7372 14791 10861.7 9005.0
1913 7473 15526 11036.9 9267.0

Source: U.S. Department of the Treasury. Annual Report of the Comptroller of the Currency (1931), pp. 3, 5. State bank columns include data on state-chartered commercial banks and loan and trust companies.

Capital Requirements and Interest Rates

The relatively high minimum capital requirement for national banks may have contributed to regional interest rate differentials in the post-Civil War era. The period from the Civil War through World War I saw a substantial decline in interregional interest rate differentials. According to Lance Davis (1965), the decline in differences between regional interest rates can be explained by the development and spread of the commercial paper market, which increased the interregional mobility of funds. Richard Sylla (1969) argues that the high minimum capital requirements established by the National Banking Acts represented barriers to entry and therefore led to local monopolies by note-issuing national banks. These local monopolies in capital-short regions led to the persistence of interest rate spreads.[3] (See also James 1976b.)

Bank Failures

Financial crises were a common occurrence in the National Banking era. O.M.W. Sprague (1910) classified the main financial crises during the era as occurring in 1873, 1884, 1890, 1893, and 1907, with those of 1873, 1893, and 1907 being regarded as full-fledged crises and those of 1884 and 1890 as less severe.

Contemporary observers complained of both the persistence and the ill effects of bank failures under the new system.[4] The numbers and assets of failed national and non-national banks during the National Banking era are shown in Table 2. Suspensions (temporary closures of banks unable to meet demand for their liabilities) were even more numerous during this period.

Table 2: Bank Failures, 1865-1913

Year    National Banks (number failed)    Other Banks (number failed)    National Banks (assets, $millions)    Other Banks (assets, $millions)
1865 1 5 0.1 0.2
1866 2 5 1.8 1.2
1867 7 3 4.9 0.2
1868 3 7 0.5 0.2
1869 2 6 0.7 0.1
1870 0 1 0.0 0.0
1871 0 7 0.0 2.3
1872 6 10 5.2 2.1
1873 11 33 8.8 4.6
1874 3 40 0.6 4.1
1875 5 14 3.2 9.2
1876 9 37 2.2 7.3
1877 10 63 7.3 13.1
1878 14 70 6.9 26.0
1879 8 20 2.6 5.1
1880 3 10 1.0 1.6
1881 0 9 0.0 0.6
1882 3 19 6.0 2.8
1883 2 27 0.9 2.8
1884 11 54 7.9 12.9
1885 4 32 4.7 3.0
1886 8 13 1.6 1.3
1887 8 19 6.9 2.9
1888 8 17 6.9 2.8
1889 8 15 0.8 1.3
1890 9 30 2.0 10.7
1891 25 44 9.0 7.2
1892 17 27 15.1 2.7
1893 65 261 27.6 54.8
1894 21 71 7.4 8.0
1895 36 115 12.1 11.3
1896 27 78 12.0 10.2
1897 38 122 29.1 17.9
1898 7 53 4.6 4.5
1899 12 26 2.3 7.8
1900 6 32 11.6 7.7
1901 11 56 8.1 6.4
1902 2 43 0.5 7.3
1903 12 26 6.8 2.2
1904 20 102 7.7 24.3
1905 22 57 13.7 7.0
1906 8 37 2.2 6.6
1907 7 34 5.4 13.0
1908 24 132 30.8 177.1
1909 9 60 3.4 15.8
1910 6 28 2.6 14.5
1911 3 56 1.1 14.0
1912 8 55 5.0 7.8
1913 6 40 7.6 6.2

Source: U.S. Department of the Treasury. Annual Report of the Comptroller of the Currency (1931), pp. 6, 8.

The largest number of failures occurred in the years following the financial crisis of 1893. The number and assets of national and non-national bank failures remained high for four years following the crisis, a period which coincided with the free silver agitation of the mid-1890s, before returning to pre-1893 levels. Other crises were also accompanied by an increase in the number and assets of bank failures. The earliest peak during the national banking era accompanied the onset of the crisis of 1873. Failures subsequently fell, but rose again in the trough of the depression that followed the 1873 crisis. The panic of 1884 saw a slight increase in failures, while the financial stringency of 1890 was followed by a more substantial increase. Failures peaked again following several minor panics around the turn of the century and again at the time of the crisis of 1907.

Among the alleged causes of crises during the national banking era were the insufficient elasticity of the money supply, which could not accommodate seasonal and other stresses on the money market, and the pyramiding of reserves. That is, under the National Banking Acts, a portion of banks’ required reserves could be held in national banks in larger cities (“reserve city banks”). Reserve city banks could, in turn, hold a portion of their required reserves in “central reserve city banks,” national banks in New York, Chicago, and St. Louis. In practice, this led to the build-up of reserve balances in New York City. Increased demands for funds in the interior of the country during the autumn harvest season led to substantial outflows of funds from New York, which contributed to tight money market conditions and, sometimes, to panics (Miron 1986).[5]
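The pyramiding mechanism can be sketched numerically. In this illustrative example (the 60 percent share held upstream is ours, not the statutory ratio), each tier keeps part of its reserve as a balance with the tier above, so a harvest-season withdrawal in the interior propagates up to New York:

```python
def drain_on_new_york(withdrawal: float,
                      share_held_upstream: float = 0.6) -> float:
    """Funds pulled out of New York when country banks face a
    cash withdrawal, under illustrative pyramiding assumptions."""
    # The country bank replenishes cash by drawing down its
    # balance at a reserve city bank...
    drain_on_reserve_city = withdrawal * share_held_upstream
    # ...which in turn draws down its balance in New York.
    return drain_on_reserve_city * share_held_upstream

# Under these assumptions, a $1,000,000 autumn withdrawal in the
# interior pulls $360,000 out of New York balances.
print(drain_on_new_york(1_000_000))  # 360000.0
```

The point of the sketch is only the direction of the flow: because reserves stacked up in New York, seasonal demands anywhere in the country converged on the same money market.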

Attempted Remedies for Banking Crises

Causes of Bank Failures

Bank failures occur when banks are unable to meet the demands of their creditors (in earlier times these were note holders; later on, they were more often depositors). Banks typically do not hold 100 percent of their liabilities in reserves, instead holding some fraction of demandable liabilities in reserve: as long as the flows of funds into and out of the bank are more or less in balance, the bank is in little danger of failing. A withdrawal of deposits that exceeds the bank’s reserves, however, can lead to the bank’s temporary suspension (inability to pay) or, if protracted, failure. The surge in withdrawals can have a variety of causes, including depositor concern about the bank’s solvency (ability to pay depositors), as well as worries about other banks’ solvency that lead to a general distrust of all banks.[6]
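The fractional-reserve logic above reduces to a one-line check. A minimal sketch (the figures are hypothetical):

```python
def can_meet_withdrawals(liabilities: float, reserve_ratio: float,
                         withdrawals: float) -> bool:
    """A bank holding only a fraction of its demandable liabilities
    in reserve fails (or suspends) when withdrawals exceed reserves."""
    reserves = liabilities * reserve_ratio
    return withdrawals <= reserves

# Ordinary flows are covered; a panic-driven run is not.
print(can_meet_withdrawals(1_000_000, 0.20, 150_000))  # True
print(can_meet_withdrawals(1_000_000, 0.20, 400_000))  # False
```

The same arithmetic explains why a run can topple a solvent bank: nothing in the check depends on whether the bank's assets ultimately cover its liabilities, only on whether cash reserves cover today's withdrawals.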

Clearinghouses

Bankers and policy makers attempted a number of different responses to banking panics during the National Banking era. One method of dealing with panics was for the bankers of a city to pool their resources through the local bankers’ clearinghouse and to jointly guarantee the payment of every member bank’s liabilities (see Gorton 1985a, b).

Deposit Insurance

Another method of coping with panics was deposit insurance. Eight states (Oklahoma, Kansas, Nebraska, Texas, Mississippi, South Dakota, North Dakota, and Washington) adopted deposit insurance systems between 1908 and 1917 (six other states had adopted some form of deposit insurance in the nineteenth century: New York, Vermont, Indiana, Michigan, Ohio, and Iowa). These systems were not particularly successful, in part because they lacked diversification: because they operated statewide, when a panic fell full force on a state, the deposit insurance system did not have adequate resources to handle each and every failure. When the agricultural depression of the 1920s hit, a number of these systems failed (Federal Deposit Insurance Corporation 1998).

Double Liability

Another measure adopted to curtail bank risk-taking, and through risk-taking, bank failures, was double liability (Grossman 2001). Under double liability, shareholders who had invested in banks that failed were liable to lose not only the money they had invested, but could be called on by a bank’s receiver to contribute an additional amount equal to the par value of the shares (hence the term “double liability,” although clearly the loss to the shareholder need not have been double if the par and market values of shares were different). Other states instituted triple liability, where the receiver could call on twice the par value of shares owned. Still others had unlimited liability, while others had single, or regular limited, liability.[7] It was argued that banks with double liability would be more risk averse, since shareholders would be liable for a greater payment if the firm went bankrupt.
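The arithmetic of shareholder exposure under the various liability regimes can be made concrete with a short sketch. The share counts and prices below are hypothetical; the function simply encodes the rule described above, where the receiver’s assessment is capped at the par value of the shares (or twice par under triple liability).

```python
def shareholder_exposure(shares, par_value, purchase_price, liability="double"):
    """Maximum loss to a shareholder of a failed bank.

    On failure the shareholder's investment (shares * purchase_price)
    is lost outright; under multiple liability the receiver could
    additionally assess up to par (double) or twice par (triple).
    Hypothetical illustration of the rule described in the text.
    """
    invested = shares * purchase_price
    if liability == "single":
        assessment = 0                       # ordinary limited liability
    elif liability == "double":
        assessment = shares * par_value      # up to par, once more
    elif liability == "triple":
        assessment = 2 * shares * par_value  # up to twice par
    else:
        raise ValueError(f"unknown liability regime: {liability}")
    return invested + assessment

# 100 shares of $100 par stock bought at a $120 market price:
# under double liability the holder risks $12,000 + $10,000 = $22,000.
loss = shareholder_exposure(100, 100, 120, "double")
```

Note how the total is not literally “double” the investment whenever market and par values differ, which is the qualification made in the text.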

By 1870, multiple (i.e., double, triple, and unlimited) liability was already the rule for state banks in eighteen states, principally in the Midwest, New England, and Middle Atlantic regions, as well as for national banks. By 1900, multiple liability was the law for state banks in thirty-two states. By this time, the main pockets of single liability were in the south and west. By 1930, only four states had single liability.

Double liability appears to have been successful (Grossman 2001), at least during less-than-turbulent times. During the 1890-1930 period, state banks in states where banks were subject to double (or triple, or unlimited) liability typically undertook less risk than their counterparts in single (limited) liability states in normal years. However, in years in which bank failures were quite high, banks in multiple liability states appeared to take more risk than their limited liability counterparts. This may have resulted from the fact that legislators in more crisis-prone states were more likely to have already adopted double liability. Whatever its advantages or disadvantages, the Great Depression spelled the end of double liability: by 1941, virtually every state had repealed double liability for state-chartered banks.

The Crisis of 1907 and Founding of the Federal Reserve

The crisis of 1907, which had been brought under control by a coalition of trust companies and other chartered banks and clearing-house members led by J.P. Morgan, led to a reconsideration of the monetary system of the United States. Congress set up the National Monetary Commission (1908-12), which undertook a massive study of the history of banking and monetary arrangements in the United States and in other economically advanced countries.[8]

The eventual result of this investigation was the Federal Reserve Act (1913), which established the Federal Reserve System as the central bank of the US. Unlike other countries that had one central bank (e.g., Bank of England, Bank of France), the Federal Reserve Act provided for a system of between eight and twelve reserve banks (twelve were eventually established under the act, although during debate over the act, some had called for as many as one reserve bank per state). This provision, like the rejection of the first two attempts at a central bank, resulted, in part, from Americans’ antipathy towards centralized monetary authority. The Federal Reserve was established to manage the monetary affairs of the country, to hold the reserves of banks and to regulate the money supply. At the time of its founding, each of the reserve banks had a high degree of independence. As a result of the crises surrounding the Great Depression, Congress passed the Banking Act of 1935, which, among other things, centralized Federal Reserve power (including the power to engage in open market operations) in a Washington-based Board of Governors (and Federal Open Market Committee), relegating the heads of the individual reserve banks to a more consultative role in the operation of monetary policy.

The Goal of an “Elastic Currency”

The stated goals of the Federal Reserve Act were: “. . . to furnish an elastic currency, to furnish the means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes.” Furnishing an “elastic currency” was an important goal of the act, since none of the components of the money supply (gold and silver certificates, national bank notes) were able to expand or contract particularly rapidly. The inelasticity of the money supply, along with the seasonal fluctuations in money demand, had led to a number of the panics of the National Banking era. These panic-inducing seasonal fluctuations resulted from the large flows of money out of New York and other money centers to the interior of the country to pay for the newly harvested crops. If monetary conditions were already tight before the drain of funds to the nation’s interior, the autumnal movement of funds could — and did — precipitate panics.[9]

Growth of the Bankers’ Acceptance Market

The act also fostered the growth of the bankers’ acceptance market. Bankers’ acceptances were essentially short-dated IOUs, issued by banks on behalf of clients that were importing (or otherwise purchasing) goods. These acceptances were sent to the seller who could hold on to them until they matured, and receive the face value of the acceptance, or could discount them, that is, receive the face value minus interest charges. By allowing the Federal Reserve to rediscount commercial paper, the act facilitated the growth of this short-term money market (Warburg 1930, Broz 1997, and Federal Reserve Bank of New York 1998). In the 1920s, the various Federal Reserve banks began making large-scale purchases of US Treasury obligations, marking the beginnings of Federal Reserve open market operations.[10]
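The discount mechanics of an acceptance are simple arithmetic. The figures below (face value, rate, days to maturity) are hypothetical, and the 360-day year is the customary money-market convention for discount instruments; the function just encodes “face value minus interest charges” as described above.

```python
def discounted_proceeds(face_value, annual_discount_rate, days_to_maturity):
    """Proceeds from selling a bankers' acceptance before maturity.

    Uses the customary 360-day money-market basis; inputs are
    hypothetical and for illustration only.
    """
    discount = face_value * annual_discount_rate * days_to_maturity / 360
    return face_value - discount

# A $10,000 acceptance, 90 days from maturity, discounted at 4% per year:
# the seller receives $10,000 - $100 = $9,900 now instead of $10,000 later.
proceeds = discounted_proceeds(10_000, 0.04, 90)
```

The holder's choice is thus between waiting for the full face value at maturity and taking slightly less cash immediately, which is what made acceptances liquid short-term paper.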

The Federal Reserve and State Banking

The establishment of the Federal Reserve did not end the competition between the state and national banking systems. While national banks were required to be members of the new Federal Reserve System, state banks could also become members of the system on equal terms. Further, the Federal Reserve Act, bolstered by the Act of June 21, 1917, ensured that state banks could become member banks without losing any competitive advantages they might hold over national banks. Depending upon the state, state banking law sometimes gave state banks advantages in the areas of branching,[11] trust operations,[12] interlocking managements, loan and investment powers,[13] safe deposit operations, and the arrangement of mergers.[14] Where state banking laws were especially liberal, banks had an incentive to give up their national bank charter and seek admission to the Federal Reserve System as a state member bank.

McFadden Act

The McFadden Act (1927) addressed some of the competitive inequalities between state and national banks. It gave national banks charters of indeterminate length, allowing them to compete with state banks for trust business. It expanded the range of permissible investments, including real estate investment, and allowed investment in the stock of safe deposit companies. At the same time, the Act greatly restricted the ability of member banks — whether state or nationally chartered — to open or maintain out-of-town branches.

The Great Depression: Panic and Reform

The Great Depression was the longest, most severe economic downturn in the history of the United States.[15] The banking panics of 1930, 1931, and 1933 were the most severe banking disruption ever to hit the United States, with more than one quarter of all banks closing. Data on the number of bank suspensions during this period is presented in Table 3.

Table 3: Bank Suspensions, 1921-33

Number of Bank Suspensions
Year — All Banks — National Banks
1921 505 52
1922 367 49
1923 646 90
1924 775 122
1925 618 118
1926 976 123
1927 669 91
1928 499 57
1929 659 64
1930 1352 161
1931 2294 409
1932 1456 276
1933 5190 1475

Source: Bremer (1935).

Note: 1933 figures include 4507 non-licensed banks (1400 non-licensed national banks). Non-licensed banks consist of banks operating on a restricted basis or not in operation, but not in liquidation or receivership.

The first banking panic erupted in October 1930. According to Friedman and Schwartz (1963, pp. 308-309), it began with failures in Missouri, Indiana, Illinois, Iowa, Arkansas, and North Carolina and quickly spread to other areas of the country. Friedman and Schwartz report that 256 banks with $180 million of deposits failed in November 1930, while 352 banks with over $370 million of deposits failed in the following month (the largest of which was the Bank of United States which failed on December 11 with over $200 million of deposits). The second banking panic began in March of 1931 and continued into the summer.[16] The third and final panic began at the end of 1932 and persisted into March of 1933. During the early months of 1933, a number of states declared banking holidays, allowing banks to close their doors and therefore freeing them from the requirement to redeem deposits. By the time President Franklin Delano Roosevelt was inaugurated on March 4, 1933, state-declared banking holidays were widespread. The following day, the president declared a national banking holiday.

Beginning on March 13, the Secretary of the Treasury began granting licenses to banks to reopen for business.

Federal Deposit Insurance

The crises led to the implementation of several major reforms in banking. Among the most important of these was the introduction of federal deposit insurance under the Banking (Glass-Steagall) Act of 1933. The Act established the Federal Deposit Insurance Corporation, originally as an explicitly temporary program (the FDIC was made permanent by the Banking Act of 1935); insurance became effective January 1, 1934. Member banks of the Federal Reserve (which included all national banks) were required to join the FDIC. Within six months, 14,000 out of 15,348 commercial banks, representing 97 percent of bank deposits, had subscribed to federal deposit insurance (Friedman and Schwartz, 1963, 436-437).[17] Coverage under the initial act was limited to a maximum of $2500 of deposits for each depositor. Table 4 documents the increase in the limit from the act’s inception until 1980, when it reached its current $100,000 level.

Table 4: FDIC Insurance Limit

1934 (January) $2500
1934 (July) $5000
1950 $10,000
1966 $15,000
1969 $20,000
1974 $40,000
1980 $100,000
Source: http://www.fdic.gov/
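The per-depositor limits in Table 4 translate into a simple split of any balance into insured and uninsured portions. The sketch below is a simplification (actual FDIC coverage also depends on account ownership categories, and the mid-1934 jump from $2500 to $5000 is collapsed into a single 1934 entry); the deposit amounts are hypothetical.

```python
# Nominal insurance limits keyed to the year they took effect (Table 4),
# simplified to one entry per year.
FDIC_LIMITS = {1934: 5000, 1950: 10_000, 1966: 15_000,
               1969: 20_000, 1974: 40_000, 1980: 100_000}

def insured_portion(deposit, year):
    """Split one depositor's balance into insured and uninsured parts,
    using the most recent limit in effect in the given year."""
    limit = max(v for y, v in FDIC_LIMITS.items() if y <= year)
    insured = min(deposit, limit)
    return insured, deposit - insured

# A $60,000 balance in 1975 falls under the $40,000 limit:
# $40,000 insured, $20,000 at risk if the bank fails.
covered, at_risk = insured_portion(60_000, 1975)
```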

Additional Provisions of the Glass-Steagall Act

An important goal of the New Deal reforms was to enhance the stability of the banking system. Because the involvement of commercial banks in securities underwriting was seen as having contributed to banking instability, the Glass-Steagall Act of 1933 forced the separation of commercial and investment banking.[18] Additionally, the Acts (1933 for member banks, 1935 for other insured banks) established Regulation Q, which forbade banks from paying interest on demand deposits (i.e., checking accounts) and established limits on interest rates paid on time deposits. It was argued that paying interest on demand deposits introduced unhealthy competition.

Recent Responses to New Deal Banking Laws

In a sense, contemporary debates on banking policy stem largely from the reforms of the post-Depression era. Although several of the reforms introduced in the wake of the 1931-33 crisis have survived into the twenty-first century, almost all of them have been subject to intense scrutiny in the last two decades. For example, several court decisions, along with the Financial Services Modernization Act (Gramm-Leach-Bliley) of 1999, have blurred the previously strict separation between different financial service industries (particularly, although not limited to commercial and investment banking).

FSLIC

The Savings and Loan crisis of the 1980s, resulting from a combination of deposit insurance-induced moral hazard and deregulation, led to the dismantling of the Depression-era Federal Savings and Loan Insurance Corporation (FSLIC) and the transfer of Savings and Loan insurance to the Federal Deposit Insurance Corporation.

Further Reading

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in Propagation of the Great Depression.” American Economic Review 73 (1983): 257-76.

Bordo, Michael D., Claudia Goldin, and Eugene N. White, editors. The Defining Moment: The Great Depression and the American Economy in the Twentieth Century. Chicago: University of Chicago Press, 1998.

Bremer, C. D. American Bank Failures. New York: Columbia University Press, 1935.

Broz, J. Lawrence. The International Origins of the Federal Reserve System. Ithaca: Cornell University Press, 1997.

Cagan, Phillip. “The First Fifty Years of the National Banking System: An Historical Appraisal.” In Banking and Monetary Studies, edited by Deane Carson, 15-42. Homewood: Richard D. Irwin, 1963.

Cagan, Phillip. The Determinants and Effects of Changes in the Stock of Money. New York: National Bureau of Economic Research, 1965.

Calomiris, Charles W. and Gorton, Gary. “The Origins of Banking Panics: Models, Facts, and Bank Regulation.” In Financial Markets and Financial Crises, edited by Glenn R. Hubbard, 109-73. Chicago: University of Chicago Press, 1991.

Davis, Lance. “The Investment Market, 1870-1914: The Evolution of a National Market.” Journal of Economic History 25 (1965): 355-399.

Dewald, William G. “The National Monetary Commission: A Look Back.” Journal of Money, Credit and Banking 4 (1972): 930-956.

Eichengreen, Barry. “Mortgage Interest Rates in the Populist Era.” American Economic Review 74 (1984): 995-1015.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939, Oxford: Oxford University Press, 1992.

Federal Deposit Insurance Corporation. “A Brief History of Deposit Insurance in the United States.” Washington: FDIC, 1998. http://www.fdic.gov/bank/historical/brief/brhist.pdf

Federal Reserve. The Federal Reserve: Purposes and Functions. Washington: Federal Reserve Board, 1994. http://www.federalreserve.gov/pf/pdf/frspurp.pdf

Federal Reserve Bank of New York. U.S. Monetary Policy and Financial Markets. New York, 1998. http://www.ny.frb.org/pihome/addpub/monpol/chapter2.pdf

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodhart, C.A.E. The New York Money Market and the Finance of Trade, 1900-1913. Cambridge: Harvard University Press, 1969.

Gorton, Gary. “Bank Suspensions of Convertibility.” Journal of Monetary Economics 15 (1985a): 177-193.

Gorton, Gary. “Clearing Houses and the Origin of Central Banking in the United States.” Journal of Economic History 45 (1985b): 277-283.

Grossman, Richard S. “Deposit Insurance, Regulation, Moral Hazard in the Thrift Industry: Evidence from the 1930s.” American Economic Review 82 (1992): 800-821.

Grossman, Richard S. “The Macroeconomic Consequences of Bank Failures under the National Banking System.” Explorations in Economic History 30 (1993): 294-320.

Grossman, Richard S. “The Shoe That Didn’t Drop: Explaining Banking Stability during the Great Depression.” Journal of Economic History 54, no. 3 (1994): 654-82.

Grossman, Richard S. “Double Liability and Bank Risk-Taking.” Journal of Money, Credit, and Banking 33 (2001): 143-159.

James, John A. “The Conundrum of the Low Issue of National Bank Notes.” Journal of Political Economy 84 (1976a): 359-67.

James, John A. “The Development of the National Money Market, 1893-1911.” Journal of Economic History 36 (1976b): 878-97.

Kent, Raymond P. “Dual Banking between the Two Wars.” In Banking and Monetary Studies, edited by Deane Carson, 43-63. Homewood: Richard D. Irwin, 1963.

Kindleberger, Charles P. Manias, Panics, and Crashes: A History of Financial Crises. New York: Basic Books, 1978.

Krooss, Herman E., editor. Documentary History of Banking and Currency in the United States. New York: Chelsea House Publishers, 1969.

Minsky, Hyman P. Can ‘It’ Happen Again? Essays on Instability and Finance. Armonk, NY: M.E. Sharpe, 1982.

Miron, Jeffrey A. “Financial Panics, the Seasonality of the Nominal Interest Rate, and the Founding of the Fed.” American Economic Review 76 (1986): 125-38.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard, 69-108. Chicago: University of Chicago Press, 1991.

Rockoff, Hugh. The Free Banking Era: A Reexamination. New York: Arno Press, 1975.

Rockoff, Hugh. “Banking and Finance, 1789-1914.” In The Cambridge Economic History of the United States. Volume 2. The Long Nineteenth Century, edited by Stanley L Engerman and Robert E. Gallman, 643-84. New York: Cambridge University Press, 2000.

Sprague, O. M. W. History of Crises under the National Banking System. Washington, DC: Government Printing Office, 1910.

Sylla, Richard. “Federal Policy, Banking Market Structure, and Capital Mobilization in the United States, 1863-1913.” Journal of Economic History 29 (1969): 657-686.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge: MIT Press, 1989.

Warburg, Paul M. The Federal Reserve System: Its Origin and Growth: Reflections and Recollections, 2 volumes. New York: Macmillan, 1930.

White, Eugene N. The Regulation and Reform of American Banking, 1900-1929. Princeton: Princeton University Press, 1983.

White, Eugene N. “Before the Glass-Steagall Act: An Analysis of the Investment Banking Activities of National Banks.” Explorations in Economic History 23 (1986): 33-55.

White, Eugene N. “Banking and Finance in the Twentieth Century.” In The Cambridge Economic History of the United States. Volume 3. The Twentieth Century, edited by Stanley L. Engerman and Robert E. Gallman, 743-802. New York: Cambridge University Press, 2000.

Wicker, Elmus. The Banking Panics of the Great Depression. New York: Cambridge University Press, 1996.

Wicker, Elmus. Banking Panics of the Gilded Age. New York: Cambridge University Press, 2000.


[1] The two exceptions were the First and Second Banks of the United States. The First Bank, which was chartered by Congress at the urging of Alexander Hamilton, in 1791, was granted a 20-year charter, which Congress allowed to expire in 1811. The Second Bank was chartered just five years after the expiration of the first, but Andrew Jackson vetoed the charter renewal in 1832 and the bank ceased to operate with a national charter when its 20-year charter expired in 1836. The US remained without a central bank until the founding of the Federal Reserve in 1914. Even then, the Fed was not founded as one central bank, but as a collection of twelve regional reserve banks. American suspicion of concentrated financial power has not been limited to central banking: in contrast to the rest of the industrialized world, twentieth century US banking was characterized by large numbers of comparatively small, unbranched banks.

[2] The relationship between the enactment of the National Bank Acts and the Civil War was perhaps even deeper. Hugh Rockoff suggested the following to me: “There were western states where the banking system was in trouble because the note issue was based on southern bonds, and people in those states were looking to the national government to do something. There were also conservative politicians who were afraid that they wouldn’t be able to get rid of the greenback (a perfectly uniform [government issued wartime] currency) if there wasn’t a private alternative that also promised uniformity…. It has even been claimed that by setting up a national system, banks in the South were undermined — as a war measure.”

[3] Eichengreen (1984) argues that regional mortgage interest rate differentials resulted from differences in risk.

[4] There is some debate over the direction of causality between banking crises and economic downturns. According to monetarists Friedman and Schwartz (1963) and Cagan (1965), the monetary contraction associated with bank failures magnifies real economic downturns. Bernanke (1983) argues that bank failures raise the cost of credit intermediation and therefore have an effect on the real economy through non-monetary channels. An alternative view, articulated by Sprague (1910), Fisher (1933), Temin (1976), Minsky (1982), and Kindleberger (1978), maintains that bank failures and monetary contraction are primarily a consequence, rather than a cause, of sluggishness in the real economy which originates in non-monetary sources. See Grossman (1993) for a summary of this literature.

[5] See Calomiris and Gorton (1991) for an alternative view.

[6] See Mishkin (1991) on asymmetric information and financial crises.

[7] Still other states had “voluntary liability,” whereby each bank could choose single or double liability.

[8] See Dewald (1972) on the National Monetary Commission.

[9] Miron (1986) demonstrates the decline in the seasonality of interest rates following the founding of the Fed.

[10] Other Fed activities included check clearing.

[11] According to Kent (1963, pp. 48), starting in 1922 the Comptroller allowed national banks to open “offices” to receive deposits, cash checks, and receive applications for loans in head office cities of states that allowed state-chartered banks to establish branches.

[12] Prior to 1922, national bank charters had lives of only 20 years. This severely limited their ability to compete with state banks in the trust business. (Kent 1963, p. 49)

[13] National banks were subject to more severe limitations on lending than most state banks. These restrictions included a limit on the amount that could be loaned to one borrower as well as limitations on real estate lending. (Kent 1963, pp. 50-51)

[14] Although the Bank Consolidation Act of 1918 provided for the merger of two or more national banks, it made no provision for the merger of a state and national bank. Kent (1963, p. 51).

[15] References touching on banking and financial aspects of the Great Depression in the United States include Friedman and Schwartz (1963), Temin (1976, 1989), Kindleberger (1978), Bernanke (1983), Eichengreen (1992), and Bordo, Goldin, and White (1998).

[16] During this period, the failures of the Credit-Anstalt, Austria’s largest bank, and the Darmstädter und Nationalbank (Danat Bank), a large German bank, inaugurated the beginning of financial crisis in Europe. The European financial crisis led to Britain’s suspension of the gold standard in September 1931. See Grossman (1994) on the European banking crisis of 1931. The best source on the gold standard in the interwar years is Eichengreen (1992).

[17] Interestingly, federal deposit insurance was made optional for savings and loan institutions at about the same time. The majority of S&L’s did not elect to adopt deposit insurance until after 1950. See Grossman (1992).

[18] See, however, White (1986) for

Citation: Grossman, Richard. “US Banking History, Civil War to World War II”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/us-banking-history-civil-war-to-world-war-ii/

The Economic History of Australia from 1788: An Introduction

Bernard Attard, University of Leicester

Introduction

The economic benefits of establishing a British colony in Australia in 1788 were not immediately obvious. The Government’s motives have been debated but the settlement’s early character and prospects were dominated by its original function as a jail. Colonization nevertheless began a radical change in the pattern of human activity and resource use in that part of the world, and by the 1890s a highly successful settler economy had been established on the basis of a favorable climate in large parts of the southeast (including Tasmania) and the southwest corner; the suitability of land for European pastoralism and agriculture; an abundance of mineral wealth; and the ease with which these resources were appropriated from the indigenous population. This article will focus on the creation of a colonial economy from 1788 and its structural change during the twentieth century. To simplify, it will divide Australian economic history into four periods, two of which overlap. These are defined by the foundation of the ‘bridgehead economy’ before 1820; the growth of a colonial economy between 1820 and 1930; the rise of manufacturing and the protectionist state between 1891 and 1973; and the experience of liberalization and structural change since 1973. The article will conclude by suggesting briefly some of the similarities between Australia and other comparable settler economies, as well as the ways in which it has differed from them.

The Bridgehead Economy, 1788-1820

The description ‘bridgehead economy’ was used by one of Australia’s foremost economic historians, N. G. Butlin, to refer to the earliest decades of British occupation when the colony was essentially a penal institution. The main settlements were at Port Jackson (modern Sydney, 1788) in New South Wales and Hobart (1804) in what was then Van Diemen’s Land (modern Tasmania). The colony barely survived its first years and was largely neglected for much of the following quarter-century while the British government was preoccupied by the war with France. An important beginning was nevertheless made in the creation of a private economy to support the penal regime. Above all, agriculture was established on the basis of land grants to senior officials and emancipated convicts, and limited freedoms were allowed to convicts to supply a range of goods and services. Although economic life depended heavily on the government Commissariat as a supplier of goods, money and foreign exchange, individual rights in property and labor were recognized, and private markets for both started to function. In 1808, the recall of the New South Wales Corps, whose officers had benefited most from access to land and imported goods (thus hopelessly entangling public and private interests), coupled with the appointment of a new governor, Lachlan Macquarie, in the following year, brought about a greater separation of the private economy from the activities and interests of the colonial government. With a significant increase in the numbers transported after 1810, New South Wales’ future became more secure. As laborers, craftsmen, clerks and tradesmen, many convicts possessed the skills required in the new settlements. As their terms expired, they also added permanently to the free population. Over time, this would inevitably change the colony’s character.

Natural Resources and the Colonial Economy, 1820-1930

Pastoral and Rural Expansion

For Butlin, the developments around 1810 were a turning point in the creation of a ‘colonial’ economy. Many historians have preferred to view those during the 1820s as more significant. From that decade, economic growth was based increasingly upon the production of fine wool and other rural commodities for markets in Britain and the industrializing economies of northwestern Europe. This growth was interrupted by two major depressions during the 1840s and 1890s and stimulated in complex ways by the rich gold discoveries in Victoria in 1851, but the underlying dynamics were essentially unchanged. At different times, the extraction of natural resources, whether maritime before the 1840s or later gold and other minerals, was also important. Agriculture, local manufacturing and construction industries expanded to meet the immediate needs of growing populations, which concentrated increasingly in the main urban centers. The colonial economy’s structure, growth of population and significance of urbanization are illustrated in tables 1 and 2. The opportunities for large profits in pastoralism and mining attracted considerable amounts of British capital, while expansion generally was supported by enormous government outlays for transport, communication and urban infrastructures, which also depended heavily on British finance. As the economy expanded, large-scale immigration became necessary to satisfy the growing demand for workers, especially after the end of convict transportation to the eastern mainland in 1840. The costs of immigration were subsidized by colonial governments, with settlers coming predominantly from the United Kingdom and bringing skills that contributed enormously to the economy’s growth. All this provided the foundation for the establishment of free colonial societies. In turn, the institutions associated with these — including the rule of law, secure property rights, and stable and democratic political systems — created conditions that, on balance, fostered growth. In addition to New South Wales, four other British colonies were established on the mainland: Western Australia (1829), South Australia (1836), Victoria (1851) and Queensland (1859). Van Diemen’s Land (Tasmania after 1856) became a separate colony in 1825. From the 1850s, these colonies acquired responsible government. In 1901, they federated, creating the Commonwealth of Australia.

Table 1
The Colonial Economy: Percentage Shares of GDP, 1891 Prices, 1861-1911

Pastoral Other rural Mining Manuf. Building Services Rent
1861 9.3 13.0 17.5 14.2 8.4 28.8 8.6
1891 16.1 12.4 6.7 16.6 8.5 29.2 10.3
1911 14.8 16.7 9.0 17.1 5.3 28.7 8.3

Source: Haig (2001), Table A1. Totals do not sum to 100 because of rounding.

Table 2
Colonial Populations (thousands), 1851-1911

Australia Colonies Cities
NSW Victoria Sydney Melbourne
1851 257 100 46 54 29
1861 669 198 328 96 125
1891 1,704 608 598 400 473
1911 2,313 858 656 648 593

Source: McCarty (1974), p. 21; Vamplew (1987), POP 26-34.

The process of colonial growth began with two related developments. First, in 1820, Macquarie responded to land pressure in the districts immediately surrounding Sydney by relaxing restrictions on settlement. Soon the outward movement of herdsmen seeking new pastures became uncontrollable. From the 1820s, the British authorities also encouraged private enterprise by the wholesale assignment of convicts to private employers and easy access to land. In 1831, the principles of systematic colonization popularized by Edward Gibbon Wakefield (1796-1862) were put into practice in New South Wales with the substitution of land sales for grants in order to finance immigration. This, however, did not affect the continued outward movement of pastoralists who simply occupied land where they could find it beyond the official limits of settlement. By 1840, they had claimed a vast swathe of territory two hundred miles in depth running from Moreton Bay in the north (the site of modern Brisbane) through the Port Phillip District (the future colony of Victoria, whose capital Melbourne was marked out in 1837) to Adelaide in South Australia. The absence of any legal title meant that these intruders became known as ‘squatters’ and the terms of their tenure were not finally settled until 1846 after a prolonged political struggle with the Governor of New South Wales, Sir George Gipps.

The impact of the original penal settlements on the indigenous population had been enormous. The consequences of squatting after 1820 were equally devastating as the land and natural resources upon which indigenous hunter-gathering activities and environmental management depended were appropriated on a massive scale. Aboriginal populations collapsed in the face of disease, violence and forced removal until they survived only on the margins of the new pastoral economy, on government reserves, or in the arid parts of the continent least touched by white settlement. The process would be repeated again in northern Australia during the second half of the century.

For the colonists this could happen because Australia was considered terra nullius, vacant land freely available for occupation and exploitation. The encouragement of private enterprise, the reception of Wakefieldian ideas, and the wholesale spread of white settlement were all part of a profound transformation in official and private perceptions of Australia’s prospects and economic value as a British colony. Millennia of fire-stick management to assist hunter-gathering had created inland grasslands in the southeast that were ideally suited to the production of fine wool. Both the physical environment and the official incentives just described raised expectations of considerable profits to be made in pastoral enterprise and attracted a growing stream of British capital in the form of organizations like the Australian Agricultural Company (1824); new corporate settlements in Western Australia (1829) and South Australia (1836); and, from the 1830s, British banks and mortgage companies formed to operate in the colonies. By the 1830s, wool had overtaken whale oil as the colony’s most important export, and by 1850 New South Wales had displaced Germany as the main overseas supplier to British industry (see table 3). Allowing for the colonial economy’s growing complexity, the cycle of growth based upon land settlement, exports and British capital would be repeated twice. The first pastoral boom ended in a depression which was at its worst during 1842-43. Although output continued to grow during the 1840s, the best land had already been occupied, and in the absence of substantial investment in fencing and water supplies further geographical expansion was impossible. With opportunities for high profits reduced, the flow of British capital dried up, contributing to a wider downturn caused by drought and mercantile failure.

Table 3
Imports of Wool into Britain (thousands of bales), 1830-50

German Australian
1830 74.5 8.0
1840 63.3 41.0
1850 30.5 137.2

Source: Sinclair (1976), p. 46.

When pastoral growth revived during the 1860s, borrowed funds were used to fence properties and secure access to water. This in turn allowed a further extension of pastoral production into the more environmentally fragile semi-arid interior districts of New South Wales, particularly during the 1880s. As the mobs of sheep moved further inland, colonial governments increased the scale of their railway construction programs, some competing to capture the freight to ports. Technical innovation and government sponsorship of land settlement brought greater diversity to the rural economy (see table 4). Exports of South Australian wheat started in the 1870s. The development of drought resistant grain varieties from the turn of the century led to an enormous expansion of sown acreage in both the southeast and southwest. From the 1880s, sugar production increased in Queensland, although mainly for the domestic market. From the 1890s, refrigeration made it possible to export meat, dairy products and fruit.

Table 4
Australian Exports (percentages of total value of exports), 1881-1928/29

Wool Minerals Wheat, flour Butter Meat Fruit
1881-90 54.1 27.2 5.3 0.1 1.2 0.2
1891-1900 43.5 33.1 2.9 2.4 4.1 0.3
1901-13 34.3 35.4 9.7 4.1 5.1 0.5
1920/21-1928/29 42.9 8.8 20.5 5.6 4.6 2.2

Source: Sinclair (1976), p. 166.

Gold and Its Consequences

Alongside rural growth and diversification, the remarkable gold discoveries in central Victoria in 1851 brought increased complexity to the process of economic development. The news sparked an immediate surge of gold seekers into the colony, which was soon reinforced by a flood of overseas migrants. Until the 1870s, gold displaced wool as Australia’s most valuable export. Rural industries either expanded output (wheat in South Australia) or, in the case of pastoralists, switched production to meat and tallow, to supply a much larger domestic market. Minerals had been extracted since earliest settlement and, while yields on the Victorian gold fields soon declined, rich mineral deposits continued to be found. During the 1880s alone these included silver, lead and zinc at Broken Hill in New South Wales; copper at Mount Lyell in Tasmania; and gold at Charters Towers and Mount Morgan in Queensland. From 1893, what eventually became the richest goldfields in Australia were discovered at Coolgardie in Western Australia. The mining industry’s overall contribution to output and exports is illustrated in tables 1 and 4.

In Victoria, the deposits of easily extracted alluvial gold were soon exhausted and mining was taken over by companies that could command the financial and organizational resources needed to work the deep lodes. But the enormous permanent addition to the colonial population caused by the gold rush had profound effects throughout eastern Australia, dramatically accelerating the growth of the local market and workforce, and deeply disturbing the social balance that had emerged during the decade before. Between 1851 and 1861, the Australian population more than doubled. In Victoria it increased sevenfold; Melbourne outgrew Sydney, Chicago and San Francisco (see table 2). Significantly enlarged populations required social infrastructure, political representation, employment and land; and the new colonial legislatures were compelled to respond. The way this was played out varied between colonies but the common outcomes were the introduction of manhood suffrage, access to land through ‘free selection’ of small holdings, and, in the Victorian case, the introduction of a protectionist tariff in 1865. The particular age structure of the migrants of the 1850s also had long-term effects on the building cycle, notably in Victoria. The demand for housing accelerated during the 1880s, as the children of the gold generation matured and established their own households. With pastoral expansion and public investment also nearing their peaks, the colony experienced a speculative boom which added to the imbalances already being caused by falling export prices and rising overseas debt. The boom ended with the wholesale collapse of building companies, mortgage banks and other financial institutions during 1891-92 and the stoppage of much of the banking system during 1893.

The depression of the 1890s was worst in Victoria. Its impact on employment was softened by the Western Australian gold discoveries, which drew population away, but the colonial economy had grown to such an extent since the 1850s that the stimulus provided by the earlier gold finds could not be repeated. Severe drought in eastern Australia from the mid-1890s until 1903 caused the pastoral industry to contract. Yet, as we have seen, technological innovation also created opportunities for other rural producers, who were now heavily supported by government with little direct involvement by foreign investors. The final phase of rural expansion, with its associated public investment in rural (and increasingly urban) infrastructure continued until the end of the 1920s. Yields declined, however, as farmers moved onto the most marginal land. The terms of trade also deteriorated with the oversupply of several commodities in world markets after the First World War. As a result, the burden of servicing foreign debt rose once again. Australia’s position as a capital importer and exporter of natural resources meant that the Great Depression arrived early. From late 1929, the closure of overseas capital markets and collapse of export prices forced the Federal Government to take drastic measures to protect the balance of payments. The falls in investment and income transmitted the contraction to the rest of the economy. By 1932, average monthly unemployment amongst trade union members was over 22 percent. Although natural resource industries continued to have enduring importance as earners of foreign exchange, the Depression finally ended the long period in which land settlement and technical innovation had together provided a secure foundation for economic growth.

Manufacturing and the Protected Economy, 1891-1973

The ‘Australian Settlement’

This section overlaps considerably in chronology with the previous one, which surveyed the growth of a colonial economy during the nineteenth century based on the exploitation of natural resources. The overlap is a convenient way of approaching the two most important developments in Australian economic history between Federation and the 1970s: the enormous increase in government regulation after 1901 and, closely linked to it, the expansion of domestic manufacturing, which from the Second World War became the most dynamic part of the Australian economy.

The creation of the Commonwealth of Australia on 1 January 1901 broadened the opportunities for public intervention in private markets. The new Federal Government was given clearly-defined but limited powers over obviously ‘national’ matters like customs duties. The rest, including many affecting economic development and social welfare, remained with the states. The most immediate economic consequence was the abolition of inter-colonial tariffs and the establishment of a single Australian market. But the Commonwealth also soon set about transferring to the national level several institutions that different colonies had experimented with during the 1890s. These included arrangements for the compulsory arbitration of industrial disputes by government tribunals, which also had the power to fix wages, and a discriminatory ‘white Australia’ immigration policy designed to exclude non-Europeans from the labor market. Both were partly responses to organized labor’s electoral success during the 1890s. Urban business and professional interests had always been represented in colonial legislatures; during the 1910s, rural producers also formed their own political parties. Subsequently, state and federal governments were typically formed by either the Australian Labor Party or coalitions of urban conservatives and the Country Party. The constituencies they each represented were thus able to influence the regulatory structure to protect themselves against the full impact of market outcomes, whether in the form of import competition, volatile commodity prices or uncertain employment conditions. The institutional arrangements they created have been described as the ‘Australian settlement’ because they balanced competing producer interests and arguably provided a stable framework for economic development until the 1970s, despite the inevitable costs.

The Growth of Manufacturing

An important part of the ‘Australian settlement’ was the imposition of a uniform federal tariff and its eventual elaboration into a system of ‘protection all round’. The original intended beneficiaries were manufacturers and their employees; indeed, when the first protectionist tariff was introduced in 1907, its operation was linked to the requirement that employers pay their workers ‘fair and reasonable wages’. Manufacturing’s actual contribution to economic growth before Federation has been controversial. The population influx of the 1850s widened opportunities for import-substitution but the best evidence suggests that manufacturing grew slowly as the industrial workforce increased (see table 1). Production was small-scale and confined largely to the processing of rural products and raw materials; assembly and repair-work; or the manufacture of goods for immediate consumption (e.g. soap and candle-making, brewing and distilling). Clothing and textile output was limited to a few lines. For all manufacturing, growth was restrained by the market’s small size and the limited opportunities for technical change it afforded.

After Federation, production was stimulated by several factors: rural expansion, the increasing use of agricultural machinery and refrigeration equipment, and the growing propensity of farm incomes to be spent locally. The removal of inter-colonial tariffs may also have helped. The statistical evidence indicates that between 1901 and the outbreak of the First World War manufacturing grew faster than the economy as a whole, while output per worker increased. But manufacturers also aspired mainly to supply the domestic market and expended increasing energy on retaining privileged access to it. Tariffs rose considerably between the two world wars. Some sectors became more capital intensive, particularly with the establishment of a local steel industry, the beginnings of automobile manufacture, and the greater use of electricity. But, except during the first half of the 1920s, there was little increase in labor productivity, and the inter-war expansion of textile manufacturing reflected the heavy bias towards import substitution. Not until the Second World War and after did manufacturing growth accelerate and extend to those sectors most characteristic of an advanced industrial economy (table 5). Amongst these were automobiles, chemicals, electrical and electronic equipment, and iron-and-steel. Growth was sustained during the 1950s by similar factors to those operating in other countries during the ‘long boom’, including a growing stream of American direct investment, access to new and better technology, and stable conditions of full employment.

Table 5
Manufacturing and the Australian Economy, 1913-1949

1938-39 prices
Year Manufacturing share of GDP, % Manufacturing, annual rate of growth, % GDP, annual rate of growth, %
1913/14 21.9
1928/29 23.6 2.6 2.1
1948/49 29.8 3.4 2.2

Calculated from Haig (2001), Table A2. Rates of change are compound average annual rates of growth since the preceding date shown in the first column.
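The average annual rates in Table 5 are compound growth rates between benchmark dates. As a rough illustration (the index levels below are hypothetical, not Haig’s underlying series), the calculation can be sketched in Python:

```python
def avg_annual_growth(start_value: float, end_value: float, years: float) -> float:
    """Compound average annual growth rate, in percent, between two benchmark dates."""
    return ((end_value / start_value) ** (1.0 / years) - 1.0) * 100.0

# Hypothetical example: an output index rising from 100 to 147 over the
# 15 years between 1913/14 and 1928/29 implies growth of roughly 2.6
# percent a year, the order of magnitude of the manufacturing rates in Table 5.
print(round(avg_annual_growth(100.0, 147.0, 15), 1))
```

The same formula, applied to Haig’s output series at each pair of benchmark dates, yields the second and third columns of the table.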

Manufacturing peaked in the mid-1960s at about 28 percent of national output (measured in 1968-69 prices) but natural resource industries remained the most important suppliers of exports. Since the 1920s, over-supply in world markets and the need to compensate farmers for manufacturing protection had meant that virtually all rural industries, with the exception of wool, had been drawn into a complicated system of subsidies, price controls and market interventions at both federal and state levels. The post-war boom in the world economy increased demand for commodities, benefiting rural producers but also creating new opportunities for Australian miners. Most important of all, the first surge of breakneck growth in East Asia opened a vast new market for iron ore, coal and other mining products. Britain’s significance as a trading partner had declined markedly since the 1950s. By the end of the 1960s, Japan had overtaken it as Australia’s largest customer, while the United States was now the main provider of imports.

The mining bonanza contributed to the boom conditions experienced generally after 1950. The Federal Government played its part by using the full range of macroeconomic policies that were also increasingly familiar in similar western countries to secure stability and full employment. It encouraged high immigration, relaxing the entry criteria to allow in large numbers of southern Europeans, who added directly to the workforce, but also brought knowledge and experience. With state governments, the Commonwealth increased expenditure on education significantly, effectively entering the field for the first time after 1945. Access to secondary education was widened with the abandonment of fees in government schools and federal finance secured an enormous expansion of university places, especially after 1960. Some weaknesses remained. Enrolment rates after primary school were below those in many industrial countries and funding for technical education was poor. Despite this, the Australian population’s rising levels of education and skill continued to be important additional sources of growth. Finally, although government advisers expressed misgivings, industry policy remained determinedly interventionist. While state governments competed to attract manufacturing investment with tax and other incentives, by the 1960s protection had reached its highest level, with Australia playing virtually no part in the General Agreement on Tariffs and Trade (GATT), despite being an original signatory. The effects of rising tariffs since 1900 were evident in the considerable decline in Australia’s openness to trade (Table 6). Yet, as the post-war boom approached its end, the country still relied upon commodity exports and foreign investment to purchase the manufactures it was unable to produce itself. The impossibility of sustaining growth in this way was already becoming clear, even though the full implications would only be felt during the decades to come.

Table 6
Trade (Exports Plus Imports)
as a Share of GDP, Current Prices, %

1900/1 44.9
1928/29 36.9
1938/39 32.7
1964/65 33.3
1972/73 29.5

Calculated from Vamplew (1987), ANA 119-129.

Liberalization and Structural Change, 1973-2005

From the beginning of the 1970s, instability in the world economy and weakness at home ended Australia’s experience of the post-war boom. During the following decades, manufacturing’s share in output (table 7) and employment fell, while the long-term relative decline of commodity prices meant that natural resources could no longer be relied on to cover the cost of imports, let alone the long-standing deficits in payments for services, migrant remittances and interest on foreign debt. Until the early 1990s, Australia also suffered from persistent inflation and rising unemployment (which remained permanently higher; see Figure 1). As a consequence, per capita incomes fluctuated during the 1970s, and the economy contracted in absolute terms during 1982-83 and 1990-91.

Even before the 1970s, new sources of growth and rising living standards had been needed, but the opportunities for economic change were restricted by the elaborate regulatory structure that had evolved since Federation. During that decade itself, policy and outlook were essentially defensive and backward looking, despite calls for reform and some willingness to alter the tariff. Governments sought to protect employment in established industries, while dependence on mineral exports actually increased as a result of the commodity booms at the decade’s beginning and end. By the 1980s, however, it was clear that the country’s existing institutions were failing and fundamental reform was required.

Table 7
The Australian Economy, 1974-2004

A. Percentage shares of value-added, constant prices

1974 1984 1994 2002
Agriculture 4.4 4.3 3.0 2.7
Manufacturing 18.1 15.2 13.3 11.8
Other industry, inc. mining 14.2 14.0 14.6 14.4
Services 63.4 66.4 69.1 71.1

B. Per capita GDP, annual average rate of growth %, constant prices

1973-84 1.2
1984-94 1.7
1994-2004 2.5

Calculated from World Bank, World Development Indicators (Sept. 2005).

Figure 1
Unemployment, 1971-2005, percent

Source: Reserve Bank of Australia (1988); Reserve Bank of Australia, G07Hist.xls. Survey data at August. The method of data collection changed in 1978.

The catalyst was the resumption of the relative fall in commodity prices that had been underway since the Second World War, which meant that the cost of purchasing manufactured goods inexorably rose for primary producers. The decline had been temporarily reversed by the oil shocks of the 1970s but, from the 1980/81 financial year until the decade’s end, the value of Australia’s merchandise imports exceeded that of merchandise exports in every year but two. The overall deficit on current account, measured as a proportion of GDP, also became permanently higher, averaging around 4.7 percent. During the 1930s, deflation had been followed by the further closing of the Australian economy. There was no longer much scope for this. Manufacturing had stagnated since the 1960s, suffering especially from the inflation of wage and other costs during the 1970s. It was particularly badly affected by the recession of 1982-83, when unemployment rose to almost ten percent, its highest level since the Great Depression. In 1983, a new federal Labor Government led by Bob Hawke sought to engineer a recovery through an ‘Accord’ with the trade union movement which aimed at creating employment by holding down real wages. But under Hawke and his Treasurer, Paul Keating (who warned colorfully that otherwise the country risked becoming a ‘banana republic’), Labor also started to introduce broader reforms to increase the efficiency of Australian firms by improving their access to foreign finance and exposing them to greater competition. Costs would fall and exports of more profitable manufactures increase, reducing the economy’s dependence on commodities. During the 1980s and 1990s, the reforms deepened and widened, extending to state governments and continuing with the election of a conservative Liberal-National Party government under John Howard in 1996, as each act of deregulation invited further measures to consolidate the reforms and increase their effectiveness.
Key reforms included the floating of the Australian dollar and the deregulation of the financial system; the progressive removal of protection of most manufacturing and agriculture; the dismantling of the centralized system of wage-fixing; taxation reform; and the promotion of greater competition and better resource use through privatization and the restructuring of publicly-owned corporations, the elimination of government monopolies, and the deregulation of sectors like transport and telecommunications. In contrast with the 1930s, the prospects of further domestic reform were improved by an increasingly favorable international climate. Australia contributed by joining other nations in the Cairns Group to negotiate reductions of agricultural protection during the Uruguay round of GATT negotiations and by promoting regional liberalization through the Asia Pacific Economic Cooperation (APEC) forum.

Table 8
Exports and Openness, 1983-2004

First five columns: shares of total exports, % (goods: rural, resource, manufactures, other; plus services). Final column: exports plus imports as a share of GDP, %.

Year Rural Resource Manuf. Other Services Trade/GDP
1983 30 34 9 3 24 26
1989 23 37 11 5 24 27
1999 20 34 17 4 24 37
2004 18 33 19 6 23 39

Calculated from: Reserve Bank of Australia, G10Hist.xls and H03Hist.xls; World Bank, World Development Indicators (Sept. 2005). Chain volume measures, except shares of GDP, 1983, which are at current prices.

The extent to which institutional reform had successfully brought about long-term structural change was still not clear at the end of the century. Recovery from the 1982-83 recession was based upon a strong revival of employment. By contrast, the uninterrupted growth experienced since 1992 arose from increases in the combined productivity of workers and capital. If this persisted, it marked a historic change in the sources of growth, from reliance on the accumulation of capital and the increase of the workforce to improvements in the efficiency of both. From the 1990s, the Australian economy also became more open (table 8). Manufactured goods increased their share of exports, while rural products continued to decline. Yet, although growth was more broadly-based, rapid and sustained (table 7), the country continued to experience large trade and current account deficits, which were augmented by the considerable increase of foreign debt after financial deregulation during the 1980s. Unemployment also failed to return to its pre-1974 level of around 2 percent, although much of the permanent rise occurred during the mid to late 1970s. In 2005, it remained at about 5 percent (Figure 1). Institutional reform clearly contributed to these changes in economic structure and performance but they were also influenced by other factors, including falling transport costs, the communications and information revolutions, the greater openness of the international economy, and the remarkable burst of economic growth during the century’s final decades in southeast and east Asia, above all China. Reform was also complemented by policies to provide the skills needed in a technologically-sophisticated, increasingly service-oriented economy. Retention rates in the last years of secondary education doubled during the 1980s, followed by a sharp increase of enrolments in technical colleges and universities.
By 2002, total expenditure on education as a proportion of national income had caught up with the average of member countries of the OECD (Table 9). Shortages were nevertheless beginning to be experienced in the engineering and other skilled trades, raising questions about some priorities and the diminishing relative financial contribution of government to tertiary education.

Table 9
Tertiary Enrolments and Education Expenditure, 2002

Tertiary enrolments, gross percent Education expenditure as a proportion of GDP, percent
Australia 63.22 6.0
OECD 61.68 5.8
United States 70.67 7.2

Source: World Bank, World Development Indicators (Sept. 2005); OECD (2005). Gross enrolments are total enrolments, regardless of age, as a proportion of the population in the relevant official age group. OECD enrolments are for fifteen high-income members only.

Summing Up: The Australian Economy in a Wider Context

Virtually since the beginning of European occupation, the Australian economy had provided the original British colonizers, generations of migrants, and the descendants of both with a remarkably high standard of living. Towards the end of the nineteenth century, this was by all measures the highest in the world (see table 10). After 1900, national income per member of the population slipped behind that of several countries, but continued to compare favorably with most. In 2004, Australia was ranked behind only Norway and Sweden in the United Nations’ Human Development Index. Economic historians have differed over the sources of growth that made this possible. Butlin emphasized the significance of local factors like the unusually high rate of urbanization and the expansion of domestic manufacturing. In important respects, however, Australia was subject to the same forces as other European settler societies in New Zealand and Latin America, and its development bore striking similarities to theirs. From the 1820s, its economy grew as one frontier of an expanding western capitalism. With its close institutional ties to, and complementarities with, the most dynamic parts of the world economy, it drew capital and migrants from them, supplied them with commodities, and shared the benefits of their growth. Like other settler societies, it sought population growth as an end in itself and, from the turn of the twentieth century, aspired to the creation of a national manufacturing base. Finally, when openness to the world economy appeared to threaten growth and living standards, governments intervened to regulate and protect with broader social objectives in mind. But there were also striking contrasts with other settler economies, notably those in Latin America like Argentina, with which it has been frequently compared.
In particular, Australia responded to successive challenges to growth by finding new opportunities for wealth creation with a minimum of political disturbance, social conflict or economic instability, while sharing a rising national income as widely as possible.

Table 10
Per capita GDP in Australia, United States and Argentina
(1990 international dollars)

Australia United States Argentina
1870 3,641 2,457 1,311
1890 4,433 3,396 2,152
1950 7,493 9,561 4,987
1998 20,390 27,331 9,219

Sources: Australia: GDP, Haig (2001) as converted in Maddison (2003); all other data Maddison (1995) and (2001)

From the mid-twentieth century, Australia’s experience also resembled that of many advanced western countries. This included the post-war willingness to use macroeconomic policy to maintain growth and full employment; and, after the 1970s, the abandonment of much government intervention in private markets while at the same time retaining strong social services and seeking to improve education and training. Australia also experienced a similar relative decline of manufacturing, permanent rise of unemployment, and transition to a more service-based economy typical of high income countries. By the beginning of the new millennium, services accounted for over 70 percent of national income (table 7). Australia remained vulnerable as an exporter of commodities and importer of capital but its endowment of natural resources and the skills of its population were also creating opportunities. The country was again favorably positioned to take advantage of growth in the most dynamic parts of the world economy, particularly China. With the final abandonment of the White Australia policy during the 1970s, it had also started to integrate more closely with its region. This was further evidence of the capacity to change that allowed Australians to face the future with confidence.

References:

Anderson, Kym. “Australia in the International Economy.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 33-49. Cambridge: Cambridge University Press, 2001.

Blainey, Geoffrey. The Rush that Never Ended: A History of Australian Mining, fourth edition. Melbourne: Melbourne University Press, 1993.

Borland, Jeff. “Unemployment.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 207-228. Cambridge: Cambridge University Press, 2001.

Butlin, N. G. Australian Domestic Product, Investment and Foreign Borrowing 1861-1938/39. Cambridge: Cambridge University Press, 1962.

Butlin, N.G. Economics and the Dreamtime, A Hypothetical History. Cambridge: Cambridge University Press, 1993.

Butlin, N.G. Forming a Colonial Economy: Australia, 1810-1850. Cambridge: Cambridge University Press, 1994.

Butlin, N.G. Investment in Australian Economic Development, 1861-1900. Cambridge: Cambridge University Press, 1964.

Butlin, N. G., A. Barnard and J. J. Pincus. Government and Capitalism: Public and Private Choice in Twentieth Century Australia. Sydney: George Allen and Unwin, 1982.

Butlin, S. J. Foundations of the Australian Monetary System, 1788-1851. Sydney: Sydney University Press, 1968.

Chapman, Bruce, and Glenn Withers. “Human Capital Accumulation: Education and Immigration.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 242-267. Cambridge: Cambridge University Press, 2001.

Dowrick, Steve. “Productivity Boom: Miracle or Mirage?” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 19-32. Cambridge: Cambridge University Press, 2001.

Economist. “Has he got the ticker? A survey of Australia.” 7 May 2005.

Haig, B. D. “Australian Economic Growth and Structural Change in the 1950s: An International Comparison.” Australian Economic History Review 18, no. 1 (1978): 29-45.

Haig, B.D. “Manufacturing Output and Productivity 1910 to 1948/49.” Australian Economic History Review 15, no. 2 (1975): 136-61.

Haig, B.D. “New Estimates of Australian GDP: 1861-1948/49.” Australian Economic History Review 41, no. 1 (2001): 1-34.

Haig, B. D., and N. G. Cain. “Industrialization and Productivity: Australian Manufacturing in the 1920s and 1950s.” Explorations in Economic History 20, no. 2 (1983): 183-98.

Jackson, R. V. Australian Economic Development in the Nineteenth Century. Canberra: Australian National University Press, 1977.

Jackson, R.V. “The Colonial Economies: An Introduction.” Australian Economic History Review 38, no. 1 (1998): 1-15.

Kelly, Paul. The End of Certainty: The Story of the 1980s. Sydney: Allen and Unwin, 1992.

Macintyre, Stuart. A Concise History of Australia. Cambridge: Cambridge University Press, 1999.

McCarthy, J. W. “Australian Capital Cities in the Nineteenth Century.” In Urbanization in Australia; The Nineteenth Century, edited by J. W. McCarthy and C. B. Schedvin, 9-39. Sydney: Sydney University Press, 1974.

McLean, I.W. “Australian Economic Growth in Historical Perspective.” The Economic Record 80, no. 250 (2004): 330-45.

Maddison, Angus. Monitoring the World Economy 1820-1992. Paris: OECD, 1995.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Maddison, Angus. The World Economy: Historical Statistics. Paris: OECD, 2003.

Meredith, David, and Barrie Dyster. Australia in the Global Economy: Continuity and Change. Cambridge: Cambridge University Press, 1999.

Nicholas, Stephen, editor. Convict Workers: Reinterpreting Australia’s Past. Cambridge: Cambridge University Press, 1988.

OECD. Education at a Glance 2005 – Tables OECD, 2005 [cited 9 February 2006]. Available from http://www.oecd.org/document/11/0,2340,en_2825_495609_35321099_1_1_1_1,00.html.

Pope, David, and Glenn Withers. “The Role of Human Capital in Australia’s Long-Term Economic Growth.” Paper presented to 24th Conference of Economists, Adelaide, 1995.

Reserve Bank of Australia. “Australian Economic Statistics: 1949-50 to 1986-7: I Tables.” Occasional Paper No. 8A (1988).

Reserve Bank of Australia. Current Account – Balance of Payments – H1 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/H01bhist.xls.

Reserve Bank of Australia. Gross Domestic Product – G10 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/G10hist.xls.

Reserve Bank of Australia. Unemployment – Labour Force – G1 [cited 2 February 2006]. Available from http://www.rba.gov.au/Statistics/Bulletin/G07hist.xls.

Schedvin, C. B. Australia and the Great Depression: A Study of Economic Development and Policy in the 120s and 1930s. Sydney: Sydney University Press, 1970.

Schedvin, C.B. “Midas and the Merino: A Perspective on Australian Economic History.” Economic History Review 32, no. 4 (1979): 542-56.

Sinclair, W. A. The Process of Economic Development in Australia. Melbourne: Longman Cheshire, 1976.

United Nations Development Programme. Human Development Index [cited 29 November 2005]. Available from http://hdr.undp.org/statistics/data/indicators.cfm?x=1&y=1&z=1.

Vamplew, Wray, ed. Australians: Historical Statistics. Edited by Alan D. Gilbert and K. S. Inglis, Australians: A Historical Library. Sydney: Fairfax, Syme and Weldon Associates, 1987.

White, Colin. Mastering Risk: Environment, Markets and Politics in Australian Economic History. Melbourne: Oxford University Press, 1992.

World Bank. World Development Indicators ESDS International, University of Manchester, September 2005 [cited 29 November 2005]. Available from http://www.esds.ac.uk/International/Introduction.asp.

Citation: Attard, Bernard. “The Economic History of Australia from 1788: An Introduction”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL
http://eh.net/encyclopedia/the-economic-history-of-australia-from-1788-an-introduction/

Historical Anthropometrics

Timothy Cuff, Westminster College

Historical anthropometrics is the study of patterns in human body size and their correlates over time. While social researchers, public health specialists and physical anthropologists have long utilized anthropometric measures as indicators of well-being, only within the past three decades have historians begun to use such data extensively. Adult stature is a cumulative indicator of net nutritional status over the growth years, and thus reflects command over food and access to healthful surroundings. Since expenditures for these items comprised such a high percentage of family income for historical communities, mean stature can be used to examine changes in a population’s economic circumstances over time and to compare the well-being of different groups with similar genetic height potential. Anthropometric measures are available for portions of many national populations as far back as the early 1700s. While these data often serve as complements to standard economic indicators, in some cases they provide the only means of assessing historical economic well-being, as “conventional” measures such as per capita GDP, wage and price indices, and income inequality measures have been notoriously spotty and problematic to develop. Anthropometric research findings to date have contributed to the scholarly debates over mortality trends, the nature of slavery, and the outcomes of industrialization and economic development. Height has been the primary indicator utilized to date. Other indicators include height-standardized weight indices, birth weight, and age at menarche. Potentially even more important, historical anthropometrics broadens the understanding of “well-being” beyond the one-dimensional “ruler” of income, providing another lens through which the quality of historical life can be viewed.

This article:

  • provides a brief background of the field including a history of human body measurement and analysis and a description of the biological foundations for historical anthropometrics,
  • describes the current state of the field (along with methodological issues) and future directions, and
  • provides a selective bibliography.

Anthropometrics: Historical and Bio-Medical Background

The Evolution of Body Measurement and Analysis in Context

The measurement and description of the human form in the West date back to the artists of classical civilizations, but the rationale for systematic, large-scale body measurement and record keeping emerged out of the needs of early modern military organizations. By the mid-eighteenth century, height commonly provided a means of classifying men into military units and of identifying them within those units, and the procedures for measuring individuals entering military service were well established. The military’s need to identify recruits has provided most historical measurements of young men.

Scientific curiosity in the eighteenth century also spurred development of the first textbooks on human growth, although they were more concerned with growth patterns throughout life than with stature differences across groups or over time. In the nineteenth century, class differences in height were easily observable in England. The moral outrage generated by the “tiny children” (Charles Dickens’ “Oliver Twists”), along with the view that medicine had a preventive as well as a curative function, meant that anthropometry was directed primarily at the poor, especially children toiling in the factories of English and French industrial cities. Later, fear in Britain over the “degeneration” of its men and their potential as an effective fighting force provided motivation for large-scale anthropometric surveys, as did efforts evolving out of the child-welfare movement. The early twentieth century saw the establishment of a series of longitudinal population surveys (which follow individuals as they age) in North America and in Europe. In some cases this work was directed toward the generation of growth standards, while other efforts evaluated social-class differences among children. Such studies can be seen as transitional steps between contemporary and historical anthropometrics. Since the 1950s, anthropometry has been utilized for a variety of purposes in both the developed and underdeveloped world. Population groups have been measured in order to refine growth standards, to monitor the nutritional status of individuals and populations during famines and political disturbances, and to evaluate the effectiveness of economic development programs.

Anthropometric studies today can be classified as one of three types. Auxologists perform basic research, collecting body measurements over the human life cycle to further detail standards of physical development for twenty-first century populations. The second focus, a continuation of nineteenth century work, documents the living standards of children often supporting regulatory legislation or government aid policies. The third direction is historical anthropometrics. Economists, historians, and anthropologists specializing in this field seek to assess, in physical terms, the well-being of previous societies and the factors which influenced it.

Human Growth and Development: The Biological Foundations of Historical Anthropometrics

While historical anthropometric research is a relatively recent development, an extensive body of medical literature relating nutrition and epidemiological conditions to physical growth provides a strong theoretical underpinning. Bio-medical literature, along with the World Health Organization, describes mean stature as one of the best measures of overall health conditions within a society.

Final attained height and height by age both result from a complex interaction of genetic endowment and environmental effects. At the level of the individual, genetics is a strong but not exclusive influence on the determination of final height and of growth patterns. Genetics is most important when net nutrition is optimal. However, when evaluating differences among groups of people in sub-optimal nutritional circumstances environmental influences predominate.

The same nutritional regime can result in different final stature for particular individuals, because of genetic variation in the ability to continue growing in the face of adverse nutritional circumstances, epidemiological environments, or work requirements. However, the genetic height potential of most Europeans, Africans, and North Americans of European or African ancestry is comparable; i.e., under equivalent environmental circumstances the groups have achieved nearly identical mean adult stature. For example, in many parts of rural Africa, mean adult heights today are similar to those of Africans of 150 years ago, while well-fed urban Africans attain final heights similar to current-day Europeans and North Americans of European descent. Differences in nutritional status do result in wide variation in adult height even within populations of the same genetic make-up. For example, individuals from higher socio-economic classes tend to be taller than their lower class counterparts whether in impoverished third-world countries or in the developed nations.

Height is the most commonly utilized, but not the only, anthropometric indicator of nutritional status. The growth profile is another. Environmental conditions, while affecting the timing of growth (the ages at which accelerations and decelerations in growth rates occur), do not affect the overall pattern (the sequence in which growth/maturation events occur). The body seems to be self-stabilizing, postponing growth until caloric levels will support it and maintaining genetically programmed body proportions more rigidly than size potential. While final adult height and length of the growth period are not absolutely linked, populations which stop growing earlier usually, although not universally, end up being taller. Age at menarche, birth weight, and weight-for-height are also useful. Age at menarche (i.e., the first occurrence of menstruation) is not a measure of physical size, but of sexual maturation. Menarche generally occurs earlier among well-nourished women. Average menarcheal age in the developed West is about 13 years, while in the middle of the nineteenth century it was between 15 and 16 years among European women. Areas which have not experienced nutritional improvement over the past century have not witnessed decreases in the age at menarche. Infant birth weight, an indicator of long-term maternal nutritional status, is influenced by the mother’s diet, work intensity, quality of health care, maternal size, and the number of children she has delivered, as well as the mother’s health practices. The level of economic inequality and social class status are also correlated with birth weight variation, although these variables reflect some of the factors noted above. However, because the mother’s diet and health status are such strong influences on birth weight, it provides another useful means of monitoring women’s well-being. Height-standardized weight indices, particularly the body mass index (BMI), have seen some use by anthropometric historians. Contemporary bio-medical research linking BMI levels to mortality risk hints at the promise this measure might hold for historians. However, the limited availability of weight measurements before the mid-nineteenth century will limit the studies which can be produced.
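The body mass index mentioned above is straightforward to compute from the kinds of height and weight records historians work with. A minimal sketch (the recruit's measurements below are invented for illustration, and the unit conversions are the standard inch-to-meter and pound-to-kilogram factors):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Hypothetical example: a recruit measured at 66.3 inches and 140 pounds
height = 66.3 * 0.0254    # inches -> meters
weight = 140 * 0.453592   # pounds -> kilograms
print(round(bmi(weight, height), 1))  # prints 22.4
```

Because military and prison registers recorded height far more often than weight, calculations like this are possible only for the subset of sources that captured both measurements.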

Improvements in net nutritional status, both across wide segments of the population in developed countries and within urban areas of less-developed countries (LDCs), are generally accepted as the most salient influence on growth patterns and final stature. The widely experienced improvement in net nutrition apparent in most of the developed world across most of the twentieth century, and more recently in the “modern” sector of some LDCs, has led to a secular trend: a sustained, unidirectional movement toward greater stature and faster maturation. Before the twentieth century, height cycling without a distinct direction was the dominant historical pattern. (Two other sources of stature increase have been hypothesized but have garnered little support among the medical community: the increased practice of infantile smallpox vaccination and heterosis (hybrid vigor), i.e. varietal cross-breeding within a species which produces offspring who are larger or stronger than either parent.)

The Definition and Determination of Nutritional Status

“Nutritional status” is a term critical to an understanding of anthropometrics. It encompasses more than simply diet, i.e. the intake of calories and nutrients, and is thus distinct from the more common term “nutrition.” While nutrition refers to the quantity and quality of food inputs to the human biological system, it takes no account of the nutrients required for healthy functioning, which depend on the demands placed on the individual. Nutritional status, or synonymously “net nutrition,” refers to the summing up of nutrient input and demand on those nutrients. While work intensity is the most obvious demand, it is just one of many. Energy is required to resist infection. Pregnancy adds caloric and nutrient demands, as does breast-feeding. Calories expended in any of these fashions are available neither for basal metabolism, nor for growth. The difference between nutrition and nutritional status/net nutrition is important for anthropometrics, because it is the latter, not the former, for which auxological measurements are a proxy.
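The nutrition/net-nutrition distinction can be expressed as a simple energy-balance identity: intake minus the claims against it, with only the residual available for growth. The sketch below is an illustrative accounting identity, not a clinical model, and the calorie figures in the example are invented:

```python
def net_nutrition_kcal(intake, basal_metabolism, work,
                       disease_resistance=0, pregnancy=0, lactation=0):
    """Net nutrition as an energy balance: calories consumed minus the
    claims against them. A positive residual is available for growth."""
    return intake - (basal_metabolism + work + disease_resistance
                     + pregnancy + lactation)

# Two children with identical diets but different work and disease burdens:
# the second, despite eating the same amount, has nothing left for growth.
print(net_nutrition_kcal(2200, basal_metabolism=1300, work=400))   # prints 500
print(net_nutrition_kcal(2200, basal_metabolism=1300, work=600,
                         disease_resistance=350))                  # prints -50
```

The example makes concrete why identical diets can yield different statures: the same intake supports growth in one environment and falls short in another.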

Human biologists and medical scientists generally agree that within genetically similar populations net nutrition is the primary determinant of adult physical stature. Height, as Bielicki notes, is “socially induced variation.” Figure 1 indicates the numerous channels of influence on the final adult stature of any individual. Anthropometric indicators reflect the relative ease or difficulty of acquiring sufficient nutrients to provide for growth in excess of the immediate needs of the body. Nutritional status and physical stature clearly are composite measures of well-being linked to economic processes. However, the link is mediated through a variety of social circumstances, some volitional, others not. Hence, anthropometric historians must evaluate each situation within its own economic, cultural, and historical context.

In earlier societies, and in some less developed countries today, access to nutrients was determined primarily by control of arable land. As markets for food developed and urban living became predominant, for increasing percentages of the population, access to nutrients depended upon the ability to purchase food, i.e. on real income. Additionally, food allocation within the family is not determined by markets but by intra-household bargaining as well as by tastes and custom. For example, in some cultures households distribute food resources so as to ensure nutritional adequacy for those family members engaged in income or resource-generating activity in order to maximize earning power. The handful of studies which include historical anthropometric data for women reveal that stature trends by gender do not always move in concert. Rather, in periods of declining nutritional status, women often exhibited a reduction in stature levels before such changes appeared among males. This is somewhat paradoxical because biologists generally argue that women’s growth trajectories are more resistant to a diminution in nutritional status than are those of men. Though too little historical research has been done on this issue to speak with certainty, the pattern might imply that, in periods of nutritional stress, women bore the initial brunt of deprivation.

Other cultural practices, including the high status accorded to the use of certain foods, such as white flour, polished rice, tea or coffee may promote greater consumption of nutritionally less valuable foods among those able to afford them. This would tend to reduce the resultant stature differences by income. Access to nutrients also depends upon other individual choices. A small landholder might decide to market much of his farm’s high-value, high-protein meat and dairy products, reducing his family’s consumption of these nutritious food products in order to maximize money income. However, while material welfare would increase, biological welfare, knowingly or unknowingly, would decline.

Disease-exposure variation occurs as a result of some factors under the individual’s control and other factors which are determined at the societal level. Pathogen prevalence and potency and the level of community sanitation are critical factors which are not directly affected by individual decision making. However, housing and occupation are often individually chosen and do help to determine the extent of disease exposure. Once transportation improvements allow housing segregation based on socio-economic status to occur within large urban areas, residence location can become an important influence. However, prior to such segregation, for example in the mid-nineteenth-century United States, urban childhood mortality levels were more influenced by the number of children in a family than by parental occupation or socio-economic status. The close proximity of the homes of the wealthy and the poor seems to have created a common level of exposure to infectious agents and equally poor sanitary conditions for children of all economic classes.

Work intensity, another factor determining nutritional status, is a function of the age at which youth enter the labor force, educational attainment, the physical exertion needed in a chosen occupation, and the level of technology. There are obvious feedback effects from current nutritional status to future nutritional status. A low level of nutritional status today might hinder full-time labor-force participation, and result in low incomes, poor housing, and substandard food consumption in subsequent periods as well, thereby reinforcing the cycle of nutritional inadequacy.

Historical Anthropometrics

Early Developments in the Field

Le Roy Ladurie’s studies of nineteenth-century French soldiers, published in the late 1960s and early 1970s, are recognized as the first works in the spirit of modern historical anthropometrics. He documented that stature among French recruits varied with their socio-economic characteristics. In the U.S., the research was carried forward in the late 1970s, much of it based on nineteenth-century records of U.S. slaves transported from the upper to the lower South. Studies of Caribbean slaves followed.

In the 1980s numerous anthropometric works were generated in connection with a National Bureau of Economic Research (NBER) directed study of American and European mortality trends from 1650 to the present, coordinated by Robert W. Fogel. Motivated in great part by the desire to evaluate Thomas McKeown’s hypothesis that improvements in nutrition were the critical component in mortality declines in the seventeenth through the nineteenth centuries, the project has led to the creation of numerous large anthropometric databases. These have been the starting point for the analysis of trends in physical stature and net nutritional status on both sides of the Atlantic. While most historical anthropometric studies published in the U.S. during the early and mid-1980s were either outgrowths of the NBER project or were conducted by students of Robert Fogel, such as Richard Steckel and John Komlos, mortality trends were no longer the sole focus of historical anthropometrics. Anthropometric statistics were used to analyze the effect of industrialization on the populations experiencing it, as well as the characteristics of slavery in the United States. The data sources were primarily military records or documents relating to slaves. As the 1980s became the 1990s, the geographic range of stature studies moved beyond Europe and North America to include Asia, Australia, and Africa. Other data sources were utilized. These included records from schools and utopian communities, certificates of freedom for manumitted slaves, voter registration cards, newspaper advertisements for runaway slaves and indentured servants, insurance applications, and a variety of prison inmate records. The number of anthropometric historians also expanded considerably.

Findings to Date

Major achievements to date in historical anthropometrics include 1) the determination of the main outlines of the trend in physical stature in Europe and North America between the eighteenth and twentieth centuries, and 2) the emergence of several well-supported, although still debated, hypotheses pertaining to the relationship between height and the economic and social developments which accompanied modern economic growth in these centuries.

Historical research on human height has indicated how much healthier the New World environment was compared to that of Europe. Europeans who immigrated to North America, on average, obtained a net nutritional status far better than that which was possible for them to attain in their place of birth. Eighteenth century North Americans attained mean heights not achieved by Europeans until the twentieth century. The combination of lower population density, lower levels of income inequality, and greater food resources bestowed a great benefit upon those growing up in North America. This advantage is evident not only in adult heights but also in the earlier timing of the adolescent growth spurt, as well as the earlier attainment of final height.

Table 1
Mean Heights of Adult Males (in inches)

North America, European ancestry:
  1775-1783: 68.1
  1861-1865: 68.5
  1943-1944: 68.1

North America, African ancestry:
  1811-1861: 67.0
  1943-1944: 67.9

Europe:
  Hungary, 1813-1835: 64.2
  England, 1816-1821: 65.8
  Sweden, 1843-1886: 66.3
Sources: U.S. whites, 1775-1783: Kenneth L. Sokoloff and Georgia C. Villaflor, “The Early Achievement of Modern Stature in America,” Social Science History 6 (1982): 453-481. U.S. whites, 1861-65: Robert Margo and Richard Steckel, “Heights of Native-Born Whites during the Antebellum Period,” Journal of Economic History 43 (1983): 167-174. U.S. whites and blacks, 1943-44: Bernard D. Karpinos, “Height and Weight of Selective Service Registrants Processed for Military Service during World War II,” Human Biology 40 (1958): 292-321, Table 5. U.S. blacks, 1811-1861: Robert Margo and Richard Steckel, “The Height of American Slaves: New Evidence on Slave Nutrition and Health,” Social Science History 6 (1982): 516-538, Table 1. Hungary: John Komlos. Nutrition and Economic Development in the Eighteenth Century Habsburg Monarchy, Princeton: Princeton University Press, 1989, Table 2.1, 57. Britain: Roderick Floud, Kenneth Wachter, and Annabel Gregory, Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980, Cambridge: Cambridge University Press, 1990, Table 4.1, 148. Sweden: Lars G. Sandberg and Richard Steckel, “Overpopulation and Malnutrition Rediscovered: Hard Times in 19th-Century Sweden,” Explorations in Economic History 25 (1988): 1-19, Table 2, 7.

Note: Dates refer to dates of measurement.

Stature Cycles in Europe and America

The early finding that there was not a unidirectional upward trend in stature since the 1700s startled researchers, whose expectations were based on recent experience. Extrapolating backward, Floud, Wachter, and Gregory note that such surprise was misplaced, for if the twentieth century’s rate of height increase had been occurring for several centuries, medieval Europeans would have been dwarfs or midgets. Instead, in Europe cycles in height were evident. Though smaller in amplitude than in Europe, stature cycling was a feature of the American experience, as well. At the time of the American Revolution, the Civil War, and World War II, the mean height of adult, native-born white males was a fraction over 68 inches (Table 1), but there was some variation in between these periods with a small decline in the years before the Civil War and perhaps another one from 1860 into the 1880s. Just before the turn of the twentieth century, mean stature began its relatively uninterrupted increase which continues to the present day. These findings are based primarily on military records drawn from the early national army, Civil War forces, West Point Cadets, and the Ohio National Guard, although other data sets show similar trends. The free black population seems to have experienced a downturn in physical stature very similar to that of whites in the pre-Civil War period. However, an exception to the antebellum diminution in nutritional status has been found among slave men.

Per Capita Income and Height

In addition to the cycling in height, anthropometric historians have documented that the intuitively anticipated positive correlation between mean height and per capita income holds at the national level in the twentieth century. Steckel has shown that, in cross-national comparison, the correlation between height and per capita income is as high as .84 to .90. However, since per capita income is highly correlated with a series of other variables that also affect height, the exact pathway through which income affects height is not fully clear. Among the factors which help to explain the variation are better diet, medicine, improvements in sanitary infrastructure, longer schooling, more sedentary life, and better housing. Intense work regimes and psycho-social stress, both of which affect growth negatively, might also be mitigated by greater per capita income. However, prior to the twentieth century the relationship between height and income was not monotonic. U.S. troops during the American Revolution were nearly as tall as U.S. soldiers sent to Europe and Japan in the 1940s, despite the fact that per capita income in the earlier period was substantially below that in the latter. Similarly, while per capita income in the U.S. in the late 1770s was below that of the British, the American troops had a height advantage of several inches over their British counterparts in the War of Independence.
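The cross-national correlation Steckel reports can be illustrated with a standard Pearson coefficient. The country-level figures below are invented for the example (they are not Steckel's data); the point is only to show the calculation behind a height-income correlation of the magnitude cited:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical national means: per capita GDP (dollars) and adult male height (cm)
gdp = [1200, 3500, 8000, 15000, 24000]
height = [163.0, 167.5, 171.0, 174.5, 177.0]
print(round(pearson(gdp, height), 2))  # prints 0.95
```

A strong twentieth-century correlation like this is exactly what makes the pre-twentieth-century exceptions in the paragraph above (tall but relatively poor Revolutionary-era Americans) analytically interesting.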

Height and Income Inequality

The level of income inequality also has a powerful influence on mean heights. Steckel’s analysis of data for the twentieth century indicates that a 0.1 decrease in the Gini coefficient (indicating greater income equality) is associated with a gain in mean stature of about 3.7 cm (1.5 inches). In societies with great inequality, increases in per capita income have little effect on average stature if the gains accrue primarily to the wealthier segments of the society. Conversely, even without changes in average national per capita income, a reduction in inequality can have similar positive impact upon the stature and health of those at the lower rungs of the income ladder.
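Steckel's estimate lends itself to a back-of-the-envelope calculation. The sketch below computes a Gini coefficient with the standard mean-absolute-difference formula and applies the 3.7 cm per 0.1-point figure from the text as a linear slope; the two income distributions are invented, and extrapolating the slope across a large Gini change stretches a relationship estimated on twentieth-century cross-sections:

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(x - y) for x in incomes for y in incomes) / (n * n)
    return mad / (2 * mean)

def stature_gain_cm(gini_before, gini_after, slope_cm_per_unit=37.0):
    """Predicted mean-stature change from a change in the Gini coefficient,
    using Steckel's ~3.7 cm per 0.1-point estimate as a linear approximation."""
    return (gini_before - gini_after) * slope_cm_per_unit

# The 0.1-point case from the text:
print(round(stature_gain_cm(0.50, 0.40), 1))  # prints 3.7

# Two hypothetical income distributions, the second far more equal
unequal = [100, 200, 400, 800, 3500]
equal = [600, 800, 1000, 1200, 1400]
print(round(gini(unequal), 2), round(gini(equal), 2))  # prints 0.59 0.16
```

The calculation makes the text's point mechanical: with incomes held fixed, redistribution alone moves the predicted mean stature.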

The high level of social inequality at the onset of modern economic growth in England is exemplified by the substantial disparity between the height of students of the Sandhurst Royal Military Academy, an elite institution, and the Marine Society, a home for destitute boys in London. The difference in mean height at age fourteen exceeded three inches in favor of the gentry. In some years the gap was even greater. Komlos has documented similar findings elsewhere: regardless of location, boys from “prestigious military schools in England, France, and Germany were much taller than the population at large.” A similar pattern existed in the nineteenth-century U.S. However, the social gap in the U.S. was miniscule compared to that prevailing in the Old World. Stature also varied by occupational groups. In eighteenth and nineteenth century Europe and North America, white collar and professional workers tended to be significantly taller than laborers and unskilled workers. However, farmers, being close to the source of nutrients and with fewer interactions with urban disease pools, tended to be the tallest, though their advantage disappeared by the twentieth century.

Regional and Rural-Urban Differences

Floud, Wachter, and Gregory have shown that, in early nineteenth century Britain, regional variation in stature dwarfed occupational differences. In 1815, Scotsmen, rural and urban, as well as the Irish, were about one-half an inch taller than the non-London urban English of the day. The rural English were slightly shorter, on average, than Englishmen born in small and medium sized towns. Londoners, however, had a mean height almost one-third of an inch less than other urban dwellers in England and more than three-quarters of an inch below the Irish or the Scots. A similar pattern held among convicts transported to New South Wales, Australia, except that the stature of the rural English was well above the average for all other English transported convicts. Floud, Wachter, and Gregory show a trend of convergence in height among these groups after 1800. The tendency for low population density rural areas in the nineteenth century to be home to the tallest individuals was apparent from the Habsburg Monarchy to Scotland, and in the remote northern regions of late nineteenth-century Sweden and Japan as well. In colonial America the rural-urban gradient did not exist. As cities grew, the rural born began to display a stature advantage over their urban brethren. This divergence persisted into the nineteenth century, and disappeared in the early twentieth century, when the urban-born gained a height advantage.

The Early-Industrial-Growth and Antebellum Puzzles

These patterns of stature variation have been put into a framework in both the European and the American contexts. Respectively they are known as the “early-industrial-growth puzzle” and the “Antebellum puzzle.” The commonality which has been identified is that in the early stages of industrialization and/or market integration, even with rising per capita incomes, the biological well-being of the populations undergoing such change does not, necessarily, improve immediately. Rather, for at least some portions of the population, biological well-being declined during this period of economic growth. Explanations for these paradoxes (or puzzles) are still being investigated and include: rising income inequality, the greater spread of disease through more thoroughly developed transportation and marketing systems and urban growth, the rising real price of food as population growth outstripped the agricultural system’s ability to provide, and the choice of farmers to market rather than consume high value/high protein crops.

Slave Heights

Research on slave heights has provided important insight into the living standards of these bound laborers. Large differences in stature have been documented between slaves on the North American mainland and those in the Caribbean. Adult mainland slaves, both women and men, were approximately two inches taller than those in the West Indies throughout the eighteenth and nineteenth centuries. Steckel argues that the growth pattern and infant mortality rates of U.S. slave children indicate that they were moderately to severely malnourished, with mean heights for four to nine year olds below the second percentile of modern growth standards and with mortality rates twice those estimated for the entire United States population. Although below the fifth percentile throughout childhood, as adults these slaves were relatively tall by nineteenth-century standards, reaching about the twenty-fifth percentile of today’s height distribution, taller than most European populations of the time.
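The percentile comparisons above rest on converting a historical mean height into a position within a modern reference distribution. A minimal sketch of that conversion, treating the reference distribution as normal (the reference mean and standard deviation below are illustrative stand-ins, not actual modern growth-standard values):

```python
import math

def percentile_of(height_cm, ref_mean, ref_sd):
    """Approximate percentile of a height within a reference population,
    treating the reference distribution as normal (an approximation)."""
    z = (height_cm - ref_mean) / ref_sd
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative numbers only: a hypothetical adult mean of 170.6 cm compared
# against a hypothetical modern reference of 175.3 cm (s.d. 7 cm).
p = percentile_of(170.6, 175.3, 7.0)
print(f"approximately the {p:.0f}th percentile of the reference distribution")
```

Actual growth standards tabulate age- and sex-specific percentiles rather than assuming normality, but the underlying comparison works the same way.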

Height’s Correlation with Other Biological Indicators

The evaluation of McKeown’s hypothesis that much of the modern decline in mortality rates could be traced to improvements in nutrition (food intake) was one of the early rationales for the modern study of historical stature. Subsequent work has presented evidence for the parallel cycling of height and life expectancy in the United States during the nineteenth century. The relationship between the body-mass index, morbidity, and mortality risk within historical populations has also been documented. Along a similar line, Sandberg and Steckel’s data on Sweden have pointed out the parallel nature of stature trends and childhood mortality rates in the mid-nineteenth century.

Economic and social history are not the only two fields which have felt historical anthropometrics’ impact. North American slave height-by-age profiles developed by Steckel have been used by auxologists to exemplify the range of possible growth patterns among humans. Based on findings within the biological sciences, historical studies of stature have come full circle and are providing those same sciences with new data on human physical potential.

Methodological Issues

Accuracy problems in military-based data sets arise predominantly from carelessness of the measurer or from intentional misreporting of data rather than from lack of orthodox practice. Inadequate concern for accuracy can most often be noticed in heaping (height observations rounded to whole feet, six inch increments, or even numbered inches) and lack of fractional measurements. These “rounding” errors tend to be self-canceling. Of greater concern is intentional misreporting of either height or age, because minimum stature and age restrictions were often applied to military recruits. Young men, eager to discover the “romance” of military life or receive the bounty which sometimes accompanied enlistment, were not impervious to slight fabrication of their age. Recruiting officers, hoping to meet their assigned quotas quickly, might have been tempted to round measurements up to the minimum height requirement. Hence, it is not uncommon to find height and age heaping at either the age or stature minima.
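The heaping pattern described here is easy to see in simulated data. The sketch below uses made-up numbers (the 30 percent heaping rate and the quarter-inch recording convention are assumptions) to compare the share of records landing on even whole inches with the share expected under unbiased fine-grained measurement:

```python
import random

random.seed(1)

# Hypothetical recruit heights (inches): careless measurers round 30% of
# them to the nearest even inch; the rest are recorded to the quarter inch.
true_heights = [random.gauss(67.0, 2.5) for _ in range(10_000)]
recorded = [round(h / 2) * 2 if random.random() < 0.30 else round(h * 4) / 4
            for h in true_heights]

# Under quarter-inch recording, about 1 in 8 values should fall on an even
# whole inch; a substantially larger share signals heaping.
even_whole = sum(1 for h in recorded if h == int(h) and int(h) % 2 == 0)
share = even_whole / len(recorded)
print(f"share at even whole inches: {share:.2f} (expect ~0.125 without heaping)")
```

A diagnostic of this kind only detects heaping; as the text notes, symmetric rounding errors largely cancel in the mean, whereas heaping at a legal minimum does not.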

For anthropometric historians, the issue of the representativeness of the population under study is similar to that for any social historian, but several specific caveats are appropriate when considering military samples. In time of peace military recruits tend to be less representative of the general population than are wartime armies. The military, with fewer demands for personnel, can be more selective, often instituting more stringent height minima, and occasionally maxima, for recruits. Such policies, as well as the self-interested behaviors noted above, require those who would use military data sets to evaluate and potentially adjust the data to account for the observations missing due to either left or right tail truncation. A series of techniques to account for such difficulties in the data have been developed, although there is still debate over the most appropriate technique. Other data sets also exhibit selectivity biases, although of different natures. Prison registers clearly do not provide a random sample of the population. The filter, however, is not based on size or desire for “exciting” work – rather on the propensity for criminal activity and on the enforcement mechanism of the judicial system. The representativeness of anthropometric samples can also be affected by previous selection by the Grim Reaper. Within Afro-Caribbean slave populations in Trinidad, death rates were significantly higher for shorter individuals (at all ages) than for the taller ones. The result is that a select group of more robust and taller individuals remained alive for eventual measurement.
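One family of techniques for handling minimum-height truncation fits a normal distribution to the observed (truncated) sample by maximum likelihood. The sketch below is illustrative only: the population parameters are made up, and a crude grid search stands in for the estimators actually debated in the literature (e.g., the Wachter-Trussell approach):

```python
import math
import random

random.seed(2)

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical population: mean 66 in, s.d. 2.5 in. The army records only
# recruits meeting a 64-inch minimum, so the sample is left-truncated.
mu_true, sigma_true, cutoff = 66.0, 2.5, 64.0
sample = [h for h in (random.gauss(mu_true, sigma_true) for _ in range(2_000))
          if h >= cutoff]

def trunc_loglik(mu, sigma, data, c):
    """Log-likelihood of a normal(mu, sigma) truncated from below at c."""
    n = len(data)
    ll = -n * math.log(sigma) - n * math.log(1.0 - norm_cdf((c - mu) / sigma))
    ll -= n * 0.5 * math.log(2.0 * math.pi)
    ll -= sum(0.5 * ((x - mu) / sigma) ** 2 for x in data)
    return ll

# The naive sample mean ignores the missing left tail and overstates height.
naive_mean = sum(sample) / len(sample)

# Crude grid search for the truncated-normal MLE.
mu_grid = [64.0 + 0.2 * i for i in range(25)]
sigma_grid = [1.5 + 0.2 * i for i in range(13)]
mle_mu, mle_sigma = max(((m, s) for m in mu_grid for s in sigma_grid),
                        key=lambda p: trunc_loglik(p[0], p[1], sample, cutoff))
print(f"naive mean: {naive_mean:.2f}  MLE mean: {mle_mu:.2f} (true {mu_true})")
```

The correction matters because the naive mean is biased upward by the missing short recruits; disagreement over how best to perform this adjustment is part of the methodological debate noted above.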

One difficulty faced by anthropometric historians is the association of this research, more imagined than real, with previous misuses of body measurement. Nineteenth century American phrenologists used skull shape and size as a means of determining intelligence and as a way of justifying the enslavement of African-Americans. The Bertillon approach to evaluating prison inmates included the measurement and classification of lips, ears, feet, nose, and limbs in an effort to discern a genetic or racial basis for criminality. The Nazis attempted to breed the perfect race by eliminating what they perceived to be physically “inferior” peoples. Each, appropriately, has made many squeamish in regard to the use of body measurements as an index of social development. Further, while the biological research which supports historical anthropometrics is scientifically well founded and fully justifies the approach, care must be exercised to ensure that the impression is not given that researchers either are searching for, or promoting, an “aristocracy of the tall.” Being tall is not necessarily better in all circumstances, although recent work does indicate a series of social and economic advantages do accrue to the tall. However, for populations enduring an on-going sub-optimal net nutritional regime, an increase in mean height does signify improvement in the net nutritional level, and thus the general level of physical well-being. Untangling the factors responsible for change in this social indicator is complicated and height is not a complete proxy for the quality of life. However, it does provide a valuable means of assessing biological well-being in the past and the influence of social and economic developments on health.

Future Directions

Historical anthropometrics is maturing. Over the past several years a series of state-of-the-field articles and anthologies of critical works have been written or compiled. Each summarizes past accomplishments, consolidates isolated findings into more generalized conclusions, and/or points out the next steps for researchers. In 2004, the editors of Social Science History devoted an entire volume to anthropometric history, drawing upon both current work and remembrances of many of the field’s early and prominent researchers, including an integrative essay by Komlos and Baten. Anthropometric history now has its own journal: John Komlos, who has established a center for historical anthropometrics in Munich, created Economics and Human Biology, “devoted to the exploration of the effect of socio-economic processes on human beings as biological organisms.” Early issues highlight the wide geographic, temporal, and conceptual range of historical anthropometric studies. Another project which shows the great range of current effort is Richard Steckel’s work with anthropologists to characterize very long-term patterns in the movement of mean human height. This collaboration has already produced The Backbone of History: Health and Nutrition in the Western Hemisphere, a compilation of essays documenting the biological well-being of New World populations beginning in 5000 B.C. using anthropological evidence. Its findings, consistent with those of some other recent anthropological studies, indicate a decline in health status for members of Western Hemisphere cultures in the pre-Columbian period as these societies began the transition from economies based on hunting and gathering to ones relying more heavily on settled agriculture. Steckel has been working to expand this approach to Europe via a collaborative and interdisciplinary project funded in part by the U.S. National Science Foundation, titled “A History of Health in Europe from the Late Paleolithic Era to the Present.”

Yet even with these impressive steps, continued work, similar to early efforts in the field, is still needed. Expanding the number and type of samples is an important step in the confirmation and consolidation of early results. One of the field’s on-going frustrations is that, except for slave records, few data sets contain physical measurements for large numbers of females. To date, female slaves and ex-slaves, some late nineteenth-century U.S. college women, and transported female convicts are the primary sources of female historical stature. Generalizations of research findings to entire populations are hindered by the small amount of data on females and by the knowledge, from the data that are extant, that stature trends for the two sexes do not mimic each other. Similarly, upper-class samples of either sex are not common. Future efforts should be directed at locating samples which contain data on these two understudied groups.

As Riley noted, the problem which anthropometric historians seek to resolve is not the identification of likely influences on stature. The biological sciences have provided that theoretical framework. The task at hand is to determine the relative weight of the various influences or, in Fogel’s terms, to perform “an accounting exercise of particularly complicated nature, which involves measuring not only the direct effect of particular factors but also their indirect effects and their interactions with other factors.”

More localized studies, with sample sizes adequate for statistical analysis, are needed. These will allow the determination of the social, economic, and demographic factors most closely associated with variation in human height. Other key areas for future investigation include the functional consequences of the differences in biological well-being proxied by height, including differences in labor productivity and life expectancy. Even with the strides that have been made, skepticism about the approach remains in some corners. To combat this, researchers must be careful to stress repeatedly what anthropometric indicators proxy, what their limits are, and how knowledge of anthropometric trends can appropriately influence our understanding of economic and social history as well as inform social policy. The field promises many future insights into the nature of and influences on historical human well-being, and thus clues about how human well-being, the focus of economics generally, can be more fully and more widely advanced.

Selected Bibliography

Survey/Overview Publications

Engerman, Stanley. “The Standard of Living Debate in International Perspective: Measures and Indicators.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud, 17-46. Chicago: University of Chicago Press, 1997.

Floud, Roderick, and Bernard Harris. “Health, Height, and Welfare: Britain 1700-1980.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud, 91-126. Chicago: University of Chicago Press, 1997.

Floud, Roderick, Kenneth Wachter, and Annabelle Gregory. “The Heights of Europeans since 1750: A New Source for European Economic History.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 10-24. Chicago: University of Chicago Press, 1994.

Floud, Roderick, Kenneth Wachter, and Annabelle Gregory. Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980. Cambridge: Cambridge University Press, 1990.

Fogel, Robert W. “Nutrition and the Decline in Mortality since 1700: Some Preliminary Findings.” In Long-Term Factors in American Economic Growth, edited by Stanley Engerman and Robert Gallman, 439-527. Chicago: University of Chicago Press, 1987.

Haines, Michael R. “Growing Incomes, Shrinking People – Can Economic Development Be Hazardous to Your Health? Historical Evidence for the United States, England, and the Netherlands in the Nineteenth Century.” Social Science History 28 (2004): 249-70.

Haines, Michael R., Lee A. Craig, and Thomas Weiss. “The Short and the Dead: Nutrition, Mortality, and the ‘Antebellum Puzzle’ in the United States.” Journal of Economic History 63 (June 2003): 382-413.

Harris, Bernard. “Health, Height, History: An Overview of Recent Developments in Anthropometric History.” Social History of Medicine 7 (1994): 297-320.

Harris, Bernard. “The Height of Schoolchildren in Britain, 1900-1950.” In Stature, Living Standards and Economic Development: Essays in Anthropometric History, edited by John Komlos, 25-38. Chicago: University of Chicago Press, 1994.

Komlos, John, and Jörg Baten. The Biological Standard of Living in Comparative Perspectives: Proceedings of a Conference Held in Munich, January 18-23, 1997. Stuttgart: Franz Steiner Verlag, 1999.

Komlos, John, and Jörg Baten. “Looking Backward and Looking Forward: Anthropometric Research and the Development of Social Science History.” Social Science History 28 (2004): 191-210.

Komlos, John, and Timothy Cuff. Classics of Anthropometric History: A Selected Anthology. St. Katharinen, Germany: Scripta Mercaturae, 1998.

Komlos, John. “Anthropometric History: What Is It?” Magazine of History (Spring 1992): 3-5.

Komlos, John. Stature, Living Standards, and Economic Development: Essays in Anthropometric History. Chicago: University of Chicago Press, 1994.

Komlos, John. The Biological Standard of Living in Europe and America 1700-1900: Studies in Anthropometric History. Aldershot: Variorum Press, 1995.

Komlos, John. The Biological Standard of Living on Three Continents: Further Explorations in Anthropometric History. Boulder: Westview Press, 1995.

Steckel, Richard H., and J.C. Rose. The Backbone of History: Health and Nutrition in the Western Hemisphere. New York: Cambridge University Press, 2002.

Steckel, Richard H., and Roderick Floud. Health and Welfare during Industrialization. Chicago: University of Chicago Press, 1997.

Steckel, Richard. “Height, Living Standards, and History.” Historical Methods 24 (1991): 183-87.

Steckel, Richard. “Stature and Living Standards in the United States.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John J. Wallis, 265-310. Chicago: University of Chicago Press, 1992.

Steckel, Richard. “Stature and the Standard of Living.” Journal of Economic Literature 33 (1995): 1903-40.

Steckel, Richard. “A History of the Standard of Living in the United States.” In EH.Net Encyclopedia, edited by Robert Whaples, http://www.eh.net/encyclopedia/contents/steckel.standard.living.us.php

Seminal Articles in Historical Anthropometrics

Aron, Jean-Paul, Paul Dumont, and Emmanuel Le Roy Ladurie. Anthropologie du Conscrit Francais. Paris: Mouton, 1972.

Eltis, David. “Nutritional Trends in Africa and the Americas: Heights of Africans, 1819-1839.” Journal of Interdisciplinary History 12 (1982): 453-75.

Engerman, Stanley. “The Height of U.S. Slaves.” Local Population Studies 16 (1976): 45-50.

Floud, Roderick and Kenneth Wachter. “Poverty and Physical Stature, Evidence on the Standard of Living of London Boys 1770-1870.” Social Science History 6 (1982): 422-52.

Fogel, Robert W. “Physical Growth as a Measure of the Economic Well-being of Populations: The Eighteenth and Nineteenth Centuries.” In Human Growth: A Comprehensive Treatise, second edition, volume 3, edited by F. Falkner and J.M. Tanner, 263-281. New York: Plenum, 1986.

Fogel, Robert W., Stanley Engerman, Roderick Floud, Gerald Friedman, Robert Margo, Kenneth Sokoloff, Richard Steckel, James Trussell, Georgia Villaflor and Kenneth Wachter. “Secular Changes in American and British Stature and Nutrition.” Journal of Interdisciplinary History 14 (1983): 445-81.

Fogel, Robert W., Stanley L. Engerman, and James Trussell. “Exploring the Uses of Data on Height: The Analysis of Long-Term Trends in Nutrition, Labor Welfare, and Labor Productivity.” Social Science History 6 (1982): 401-21.

Friedman, Gerald C. “The Heights of Slaves in Trinidad.” Social Science History 6 (1982): 482-515.

Higman, Barry W. “Growth in Afro-Caribbean Slave Populations.” American Journal of Physical Anthropology 50 (1979): 373-85.

Komlos, John. “The Height and Weight of West Point Cadets: Dietary Change in Antebellum America.” Journal of Economic History 47 (1987): 897-927.

Le Roy Ladurie, Emmanuel, N. Bernageau, and Y. Pasquet. “Le Conscrit et l’ordinateur: Perspectives de recherches sur les archives militaires du XIXe siècle français.” Studi Storici 10 (1969): 260-308.

Le Roy Ladurie, Emmanuel. “The Conscripts of 1868: A Study of the Correlation between Geographical Mobility, Delinquency and Physical Stature and Other Aspects of the Situation of the Young Frenchmen Called to Do Military Service That Year.” In The Territory of the Historian. Translated by Ben and Sian Reynolds. Chicago: University of Chicago Press, 1979.

Margo, Robert and Richard Steckel. “Heights of Native Born Whites during the Antebellum Period.” Journal of Economic History 43 (1983): 167-74.

Margo, Robert and Richard Steckel. “The Height of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-38.

Steckel, Richard. “Height and per Capita Income.” Historical Methods 16 (1983): 1-7.

Steckel, Richard. “Slave Height Profiles from Coastwise Manifests.” Explorations in Economic History 16 (1979): 363-80.

Articles Addressing Methodological Issues

Heintel, Markus, Lars Sandberg and Richard Steckel. “Swedish Historical Heights Revisited: New Estimation Techniques and Results.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 449-58. Stuttgart: Franz Steiner, 1998.

Komlos, John, and Joo Han Kim. “Estimating Trends in Historical Heights.” Historical Methods 23 (1990): 116-20.

Riley, James C. “Height, Nutrition, and Mortality Risk Reconsidered.” Journal of Interdisciplinary History 24 (1994): 465-92.

Steckel, Richard. “Percentiles of Modern Height: Standards for Use in Historical Research.” Historical Methods 29 (1996): 157-66.

Wachter, Kenneth, and James Trussell. “Estimating Historical Heights.” Journal of the American Statistical Association 77 (1982): 279-303.

Wachter, Kenneth. “Graphical Estimation of Military Heights.” Historical Methods 14 (1981): 31-42.

Publications Providing Bio-Medical Background for Historical Anthropometrics

Bielecki, T. “Physical Growth as a Measure of the Economic Well-being of Populations: The Twentieth Century.” In Human Growth, second edition, volume 3, edited by F. Falkner and J.M. Tanner, 283-305. New York: Plenum, 1986.

Bogin, Barry. Patterns of Human Growth. Cambridge: Cambridge University Press, 1988.

Eveleth, Phyllis B. “Population Differences in Growth: Environmental and Genetic Factors.” In Human Growth: A Comprehensive Treatise, second edition, volume 3, edited by F. Falkner and J.M. Tanner, 221-39. New York: Plenum, 1986.

Eveleth, Phyllis B. and James M. Tanner. Worldwide Variation in Human Growth. Cambridge: Cambridge University Press, 1976.

Tanner, James M. “Growth as a Target-Seeking Function: Catch-up and Catch-down Growth in Man.” In Human Growth: A Comprehensive Treatise, second edition, volume 1, edited by F. Falkner and J.M. Tanner, 167-80. New York: Plenum, 1986.

Tanner, James M. “The Potential of Auxological Data for Monitoring Economic and Social Well-Being.” Social Science History 6 (1982): 571-81.

Tanner, James M. A History of the Study of Human Growth. Cambridge: Cambridge University Press, 1981.

World Health Organization. “Use and Interpretation of Anthropometric Indicators of Nutritional Status.” Bulletin of the World Health Organization 64 (1986): 929-41.

Predecessors to Historical Anthropometrics

Bowles, G. T. New Types of Old Americans at Harvard and at Eastern Women’s Colleges. Cambridge, MA: Harvard University Press, 1952.

Damon, Albert. “Secular Trend in Height and Weight within Old American Families at Harvard, 1870-1965.” American Journal of Physical Anthropology 29 (1968): 45-50.

Damon, Albert. “Stature Increase among Italian-Americans: Environmental, Genetic, or Both?” American Journal of Physical Anthropology 23 (1965) 401-08.

Gould, Benjamin A. Investigations in the Military and Anthropological Statistics of American Soldiers. New York: Hurd and Houghton [for the U.S. Sanitary Commission], 1869.

Karpinos, Bernard D. “Height and Weight of Selective Service Registrants Processed for Military Service during World War II.” Human Biology 30 (1958): 292-321.

Publications Focused on Nonstature-Based Anthropometric Measures

Brudevoll, J.E., K. Liestol, and L. Walloe. “Menarcheal Age in Oslo during the Last 140 Years.” Annals of Human Biology 6 (1979): 407-16.

Cuff, Timothy. “The Body Mass Index Values of Nineteenth Century West Point Cadets: A Theoretical Application of Waaler’s Curves to a Historical Population.” Historical Methods 26 (1993): 171-83.

Komlos, John. “The Age at Menarche in Vienna.” Historical Methods 22 (1989): 158-63.

Tanner, James M. “Trend towards Earlier Menarche in London, Oslo, Copenhagen, the Netherlands, and Hungary.” Nature 243 (1973): 95-96.

Trussell, James, and Richard Steckel. “The Age of Slaves at Menarche and Their First Birth.” Journal of Interdisciplinary History 8 (1978): 477-505.

Waaler, Hans Th. “Height, Weight, and Mortality: The Norwegian Experience.” Acta Medica Scandinavica, supplement 679, 1984.

Ward, W. Peter, and Patricia C. Ward. “Infant Birth Weight and Nutrition in Industrializing Montreal.” American Historical Review 89 (1984): 324-45.

Ward, W. Peter. Birth Weight and Economic Growth: Women’s Living Standards in the Industrializing West. Chicago: University of Chicago Press, 1993.

Articles with a Non-western Geographic Focus

Cameron, Noel. “Physical Growth in a Transitional Economy: The Aftermath of South African Apartheid.” Economic and Human Biology 1 (2003): 29-42.

Eltis, David. “Welfare Trends among the Yoruba in the Early Nineteenth Century: The Anthropometric Evidence.” Journal of Economic History 50 (1990): 521-40.

Greulich, W.W. “Some Secular Changes in the Growth of American-born and Native Japanese Children.” American Journal of Physical Anthropology 45 (1976): 553-68.

Morgan, Stephen. “Biological Indicators of Change in the Standard of Living in China during the Twentieth Century.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 7-34. Stuttgart: Franz Steiner, 1998.

Nicholas, Stephen, Robert Gregory, and Sue Kimberley. “The Welfare of Indigenous and White Australians, 1890-1955.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 35-54. Stuttgart: Franz Steiner, 1998.

Salvatore, Ricardo D. “Stature, Nutrition, and Regional Convergence: The Argentine Northwest in the First Half of the Twentieth Century.” Social Science History 28 (2004): 297-324.

Shay, Ted. “The Level of Living in Japan, 1885-1938: New Evidence.” In The Biological Standard of Living on Three Continents: Further Explorations in Anthropometric History, edited by John Komlos, 173-201. Boulder: Westview Press, 1995.

Articles with a North American Focus

Craig, Lee, and Thomas Weiss. “Nutritional Status and Agriculture Surpluses in antebellum United States.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 190-207. Stuttgart: Franz Steiner, 1998.

Komlos, John, and Peter Coclanis, “On the ‘Puzzling’ Antebellum Cycle of the Biological Standard of Living: The Case of Georgia,” Explorations in Economic History 34 (1997): 433-59.

Komlos, John. “Shrinking in a Growing Economy? The Mystery of Physical Stature during the Industrial Revolution,” Journal of Economic History 58 (1998): 779-802.

Komlos, John. “Toward an Anthropometric History of African-Americans: The Case of the Free Blacks in Antebellum Maryland.” In Strategic Factors in Nineteenth Century American Economic History: A Volume to Honor Robert W. Fogel, edited by Claudia Goldin and Hugh Rockoff, 267-329. Chicago: University of Chicago Press, 1992.

Murray, John. “Standards of the Present for People of the Past: Height, Weight, and Mortality among Men of Amherst College, 1834-1949.” Journal of Economic History 57 (1997): 585-606.

Murray, John. “Stature among Members of a Nineteenth Century American Shaker Commune.” Annals of Human Biology 20 (1993): 121-29.

Steckel, Richard. “A Peculiar Population: The Nutrition, Health, and Mortality of American Slaves from Childhood to Maturity.” Journal of Economic History 46 (1986): 721-41.

Steckel, Richard. “Health and Nutrition in the American Midwest: Evidence from the Height of Ohio National Guardsmen, 1850-1910.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 153-70. Chicago: University of Chicago Press, 1994.

Steckel, Richard. “The Health and Mortality of Women and Children.” Journal of Economic History 48 (1988): 333-45.

Steegmann, A. Theodore Jr. “18th Century British Military Stature: Growth Cessation, Selective Recruiting, Secular Trends, Nutrition at Birth, Cold and Occupation.” Human Biology 57 (1985): 77-95.

Articles with a European Focus

Baten, Jörg. “Economic Development and the Distribution of Nutritional Resources in Bavaria, 1797-1839.” Journal of Income Distribution 9 (2000): 89-106.

Baten, Jörg. “Climate, Grain Production, and Nutritional Status in Southern Germany during the XVIIIth Century.” Journal of European Economic History 30 (2001): 9-47.

Baten, Jörg, and John Murray. “Heights of Men and Women in Nineteenth-Century Bavaria: Economic, Nutritional, and Disease Influences.” Explorations in Economic History 37 (2000): 351-69.

Komlos, John. “Stature and Nutrition in the Habsburg Monarchy: The Standard of Living and Economic Development in the Eighteenth Century.” American Historical Review 90 (1985): 1149-61.

Komlos, John. “The Nutritional Status of French Students.” Journal of Interdisciplinary History 24 (1994): 493-508.

Komlos, John. “The Secular Trend in the Biological Standard of Living in the United Kingdom, 1730-1860.” Economic History Review 46 (1993): 115-44.

Nicholas, Stephen and Deborah Oxley. “The Living Standards of Women during the Industrial Revolution, 1795-1820.” Economic History Review 46 (1993): 723-49.

Nicholas, Stephen and Richard Steckel. “Heights and Living Standards of English Workers during the Early Years of Industrialization, 1770-1815.” Journal of Economic History 51 (1991): 937-57.

Oxley, Deborah. “Living Standards of Women in Prefamine Ireland.” Social Science History 28 (2004): 271-95.

Riggs, Paul. “The Standard of Living in Scotland, 1800-1850.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 60-75. Chicago: University of Chicago Press, 1994.

Sandberg, Lars G. “Soldier, Soldier, What Made You Grow So Tall? A Study of Height, Health and Nutrition in Sweden, 1720-1881.” Economy and History 23 (1980): 91-105.

Steckel, Richard H. “New Light on the ‘Dark Ages’: The Remarkably Tall Stature of Northern European Men during the Medieval Era.” Social Science History 28 (2004): 211-30.

Citation: Cuff, Timothy. “Historical Anthropometrics”. EH.Net Encyclopedia, edited by Robert Whaples. August 29, 2004. URL http://eh.net/encyclopedia/historical-anthropometrics/


The Economy of Ancient Greece

Darel Tai Engen, California State University – San Marcos

Introduction

The ancient Greek economy is somewhat of an enigma. Given the remoteness of ancient Greek civilization, the evidence is minimal and difficulties of interpretation abound. Ancient Greek civilization flourished from around 776 to 30 B.C. in what are called the Archaic (776-480), Classical (480-323), and Hellenistic (323-30) periods. During this time, Greek civilization was very different from our own in a variety of ways. In the Archaic and Classical periods, Greece was not unified but was composed of hundreds of small, independent poleis, or “city-states.” During the Hellenistic period, Greek civilization spread into the Near East and large kingdoms became the norm. Throughout these periods of ancient Greek civilization, the level of technology was nothing like it is today, and values developed that shaped the economy in unique ways. Thus, despite over a century of investigation, scholars are still debating the nature of the ancient Greek economy.

Moreover, the evidence is insufficient to employ all but the most basic quantitative methods of modern economic analysis and has forced scholars to employ other more qualitative methods of investigation. This brief article, therefore, will not include any of the statistics, tables, charts, or graphs that normally accompany economic studies. Rather, it will attempt to set out the types of evidence available for studying the ancient Greek economy, to describe briefly the long-running debate about the ancient Greek economy and the most widely accepted model of it, and then to present a basic view of the various sectors of the ancient Greek economy during the three major phases of its history. In addition, reference will be made to some recent scholarly trends in the field.

Sources of Evidence

Although the ancient Greeks achieved a high degree of sophistication in their political, philosophical, and literary analyses and have, therefore, left us with a significant amount of evidence concerning these matters, few Greeks attempted what we would call sophisticated economic analysis. Nonetheless, the ancient Greeks did engage in economic activity. They produced and exchanged goods both in local and long distance trade and had monetary systems to facilitate their exchanges. These activities have left behind material remains and are described in various contexts scattered throughout the extant writings of the ancient Greeks.

Most of our evidence for the ancient Greek economy concerns Athens in the Classical period and includes literary works, such as legal speeches, philosophical dialogues and treatises, historical narratives, and dramas and other poetic writings. Demosthenes, Lysias, Isokrates, and other Attic Orators have left us with numerous speeches, several of which concern economic matters, usually within the context of a lawsuit. But although these speeches illuminate some aspects of ancient Greek contracts, loans, trade, and other economic activity, one must analyze them with care on account of the biases and distortions inherent in legal speeches.

Philosophical works, especially those of Xenophon, Plato, and Aristotle, provide us with an insight into how the ancient Greeks perceived and analyzed economic matters. We learn about the place of economic activities within the Greek city-state, value system, and social and political institutions. One drawback of such evidence, however, is that the authors of these works were without exception members of the elite, and their political perspective and disdain for day-to-day economic activity should not necessarily be taken to represent the views of all or even the majority of ancient Greeks.

The ancient Greek historians concerned themselves primarily with politics and warfare. But within these contexts, one can find bits of information here and there about public finance and other economic matters. Thucydides, for example, does take care to describe the financial resources of Athens during the Peloponnesian War.

Poems and dramas also contain evidence concerning the ancient Greek economy. One can find random references to trade, manufacturing, the status of businessmen, and other economic matters. Of course, one must be careful to account for genre and audience in addition to the personal perspective of the author when using such sources for information about the economy. The plays of Aristophanes, for example, make many references to economic activities, but such references are often characterized by stereotyping and exaggeration for comedic purposes.

One of the most extensive collections of economic documents is the papyri from Greek-controlled Egypt during the Hellenistic period. The Ptolemaic dynasty that ruled Egypt developed an extensive bureaucracy to oversee numerous economic activities and, like all bureaucracies, it kept detailed records of its administration. Thus, the papyri include information about such things as taxes, government-controlled lands and labor, and the unique numismatic policies of the Ptolemies.

Epigraphic evidence comes in the form of stone inscriptions from public and private institutions. Boundary markers placed on land used as security for loans, called horoi, were often inscribed with the terms of the loans. States such as Athens inscribed honorary decrees for those who had done outstanding services for the state, including economic ones. States also inscribed accounts for public building projects and leases of public lands or mines. In addition, religious sanctuaries frequently inscribed accounts of monies and other assets, such as produce, land, and buildings, under their control. Although accounts tend to be free of human biases, honorary decrees are much more complex and the historian must be careful to consider the perspective of their issuing institutions when interpreting them.

Archaeological evidence is free of some of the representational complexities of the literary and epigraphic evidence. Pottery finds can tell us about pottery manufacture and trade. The vase types indicate the goods they contained, such as olive oil, wine, or grain. The distribution of finds of ancient pottery can, therefore, tell us the extent of trade in various goods. Finds of hoarded coins are also invaluable for the information they reveal about the volume of coins minted by a given state at a given time and the extent to which a state’s coinage was distributed geographically. But such archaeological evidence is not without its drawbacks as well. The same “muteness” that frees such evidence from human biases also makes it incapable of telling us who traded the goods, why they were traded, how they were traded, how much they cost, and how many middlemen they went through before reaching their find spots. Furthermore, it is always dangerous to attempt to extrapolate broad conclusions about the economy from a small number of finds, since we can never be sure if those finds are representative of larger phenomena or merely exceptional cases that archaeologists happened to stumble upon.

Some of the most spectacular and informative finds in recent years have been made under the waters of the Mediterranean, Aegean, and Black Seas by what is known as marine (or nautical) archaeology. Ancient shipwrecks containing goods for trade have opened new doors to the study of ancient Greek merchant vessels, manufacturing, and trade. Although the field is relatively new, it has already yielded much new data and promises great things for the future.

The Debate about the Ancient Greek Economy

As stated above, the ancient Greek economy has been the subject of a long-running debate that continues to this day. Briefly stated, the debate began in the late nineteenth century and revolved around the issue of whether the economy was “primitive” or “modern.” This was a poor choice of terms with which to conceptualize the ancient Greek economy and is to a great extent responsible for the intractability of the debate. These terms are clearly normative in character so that essentially the argument was about whether the ancient Greek economy was like our “modern” economy, which was never carefully defined, but apparently assumed to be a free enterprise, capitalistic one with interconnected price-making markets. In addition, confusion arose over whether the ancient Greek economy was like a modern economy in quantity (scale) or quality (its organizing principles). Lastly, such terms clearly attempt to characterize the ancient Greek economy as a whole and do not distinguish differences among regions or city-states of Greece, time periods, or sectors of the economy (agriculture, banking, long distance trade, etc.).

Seeing extensive trade and use of money in Greece from the fifth century B.C. onward, the modernists extrapolated the existence of a market economy in Classical Greece. On the other hand, seeing traditional Greek social and political values that disdained the productive, impersonal, and industrial nature of modern market economies, the primitivists downplayed the existence of extensive trade and the use of money in the economy. Neither primitivists nor modernists could conceive of the existence of extensive trade and the use of money unless the ancient Greek economy was organized according to market principles. Moreover, neither side in the debate could call activities “economic” unless such activities were productive and aimed at growth.

Historical methods were also a factor in the debate. Traditional ancient historians who relied on philology and archaeology tended to side with the modernist interpretation, whereas historians who employed new methods drawn from sociology and anthropology tended to hold to the primitivist view. For example, Michael Rostovtzeff assembled a wealth of archaeological data to argue that the scale of the ancient Greek economy in the Hellenistic period was so great that it could not be considered primitive. On the other hand, Johannes Hasebroek used sociological methods developed by Max Weber to argue that the ancient Greek citizen was a homo politicus (“political man”) and not a homo economicus (“economic man”) – he disdained economic activities and subordinated them to traditional political interests.

A turning point in the debate came with the work of Karl Polanyi who drew on anthropological methods to argue that economies need not be organized according to the independent and self-regulating institutions of a market system. He distinguished between “substantivist” and “formalist” economic analysis. The latter, which is typical of economic analysis today, is appropriate only for market economies. Market economies operate independently of non-economic institutions and their most characteristic feature is that prices are set according to an aggregate derived from the impersonal forces of supply and demand among a group of interconnected markets. But material goods may be produced, exchanged, and valued by means other than market institutions. Such means may be tied to non-economic social and political institutions, including gift exchange or state-controlled redistribution and price-setting. Hence, other tools of analysis, namely “substantivist” economics, must be employed to understand them. Polanyi concluded that ancient Greece did not have a developed market system until the Hellenistic period. Before that time, the economy of ancient Greece did not comprise an independent sphere of institutions, but rather was “embedded” in other social and political institutions. Thus, Polanyi opened the door through which scholars could begin to examine the ancient Greek economy free from the normative parameters originally imposed on the debate. Unfortunately, the grip of the old parameters has been very strong and the debate has never completely freed itself from their influence.

The Finley Model and Its Aftermath

At present the most widely accepted model of the ancient Greek economy is that which was first set forth by Moses Finley in 1973. This view owes much to the Weber-Hasebroek-Polanyi line of analysis and holds that the ancient Greek economy was fundamentally different from the market economy that predominates in most of the world today. Not only was the ancient Greek economy much smaller in scale than economies today, it also differed greatly in quality.

Although the ancient Greek word oikonomia is the root of our modern English word “economy,” the two words are not synonymous. Whereas today “economy” refers to a distinct sphere of human interactions involving the production, distribution, and consumption of goods and services, oikonomia meant “household management,” a familial activity that was subsumed or “embedded” in traditional social and political institutions. True, the Greeks produced and consumed goods, engaged in various forms of exchanges including long-distance trade, and developed monetary systems employing coinage, but they did not see such activities as being part of a distinct institution which we call the “economy.”

According to Finley’s model, the subordination of economic activities to social and political ones was a byproduct of a Greek value system that emphasized the wellbeing of the community over that of the individual. Economic activity was necessary in this system only in so far as the individual male citizen had to provide sustenance for himself and his family. This could be accomplished simply by farming a small plot of land. Beyond that, the male citizen was expected to devote himself to the wellbeing of the community by participating in the public religious, political, and military life of the polis.

On the other hand, ancient Greek values held in low esteem economic activities that were not subordinated to the traditional activities of managing the family farm and obtaining goods for necessary consumption. So-called banausic work, which included manufacturing, business, and trade (which were not tied to the land and the family farm), and what we would call “capitalism” (investing money to make more money) were considered to be incompatible with active participation in the affairs of the polis and even as unnatural and morally corrupting. A life on the land, farming to produce only so much as was needed for consumption and leaving enough leisure time for active participation in the public life of the polis, was the social ideal. Production and exchange were to be undertaken only for personal need, to help out friends, or to benefit the community as a whole. Such activities were not to be undertaken simply to make a profit and certainly not to obtain capital for future investment and economic growth.

Given the limits put on economic activity by traditional values and the absence of a modern conception of the economy, agriculture comprised the bulk of production and exchange. Most production, therefore, was carried out in the countryside and cities were net consumers rather than producers, living off the surplus of the countryside. With limited technology and no understanding of economies of scale, cities were not hubs of industry, and manufacturing existed only on a small scale. Cities were mainly places for people to live as well as religious and governmental centers. Their contribution to the economy was only to demand the surplus produce of the countryside, manufacture limited amounts of goods, and provide market places and ports of trade for the exchange of goods.

Since the bulk of economic wealth was produced from the land and banausic occupations were not esteemed, the elite of ancient Greek society were landowners who consequently dominated politics, even in democratic poleis like Athens. Such men had little interest in manufacturing, business, and trade and, like their society as a whole, did not consider the economy as a distinct sphere separate from social and political concerns. Thus, their official policies with regard to the economy were much different from that of modern states.

Modern states undertake policies with specifically economic goals, desiring in particular to make their national economy more productive, to expand or grow, thereby increasing the per capita wealth of the state. Ancient Greek city-states, on the other hand, had an interest and involvement in what we would call economic activities (trade, minting coins, production, etc.) that, like oikonomia on the household level, were consumptive in nature and fulfilled traditional social and political needs, not strictly economic ones.

Finley’s model also holds that there was neither a “market mentality” nor interconnected markets that could operate according to impersonal price-setting market mechanisms. Individual city-states certainly had “market places” (agorai), but such markets existed largely in isolation with minimal connections among them. Thus, prices were set according to local conditions and personal relationships rather than in accordance with the impersonal forces of supply and demand. This was so in part because of the Greek socio-political emphasis on self-sufficiency (autarkeia), but also because the physical environment and industry of the eastern Mediterranean tended to produce similar goods, so that there were few items that a city-state needed which could not be obtained from within its own boundaries.

Moreover, according to Finley’s model, the interests of Greek city-states in trade were likewise limited by traditional political concerns to the consumptive goals of ensuring the import of adequate supplies of “material wants,” such as food at reasonable prices for their citizens, and revenue which could be obtained from taxes on trade. The former goal could be fulfilled by making laws that required or provided incentives for traders to bring grain into the city. Laws such as these were merely extensions of traditional political policies, like conquest and plunder, but in which a less violent form of acquisition would now be undertaken. But though the means had changed, the ends were still political; there was no interest in the economy per se. The same holds true for the traditional need of city-states for revenue to pay for public projects, such as temple building and road maintenance. Here again, old and often violent methods of obtaining revenue were augmented through such things as taxes on trade.

Finley’s model has had a great impact on those who study the ancient Greek economy and is still widely accepted today. But although the general picture it presents of the ancient Greek economy has not been superseded, the model is not without flaws. It was inevitable that Finley would overstate his model, since it attempted to encompass the general character of the ancient Greek economy as a whole. Thus, the model makes little distinction between different regions or city-states of Greece, even though it is clear that the economies of Athens and Sparta, for example, were quite different in many respects. Finley also treats the various sectors of the economy (agriculture, labor, manufacturing, long-distance trade, banking, etc.) as if they were all governed equally in accordance with the general tenets of the model, despite the fact that, for example, there were significant differences between the values that applied in the landed economy and those that prevailed in overseas trade. Lastly, Finley’s model is synchronic and hardly acknowledges changes in both the quantity and the quality of the economy over time.

Some close examinations of the various sectors of the ancient Greek economy in different places and at different times have supported Finley’s model in its general outlines. But they have been matched by just as many studies that have revealed exceptions to the model. Thus, one recent trend in the scholarship has been to try to revise the Finley model in light of focused studies of particular sectors of the economy at specific times and places. Another trend has been simply to ignore the Finley model and bypass the old debate altogether by examining the ancient Greek economy in ways that make the old terms of the debate irrelevant. Basically, given the quantity and the quality of the available evidence, our attempts to understand the ancient Greek economy are greatly affected by the perspective from which we approach it. We can choose to try to characterize the entire ancient Greek economy in general, to see the forest as it were, and debate whether it was more or less similar to our own. Or we can focus in on the trees and undertake narrow studies of particular sectors of the ancient Greek economy at specific times and places. Both approaches are useful and not necessarily mutually exclusive.

The Archaic Period

Finley’s model holds most true for the Archaic period (c. 776-480 B.C.) of ancient Greek history. Archaeological evidence and literary references from such works as the epic poems of Homer (the Iliad and the Odyssey), the Works and Days of Hesiod, and the works of the lyric poets attest to an economy that was generally small in scale and centered on household production and consumption. This is not surprising, since it was during the Archaic period that Greek civilization was re-emerging from a “Dark Age” of upheaval and forming its basic social, legal, political, and economic institutions. The fundamental political unit, the polis or independent city-state, appears at this time as do non-monarchal governments allowing for at least some degree of political participation among a broad swath of citizens.

For the most part, governments did not actively involve themselves in economic matters, except during the occasional political upheavals between “haves” and “have-nots” in which land might be confiscated from the few and redistributed to the many. Despite the fact that much of the Greek mainland is mountainous and the rivers generally small, there was enough fertile land and winter rainfall so that agriculture could account for the bulk of economic production, as it would in all civilizations before the modern industrial era. But unlike the large kingdoms of the Near East, Greece had a free-enterprise economy and most land was privately owned. Agriculture was carried out primarily on small family farms, though the Homeric epics indicate that there were also some larger estates controlled by the elite and worked with the help of free landless thetes whose labor would be needed especially at harvest time. Slaves existed, but not in such large numbers as to make the economy and society dependent on them.

As the populations of cities were fairly small, crafts and manufacturing were largely carried out within households for internal consumption. Both literary accounts and material remains, however, indicate that there was a certain amount of specialization. Artisans are referred to in the Homeric epics and the level of craftsmanship seen on items, such as metal work and painted pottery, was not likely to have been accomplished by non-specialists. Nevertheless, without large-scale manufacturing, safety from brigands on land and pirates at sea, and a monetary system employing coinage (until late in the sixth century), markets were necessarily small, devoted to local products, and certainly not interconnected into a price-setting market economy. Trade was limited mostly to local exchanges between the countryside and the urban center of city-states. Farmers might load up their surplus goods on a small ship to sell them in a neighboring city, as Hesiod attests, but long-distance sea-borne trade was devoted almost exclusively to luxury items, such as precious metals, jewelry, and finely-painted pottery. Moreover, gift exchanges in accordance with social traditions were as prominent if not more so than impersonal exchanges for profit. In general, those who engaged in banausic occupations on more than a part-time basis and sought profit from such activities were looked down on and did not hold positions of prestige in society or government.

Nevertheless, it cannot be denied that the scale of the Greek economy grew during the Archaic period and if not per capita, at least in proportion to the clear growth in population. Population increases and the desire for more land were the primary impetuses for a colonizing movement that established Greek poleis throughout the Mediterranean and Black Sea regions during this period. These new city-states put more land under cultivation, thereby providing the agriculture necessary to sustain the growing population. Moreover, archaeological evidence for the dispersal of Greek products (particularly pottery) over a wide area indicates that trade and manufacturing had also expanded greatly since the Dark Age. It is probably no coincidence that the end of the Archaic period witnessed for the first time a divergence between the designs of merchant vessels and warships, a distinction that would become permanent. Also, after the invention of coinage in Asia Minor in the early sixth century B.C., even though various other forms of money and barter continued to be employed throughout the course of ancient Greek history, the Greeks were quick to adopt coinage and it became the predominant means of exchange from the end of the sixth century onward. The aforementioned economic trends are traced in an important recent book by David Tandy, who argues that they had a fundamental impact on the development of the social and political organization and values of the Archaic polis.

Key Economic Sectors of the Classical Period

During the Classical period of ancient Greek history (480-323 B.C.), continued increases in population as well as political developments influenced various sectors of the economy to the extent that one can see a growing number of deviations from the Finley model. Evidence concerning the economy also becomes more abundant and informative. Thus, a more detailed description of the economy during the Classical period is possible and more attention to the distinctions between its various sectors is also desirable.

In light of the cautionary statements made earlier in this article about overgeneralization, it is important to note that great variation existed among the regions and city-states of the ancient Greek world, especially during the Classical period. Athens and Sparta are famous examples of two almost polar opposites in their social and political organizations and this is no less true with regard to their economic institutions. Given, however, the fact that Athens is the best documented and most studied place in ancient Greek history, the various sectors of the ancient Greek economy during the Classical period will be discussed primarily as they existed in Athens, despite the fact that it was in many ways exceptional. Significant variations from the Athenian example will be noted, however, as will some recent trends in scholarship.

Public and Private Economic Sectors

It is first necessary to distinguish between the public and private sectors of the economy. Throughout most of ancient Greek history before the Hellenistic period, a free enterprise economy with private property and limited government intervention predominated. This places Greece in sharp contrast to most other ancient civilizations, in which governmental or religious institutions tended to dominate the economy. The main economic concerns of the governments of the Greek city-states were to maintain harmony within the private economy (make laws, adjudicate disputes, and protect private property rights), make sure that food was available to their citizenries at reasonable prices, and obtain revenue from economic activities (through taxes) to pay for government expenses.

Athens had numerous laws to protect private property rights and had officials and law courts to enforce them. In addition, there were officials who oversaw such things as weights, measures, and coinage to make sure that people were not cheated in the market place. Athens also had laws to ensure an adequate supply of grain for its citizens, such as a law against the export of grain and laws to encourage traders to import grain. Athens even had agreements with other states in which the latter gave favorable treatment to traders bound for Athens with grain.

On the other hand, Athens did not tax its citizens directly except in cases of state emergencies (eisphorai) and in requiring the wealthiest citizens to perform public services (liturgies). Most taxes were indirect: market taxes, port taxes, import-export taxes, and taxes on foreigners who took up long-term residence in Athens. Taxes were collected by companies of private tax farmers who bid on contracts issued by the state. In addition to taxes, Athens obtained revenue from leases of publicly owned lands and mines. Revenue was necessary for various government expenditures, including administrative costs, public festivals, and maintenance of widows and orphans of soldiers who died in battle as well as building ships’ hulls for the navy, walls for the city, and temples for the gods. Such state expenditures could have a significant impact on the economy, as is clear from the large quantities of money and labor that appear in the inscribed accounts of the building projects on the Athenian acropolis.

Although the Finley model is right in many respects with regard to the limited interest and involvement of the state in the economy, one recent trend has been to show through carefully focused examinations of specific phenomena that Finley pressed his case too far. For example, Finley drew too sharp a distinction between the interests of non-citizen (and, therefore, non-landowning) traders and the landed citizens who dominated Athenian government. It is true that the latter might not have exactly the same economic interests as the former, but the interests of the two were nevertheless complementary, for how could Athens get the grain imports it required without making it in the interest of traders to bring it to Athens? Moreover, it has been argued that the policies of Athens with regard to its coinage betray a state interest in the export of at least one locally produced commodity (namely silver), something completely discounted by the Finley model.

But again, Finley was probably right to argue that during the Archaic and Classical periods the vast majority of economic activity was left untouched by government and carried out by private individuals. On the other hand, by the Classical period a self-sufficient household economy was an ideal that was becoming increasingly difficult to maintain as the various sectors of economic activity became more specialized, more impersonal, and more profit oriented as well.

Land

As in the Archaic period, the most important economic sector was still tied to the land and the majority of agriculture continued to be carried out on the subsistence level by numerous small family farms, even though the distribution of land among the population was far from equal. Primary crops were grains, mostly barley but also some wheat, which were usually sown on a two-year fallowing cycle. Olives and grapes were also widely produced throughout Greece on land unsuitable for grains. Animal husbandry focused on sheep and goats, which could be moved from their winter lowland pasturage to the moister and cooler mountainous regions during the hot summer months. Cattle, horses, and donkeys, though less numerous, were also significant. While usually sufficient to support the population of ancient Greece, unpredictable rainfall made agriculture precarious and there is much evidence for periodic crop failures, shortages, and famines. Consequently, competition for fertile land was a hallmark of Greek history and the cause of much social and political strife within and between city-states.

One recent trend in the study of ancient Greek agriculture is the use of ethnoarchaeology, which attempts to understand the ancient economy through comparative data from better-documented modern peasant economies. In general, studies employing this method have supported the prevailing view of subsistence agriculture in ancient Greece. But caution is necessary, since there have been changes in the physical environment of and settlement patterns in Greece over time that can skew comparative analyses. Ethnoarchaeology has also been used to show that Greek farmers in both ancient and modern times have had to be flexible in their responses to wide variations in local topographical and climatic conditions and, thus, varied their crops and fallowing regimes to a significant degree. Rational exploitation of fluctuations in production brought on by such variations might have been the means by which some farmers were able to obtain enough wealth to rise above their peers and become members of a landed elite and this might point to a productive mentality at odds with the Finley model.

Metals were another important landed resource of Greece and so mining occupied an important place in the economy. Ancient Greeks typically used bronze and iron tools and weapons. There is little evidence that copper, the principal metal in bronze, was ever mined in abundance on mainland Greece. It had to be imported from the island of Cyprus, where it existed in large quantities, and other more distant regions. Tin, the other metal in bronze, was also rare in Greece and had to be imported from as far away as Britain. Iron is relatively plentiful throughout Greece and there is archaeological evidence of iron mining; however, literary references to it are few and so we know little about the process.

Precious metals were used in jewelry, art, and coinage. Athens had an abundance of silver and we know much about its mining industry from surviving inscriptions of government mine leases to private entrepreneurs. The mines were extremely productive, providing Athens with an income of 200 talents per year for twelve years from 338 B.C. onward. One talent was the equivalent of around nine years’ worth of wages for a single skilled laborer working five days a week, 52 weeks a year, according to the wage rates we know from 377 B.C. Though productive in silver, ancient Greece was not as rich in gold, which was found primarily in Thrace and on the islands of Thasos and Siphnos.
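The wage arithmetic above can be checked directly. A minimal sketch: the ratio of 6,000 drachmai to the talent is the standard Attic reckoning and is assumed here rather than stated in the text; the daily wage comes from the 377 B.C. figure discussed below.

```python
# Sanity check of the talent-to-wages figure above. The 6,000 drachmai
# per talent ratio is the standard Attic reckoning (an assumption here).
DRACHMAI_PER_TALENT = 6000

daily_wage = 2.5               # drachmai per day, skilled laborer, 377 B.C.
work_days = 5 * 52             # five days a week, 52 weeks a year
annual_wage = daily_wage * work_days           # 650 drachmai per year

years_per_talent = DRACHMAI_PER_TALENT / annual_wage
print(round(years_per_talent, 1))  # roughly 9 years, as the text states
```

On these assumptions a talent works out to a little over nine years of skilled wages, matching the text's "around nine years."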

Recent scholarship continues to focus on the silver mines of Athens, drawing not only on the inscribed mine leases, but also on extensive archaeological investigation of the mines themselves. They tend to indicate that, contrary to the Finley model, mining in Athens was specialized enough and extensive enough to constitute an “industry” in the modern sense of the word and one geared toward growth. In a study of mine-leasing records Kirsty Shipton has shown that the elite of Athens preferred mine leases, with their potential for greater profits, to land leases. Thus, the traditional preference of the elite for the consumptive acquisition of land and disdain for productive investments for profit postulated by the Finley model might be a characteristic feature of the ancient Greek world as a whole, but it does not entirely hold for Athens in the Classical period.

Stone for building and sculpture was another valuable natural resource of Greece. Limestone was available in abundance and fine marble could be found in Athens on the slopes of Mount Pentelikos and on the island of Paros. The former was used in building the Parthenon and the other structures of the Athenian acropolis while the latter was often used for the most famous ancient Greek free-standing and relief sculptures.

Labor

It is notoriously difficult to estimate the population of Athens or any other Greek city-state in ancient times. Generally accepted figures for Athens at the height of its power and prosperity in 431 B.C., though, are approximately 305,000 people, of which perhaps 160,000 were citizens (40,000 male, 40,000 female, 80,000 children), 25,000 were free resident foreigners (metics), and 120,000 were slaves. Athens was the largest polis and the populations of most city-states were probably much smaller. Citizens, metics, and slaves all performed labor in the economy. In addition, many city-states included forms of dependent labor somewhere in between slave and free.
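The population estimate above can be checked for internal consistency; all figures in this sketch are taken from the text.

```python
# Conventional population breakdown for Athens in 431 B.C., from the text.
male_citizens, female_citizens, children = 40000, 40000, 80000
citizens = male_citizens + female_citizens + children   # 160,000 citizens

metics = 25000    # free resident foreigners
slaves = 120000

total = citizens + metics + slaves
print(citizens, total)  # 160000 305000
```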

As stated above, much of the agriculture of ancient Greece was carried out by small farmers who were exclusively free citizens, since non-citizens were barred from owning land. But although being a farmer was the social ideal, good land was scarce in Greece and it is estimated that in Athens about a quarter of the male citizens did not own land and had to take up other occupations for their livelihoods. Such occupations existed in the manufacturing, service, retail, and trade sectors. These “business” occupations were not only socially disesteemed, but they also tended to be small scale. Wage earning was very much looked down upon, since working for another person was thought of as an impingement on freedom and akin to slavery. Thus, free men doing the same work side by side with metics and slaves on the Acropolis building projects earned the same wages. Yet wages appear to have been adequate to make a living. In Athens the typical wage for a skilled laborer was one drachma per day at the end of the fifth century and two and a half drachmai in 377 B.C. In the fifth century a Greek soldier on campaign received a ration of 1 choinix of wheat per day. The price of wheat in Athens at the end of the fifth century was 3 drachmai per medimnos. There are 48 choinices in a medimnos. Thus, one drachma could buy enough food for 16 days for one person, or four days for a family of four.
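The purchasing-power arithmetic in the paragraph above can be made explicit with a small sketch using only the figures given in the text:

```python
# Sketch of the purchasing-power calculation: how many days of the
# standard wheat ration one drachma bought in late fifth-century Athens.

WHEAT_PRICE = 3.0                 # drachmai per medimnos
CHOINICES_PER_MEDIMNOS = 48
RATION = 1                        # choinices of wheat per person per day

choinices_per_drachma = CHOINICES_PER_MEDIMNOS / WHEAT_PRICE  # 16 choinices
days_one_person = choinices_per_drachma / RATION              # 16 days
days_family_of_four = days_one_person / 4                     # 4 days

print(f"One drachma bought {choinices_per_drachma:.0f} choinices of wheat:")
print(f"  {days_one_person:.0f} days of rations for one person,")
print(f"  {days_family_of_four:.0f} days for a family of four")
```

A drachma bought a third of a medimnos, i.e. 16 choinices, hence 16 person-days of rations.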

The limited number of free citizens who were willing or had to become businessmen or wage earners was made up for in part by metics, foreign-born, free non-citizens who took up residence in a city-state. It is estimated that Athens had about 25,000 metics at its height, and since they were barred from owning land, they engaged in the banausic occupations that tended to be looked down upon by the free citizenry. The economic opportunities afforded by such occupations in Athens and other port cities, where metics were particularly numerous, must have been significant. They attracted metics despite the fact that metics had to pay a special poll tax and serve in the military even though they could not own land or participate in politics and had to have a citizen represent them in legal matters. This is confirmed by the numerous metics in Athens who became wealthy and whose names we know, such as the bankers Pasion and Phormion and the shield-maker Cephalus, the father of the orator Lysias.

Foreign-born, free non-citizen transients known as xenoi also played an important role in the ancient Greek economy, since it is apparent that many, though certainly not all, of those who carried out long-distance trade were such men. Like metics, they too were subject to special taxes, but enjoyed few rights.

Slaves comprised an undeniably large part of the labor force of ancient Greece. In fact, it is fair to say, as Finley did, that ancient Greece was a “slave dependent society.” There were so many slaves; they were so essential to the economy; and they became so thoroughly embedded into the everyday life and values of the society that without slavery, ancient Greek civilization could not have existed in the manner it did. In Classical Athens it has been estimated that there were around 120,000 slaves. Thus, slaves comprised over a third of the total population and outnumbered adult male citizens by three to one.

The slaves of Athens were chattel, that is, the private property of their owners, and had few, if any, rights. The demand for them was high as they performed almost every kind of work imaginable, from agricultural and mining labor to shop assistance and domestic service, even serving as the police force and secretaries for the government in Athens. About the only thing slaves did not normally do was military service, except in emergencies, when they did that too.

Slaves were supplied by a variety of sources. Many were war captives. Some were enslaved for failure to pay debts, though this was outlawed in Athens in the early sixth century B.C. Some were foundlings, abandoned children rescued and reared in return for their labor as slaves. Of course, the children of slaves would also be slaves. In addition, there was an extensive and regular slave trade that trafficked in people who had become slaves by all the means mentioned previously.

In part because of the diverse means by which slaves were supplied, there was no particular race that was singled out for enslavement. Anyone could become a slave if unfortunate enough, including Greeks. It does appear, however, that a large percentage of slaves in Greece originated in the Black Sea and Danubian regions. In most cases they were probably captives from internecine tribal wars and sold to slave traders who shipped them to various parts of the Greek world.

The treatment of chattel slaves varied, depending on the whims of individual slave owners and the types of jobs done by the slaves. Slaves who worked in the silver mines of Athens, for example, worked in dangerous conditions in large numbers (as many as 10,000 at a time) and had virtually no contact with their owners that could result in human bonds of affection (they were usually leased out). On the other hand, slaves who worked in households assisting the matron of the family in her household tasks were probably treated much better as a rule. Their labor was less strenuous and since they worked in close proximity with their owners’ families, at least some human bonds of affection were likely to form between them and their owners. Some slaves even lived on their own and ran their owners’ businesses largely unsupervised.

One aspect of ancient Greek slavery that is often cited as evidence for it being more “humane” than other slavery regimes is manumission. There is enough evidence for slaves being freed to make us believe that manumission was not uncommon and many slaves could probably hope for freedom, even if most of them never actually obtained it. But manumission was quite self-serving for slave owners, since the hope of someday being given their freedom made slaves much less likely to risk rebellion. As it turns out, there were only two noteworthy large-scale rebellions of chattel slaves in the history of ancient Greece. Moreover, inscriptions from the religious sanctuary of Delphi from the Hellenistic period show that slaves almost always had to compensate their owners for their freedom, either in the form of cash or some other valuable commodity, like their own children, who would also be slaves of the master and eventually replace their aging parents with young labor. So it is a dubious matter to say that the manumission of slaves is a testament to the humanity of ancient Greek slavery. Individual slaves might benefit, but the practice allowed the institution of slavery to flourish throughout Greek history.

When slaves were freed, they did not become citizens, but rather metics. Yet even though they still could not possess the full rights and privileges of citizens, they could prosper economically, just as other metics could. In Athens the prominent and wealthy metic banker, Pasion, for example, was originally a slave who assisted his masters Antisthenes and Archestratus. By the terms of his will, Pasion in turn manumitted his own slave assistant, Phormion, and not only left him his bank, but also stipulated that Phormion marry his widow and manage the inheritance of his son, Apollodorus.

In addition to chattel slavery, there were other forms of dependent labor in the ancient Greek world. One famous example is helotry, known principally from the city-state of Sparta. The helots of Sparta were agricultural serfs, indigenous peoples conquered by the Spartans and forced to work their former lands for their Spartan overlords. They were not the private property of the individual Spartans, who were allotted the former lands of the helots, and could not be bought or sold. But their mobility was completely restricted; they had very few rights; they had to turn over a large percentage of their produce to their Spartan overlords; and they were routinely terrorized as a matter of Spartan state policy. The one drawback for the Spartans of using helot labor, though, was that the helots, living still on their former homeland and having a sense of ethnic unity, were prone to revolt and did so on several occasions at great cost both to themselves and to the Spartans.

With the exception of Sparta and a few other city-states, women in ancient Greece, free citizens or otherwise, could not control land. They could own it in name only and were not allowed to dispose of it as they saw fit, but were legally obliged to yield control of it to a male representative. Since land was the chief source of wealth in the ancient Greek economy, the inability to control it severely constrained the economic role of women. The ideal was for women to get married, have children, raise them, and carry out the indoor tasks of the household, such as cooking and textile production.

Of course, not all women could live up to such an ideal at all times. Women undoubtedly helped outdoors on the farm during harvest time. Those of poorer families might by necessity have to sell in the market place what little surplus produce their households could generate or perform service-oriented jobs for others for wages. Female metics and slaves did similar work and also comprised the majority of the prostitutes of Athens, where prostitution was a legal profession. Prostitutes, though, ranged from lowly brothel workers to high-class call girls, the latter of which, such as Aspasia, sometimes obtained prominence in Athenian society.

Despite their disdain for certain types of work and their dependence on slave labor, most Greeks had to work hard to make a living. Yet they did not develop a “work ethic” and did not consider work to be ennobling, but simply necessary. Hence, if one could afford a slave to do one’s work, then one bought a slave. The availability of cheap slaves was a major factor in Greek attitudes toward labor and may also explain why there were no labor unions in Greece. For how could wage-earners pressure their employers for better conditions or wages when the latter could always replace them with slaves if necessary?

Manufacturing

Slavery also affected manufacturing in ancient Greece. It is often said that technology and industrial organization stagnated in ancient Greece because the availability of cheap slave labor obviated any pressing need to improve them. If one wanted to produce more, one merely bought a few more slaves. Thus, most manufactured products were literally hand-made with simple tools. There were no assembly lines and no big factories. The largest manufacturing establishment we know of was a shield factory owned by the metic, Cephalus, the father of the orator, Lysias, which employed 120 slaves. Most manufacturing was carried out in small shops or within households. Hence, in comparison with agriculture, manufacturing comprised a small part of the ancient Greek economy.

Nevertheless, documentary and archaeological evidence attests to a wide variety of manufactured items and some in large quantities. Among the most extensively manufactured products was clay pottery, the remains of which archaeologists have found scattered throughout the Mediterranean world. The wheel-made pots took many shapes appropriate for their contents and use, which ranged from hydria for water to amphorae for olive oil and wine to pithoi for grain to aryballoi for perfume to kylikes for drinking cups. Finely painted vases were also manufactured for decorative and ritual purposes. The finest, most numerous, and widely dispersed of these were made in Corinth, Aegina, Athens, and Rhodes.

Literary accounts as well as scenes from painted vases make it clear that the ancient Greeks left textile production largely to women. The principal material they worked with was wool, but linen from flax was also common. Textiles were used in turn in the manufacture of clothing. Again, women were largely responsible for this and it was done primarily within the household. Textiles were often dyed, the most desirable dye being a reddish purple color derived from aquatic murex snails. These had to be harvested, mashed into a jelly, and then boiled to extract the dye.

Although the trees of Greece for the most part did not furnish good material for woodworking, and especially not for large-scale building, the Greeks did use wood extensively and, therefore, had to import good timber from places like Macedonia, the Black Sea region, and Asia Minor. Given the countless islands of Greece, it is not surprising that shipbuilding was an important sector of manufacturing. Vessels were needed for commercial as well as military uses. In Athens the state obtained the necessary timber for the ships (and oars) of its navy, but it contracted with carpenters who worked under the supervision of state officials to craft the timber into the warships that were so vital for Athenian power in the Classical period.

Buildings ranged from private houses to monumental stone temples. The former tended to be rather humble, made of unbaked mud brick laid on a stone foundation and covered by a thatched or tiled roof. On the other hand, the great temples of ancient Greece required much organization, many resources, and incredible technical skill. As is evidenced by the extant accounts for the construction of the buildings of the Athenian acropolis, the work was normally contracted out in small units to private individuals who either worked alone or in charge of others to do anything from quarrying marble to transporting wooden beams to sculpting facades. The degree of specialization varied. In some cases we see contractors carrying out a variety of tasks, whereas in others we see them specializing in only one.

Metal crafts were highly specialized. The Greeks smelted iron, but could work it only in wrought form. They were unable to achieve furnace temperatures high enough to make pig iron and did not have the technical know-how to add carbon to the smelting process with enough precision to make steel with any consistency. Blacksmiths crafted body armor, shields, spears, swords, farm implements, and household utensils. Bronze casting reached the level of fine art in Classical Greece. Sculptors used the lost-wax method, in which they first made a clay model of a statue, then covered the model with a layer of wax, which they then covered again with another layer of clay. Openings were left in the outer clay covering; when the mold was heated, the wax melted and ran out through them, and molten bronze was then poured into the cavity the wax had occupied. After the bronze cooled, the outer clay covering was broken off, leaving the cast bronze.

It is clear that in the Classical period in Athens there was much specialization in manufacturing and that the quantity of goods was far greater than that which could have been produced in a purely “household economy.” At the same time, however, the scale and organization of manufacturing was a far cry from those of industrialized civilizations of recent centuries.

Markets and Prices

According to the Finley model, there was no network of interconnected markets to form a price-setting market economy in the ancient Greek world. Although this is true for the most part, like other aspects of the Finley model, the case is overstated. There do, for example, appear to be connections between markets for some commodities, such as grain and probably precious metals as well. In the case of grain, it can be shown that supply and demand over long-distances did have an impact on prices and traders sought to take advantage of the lag-time between price adjustments in order to make a profit. Obviously, though, this is nothing like the modern world in which the price of crude oil changes instantly worldwide in reaction to a change in supply from one of the major producers. For the most part in ancient Greece, prices were set in accordance with local conditions, personal relationships, and haggling.

Government price-fixing was limited. Although there is evidence that Athens, for example, fixed the retail price of bread in proportion to the wholesale price of grain, there is no evidence that it fixed the price of the latter. Even in times of severe grain shortages, Athens was content to allow traders bringing grain to Athens to charge the going rate. In such cases, the state alleviated the crises for its citizens by paying the going rate for the grain and then reselling it to its citizenry at a lower price.

Despite the general absence of interconnected markets, however, there were market places. Each city-state had at least one market place (agora) in the heart of the city and a port market (emporion) as well, if it had a good harbor. The agora was a place of much activity, serving not only as a center of economic exchange, but also as a political, religious, and social center. In the agora one could find law courts, offices for public officials, and coin mints as well as shrines and temples. In fact, agorai were considered sacred places to the degree that they were marked off with boundary stones that no one bearing the stain of religious pollution was permitted to cross. Within the agora economic activities were segregated by types of goods, services, and labor so that there were specific places where one could regularly find the fishmongers, blacksmiths, money changers, and so on.

Ancient Greek city-states regulated the economic activities that took place in their markets to a certain degree. Public officials oversaw weights, measures, scales, and coinage to limit and resolve disputes in exchanges as well as to ensure state interests. For example, Athens employed a publicly owned slave to check coins and guard against counterfeiters. In this way, Athens protected the integrity of its own coinage as well as the interests of buyers and sellers. The state ensured the affordability of key goods, such as bread, by fixing its retail prices relative to the wholesale price of grain. Various activities in the market place were also taxed by the state. Port and transit taxes affected exchanges in emporia like the Piraeus of Athens and xenoi had to pay a special tax for engaging in transactions in the agora.

Trade

Local trade between countryside and urban center and on the retail level within cities continued largely as it had in the Archaic period. But rather than producers transporting and selling their surplus goods directly in city markets, specialized retailers (kapeloi) who profited as middlemen between producers and consumers became more the norm. Local trade goods could probably be transported over short distances on land. But long-distance trade over land was difficult and time consuming, given the mountainous topography of Greece and the fact that the fragmented city-states of Greece never built an extensive system of paved roads that tied them together in the manner of the Roman Empire. Most “roads” between cities were single track and suitable only for pack animals, though there were some on which wheeled carts could be pulled by oxen, donkeys, or mules.

Long-distance trade was primarily done by merchant ships over the waters of the Aegean, Mediterranean, and Black Seas. Evidence from the Attic Orators indicates that during the Classical period overseas trade developed into a specialized and important sector of the economy. Trade was carried out by private individuals and not organized by the state. A typical trading venture involved a non-citizen trader (emporos) who either owned his own ship or rented space on a ship owned by another (naukleros). In most cases described by the orators, the traders typically borrowed money from a citizen lender to finance the venture. There is some dispute among scholars whether such loans constituted productive borrowing on the part of the traders or were just a type of insurance, because the loans would only have to be repaid if the ship and cargo reached their contracted destinations. From the perspective of the lenders, the loans were certainly productive, since they charged interest at a rate much higher than that which applied to loans on the security of land, anywhere from 12 to 30%.

Marine archaeology has recently increased our knowledge of merchant vessels and their cargoes tenfold by the discovery of several ancient shipwrecks. The ships appear to have been generally small by modern standards. In 1968 the well-preserved wreck of a merchant ship from c. 300 B.C. was found off the coast of Kyrenia in Cyprus. Being only 35 feet long and 15 feet wide with a capacity of 30 tons, it is probably the kind of merchant vessel that made short hauls and kept within sight of the coastline. But other shipwrecks as well as evidence from the Attic Orators seem to indicate that the typical capacity of merchant vessels that traveled over long distances on the open sea was some 80 tons.

Many of the goods traded throughout ancient Greek history were luxury goods, manufactured items, such as jewelry and finely painted vases, as well as specialty agricultural products like fine wine and honey. Necessities were also traded, however, for without long-distance trade, many Greek cities would not have been able to obtain metals, timber, wine, and slaves. One of the most extensively traded necessity items was grain, which came to Athens typically from the Black Sea region, Thrace, and Egypt. According to the orator, Demosthenes, Athens imported some 400,000 medimnoi (approximately 4,800,000 liters) of grain per year in the late fourth century from the Crimean kingdom of the Bosporus alone.

Chiefly because of the need for certain imports, such as grain and timber, and for revenue drawn from taxes on trade, many cities did have an interest and involvement in overseas trade. Athens in particular made laws that prohibited the export of grain produced in Athens and required that loans on trading ventures be for cargoes of grain and that ships bringing grain into the Piraeus sell one-third of it on the spot and the remaining two-thirds in Athens. Athens also instituted special courts to expedite the adjudication of disputes involving traders, granted honors and privileges to anyone who performed extraordinary services relating to trade for the city, and made agreements with other states to obtain favorable conditions for those bringing grain to Athens.

In all the aforementioned examples Athens’ chief interest was to supply itself with imported grain so that its citizenry could obtain food at reasonable prices. Athens was not particularly concerned with helping traders and enhancing their profits per se, with obtaining a trade surplus, or with protecting home-produced goods against imported foreign ones. To this extent, then, the Finley model holds true, even if it is clear that the Athenian state recognized that its interests were complementary with those of foreign traders and, thus, had to help them in order to help itself.

Moreover, it does appear that Athens had some concern about its home produced products as well, at least in the case of silver. Xenophon, an Athenian writer from the fourth century, noted that Athens could always be assured of traders bringing their goods into Athens, because traders knew they could always get a valuable trade commodity, namely silver in the form of Athenian coinage, in exchange. To ensure the demand for its silver, Athens took great care to maintain the reputation of its coinage for high quality and to associate that reputation with a familiar design that went unchanged for several centuries. Such a policy attests to a state interest in production and exports, at least in this sector of the economy.

Athens was also motivated to encourage trade to obtain revenue from taxes. Both transient and resident foreign traders had to pay poll taxes in Athens that citizens did not. Athens also collected various port, transit, and market taxes whose revenues grew with increased trade, including a two percent tax on all imports and exports.

Money and Banking

With few exceptions (Sparta being the most famous), the Greeks of the Classical period had a thoroughly monetized economy employing coinage whose value was based on precious metals, principally silver. The value of the coinage was commensurate with the value of the precious metal it contained, plus a small mark-up, since the value of the metal was guaranteed by its issuing state. The tie of the Greek monetary system to the supply of precious metals limited the ability of governments to influence their economies through the manipulation of their money supplies. However, we do know of cases when states debased their coinages for such purposes.

Ancient Greek coins are similar in appearance to modern ones. But like other manufactured products in ancient Greece, they were made by hand. A blank metal circular “flan” was placed on an obverse die that rested on an anvil and then was struck with a hammer bearing a reverse die. The nature of the process naturally produced coins in which the image was often poorly centered on the flan. Nevertheless, the issuing authority, usually a government, was clear as the designs or “types” of the coins expressed an image symbolic of the issuing authority and were often augmented by a “legend” of letters that spelled out an abbreviation of the issuing authority’s name.

Coinage was issued in a variety of denominations and weight standards by various city-states. The chief weight standards of the Classical period were the Attic, Aeginetan, Euboiic, and Corinthian. The basis of the Attic standard was the silver tetradrachm of 17.2 grams, which retained the design of the head of Athena on the obverse and her symbolic owl on the reverse throughout the Classical period. It was the most widely circulated coinage during this time and appears in large numbers of hoards found throughout the Greek world and beyond. This was due not only to the far reach of Athenian trade, but also to Athenian imperialism. Athens used its coinage to pay for its military operations abroad and even issued the “Standards Decree,” which for a few decades of the fifth century required the many cities of the Aegean Sea under its control to discontinue their local types and use only Athenian coinage. The local coinage had to be turned in, melted down, and re-struck as Athenian coinage for a fee. Unlike that of Athens, most city-states’ coinages circulated only locally. When such local issues were taken abroad, they were probably treated as bullion, as can be inferred from test-cuts often found on them.
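The Attic weight standard described above can be worked out from the 17.2-gram tetradrachm. A brief sketch; the denominational relationships assumed here (6 obols per drachma, 100 drachmai per mina, 60 minae per talent) are standard figures not given in the text:

```python
# Sketch of the Attic weight standard, working down from the 17.2-gram
# silver tetradrachm. Assumes the standard denominational relationships:
# 1 drachma = 6 obols, 1 mina = 100 drachmai, 1 talent = 60 minae.

TETRADRACHM_G = 17.2
drachma_g = TETRADRACHM_G / 4        # 4.3 g of silver
obol_g = drachma_g / 6               # about 0.72 g

mina_g = 100 * drachma_g             # 430 g
talent_kg = 60 * mina_g / 1000       # about 25.8 kg of silver

print(f"drachma: {drachma_g:.2f} g, obol: {obol_g:.2f} g")
print(f"mina: {mina_g:.0f} g, talent: {talent_kg:.1f} kg")
```

On these assumptions a talent, the unit in which Athens’ 200-talent annual mining income was reckoned, represents roughly 25.8 kilograms of silver.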

A recent debate among scholars concerns the degree to which coinage was an economic or a political phenomenon in the ancient Greek world. Finley’s model, of course, holds that coinage had strictly political functions. Finley believed that coinage was merely a tool designed to reinforce and project a city-state’s civic identity. States minted coins not to facilitate economic transactions among their citizens, but merely for state purposes so that, for example, it had a convenient medium through which to collect taxes or make state expenditures. Athens’ “Standards Decree” was not undertaken for economic gain, but for political purposes to facilitate tribute payments and to show Athens’ subjects who was boss.

But here again Finley goes too far. Although the type of a Greek coin certainly expressed political symbols and could, therefore, serve as a political tool, such symbolism was largely lost on people who used the coins in places like Egypt, the Levant, Asia Minor, and Mesopotamia, where hoards of Greek coins have been found in abundance. The fact that they could use the coins independently of their original political context (and for what else besides economic purposes then?) is a good reason to believe that the Greeks could do so as well. Moreover, as Henry Kim has recently argued, the minting of large quantities of small-denomination coinage from the outset in Greece shows that the state did have a concern for the wide use of coinage at the micro-level by common people in day-to-day economic exchanges, not just for large-scale public and political purposes.

Nevertheless, one of the most active areas of research on ancient Greek money and coinage today concerns its representational nature and place within sectors other than the economy, including religion, society, and politics. Both Leslie Kurke and Sitta von Reden have argued that the advent of a monetized economy employing coinage need not have undermined traditional values or led to a disembedding of the economy. Rather, the symbolic aspect of coinage could be manipulated to reinforce traditional social and religious practices that were non-economic in the modern sense. In her analysis of the poetry of Pindar, for example, Kurke argues that the poet re-embedded money within traditional social values, thereby allowing the landed aristocratic elite to embrace money and its potential for de-personalizing social interactions without discarding the old social ties and values that bolstered their privileged place in society. Although von Reden believes that the use of coinage arose within an embedded economic context and, therefore, did not have to be re-embedded, she has argued that coinage and other forms of money did not have an intrinsically economic use or meaning in ancient Greece, but rather multiple meanings that were determined by the context within which they were used, which could be social, religious, or political as well as economic.

Given that the ancient Greeks did have a monetized economy, it is not surprising that they also developed banking and credit institutions. It is generally agreed that at the very least, bankers, who were metics as a rule (note Pasion and Phormion above), performed various functions from money-changing to securing deposits in cash and other assets. The question whether bankers lent out money deposited by others at interest, however, is the subject of some debate. Paul Millett, a student of Finley, not surprisingly argues in his book, Lending and Borrowing in Ancient Athens, that bankers did not loan out other people’s money for interest and he formulates a model in which lending and borrowing were predominantly done for consumptive purposes and, therefore, thoroughly embedded in traditional social relations. In contrast, Edward Cohen’s book, Athenian Economy and Society: A Banking Perspective, employs a close philological analysis of the evidence in his assertion that productive lending and borrowing, divorced from concerns for personal relationships, were common in Classical Athens and that bankers did indeed lend out deposited money at interest. Although Millett may be right that much of the lending and borrowing in Athens was for consumptive purposes, particularly those secured by landed property, it is hard to deny that the evidence of productive lending and borrowing from banking practices, numerous maritime loans, and even temple loans in the Classical period constitute something more than just exceptions to the rule.

Economic Changes during the Hellenistic Period

In large part owing to the Near Eastern conquests of Alexander the Great, but also because of social and economic changes that had already been occurring during the Classical period, the economy of the Hellenistic period (323-30 B.C.) grew immensely in scale. The Finley model is probably right in general to hold that the essentially consumptive nature of the economy in the traditional Greek homelands changed little during this time. But it is clear that there were significant innovations in some places and sectors on account of the collision and fusion of Greek notions of the economy with those of the newly won lands of the Near East. Thus, we see greatly increased government control over the economy, as evidenced most strikingly in the surviving papyrus records of the Greek Ptolemaic dynasty that ruled Egypt.

A large percentage of the land and, therefore, agriculture, was controlled by the Greek royal dynasties that ran the Hellenistic kingdoms. Peasants whose status lay somewhere between slave and free not only worked the king’s lands, but were also often required to labor on other royal projects. The Ptolemies of Egypt dominated agriculture to such an extent that they instituted an official planting schedule for various crops and even loaned out the tools used by farmers on state-owned lands. Almost all produce from these estates was turned over to the government and redistributed for sale to the population. Some crown lands, however, were assigned to government officials or soldiers and though technically still the property of the state, they often came to be treated as de facto private property.

The Ptolemaic state also involved itself in various manufacturing processes, such as olive oil production. Not only were the olives cultivated on state-controlled lands by peasant labor, but the oil was extracted by contracted labor and sold at the retail level by licensed dealers at fixed prices. The state, however, probably had no intention of improving efficiency or of providing better-quality olive oil at lower prices to its citizens: the Ptolemies instituted a tax on imported olive oil of 50 percent that was essentially a protective tariff. The goal of the government seems to have been to protect the profits of its state-run business.

Yet for all its interference in the economy, the Ptolemaic government did not assemble a state merchant fleet and instead contracted with private traders to transport grain to and from public granaries. It also left it up to private traders to import the few goods that Egypt needed from abroad, including various metals, timber, horses, and elephants, all of which were essential for the Ptolemies’ standing mercenary army and fleet. But although the Ptolemies also exported wheat and papyrus, for the most part, the economy of Egypt was a closed one. Unlike the other Hellenistic kingdoms, Egypt minted coins on a lighter standard than the Attic one universalized by Alexander the Great. Moreover, in 285, the Ptolemies barred the use of foreign coins in Egypt and required them to be turned in to government officials, melted down, and re-minted as Egyptian coinage for a fee. Although Egypt controlled gold mines in Nubia, it did not produce silver and had chronic shortages of silver coins for daily transactions. Thus, many exchanges were performed in kind rather than in cash, even though value was always expressed in cash equivalents.

Despite its chronic shortages of silver coins and its closed coinage system, Egypt still had a coin-based economy largely because of Alexander the Great, who flooded the economies of the eastern Mediterranean with coins and monetized some places in the Near East for the first time. Along with coinage, Greek banking practices also made their way into these areas. Thus, the general scale of economic activities increased as large kingdoms of the Near East and the Greek mainland and islands became more interconnected. Although this was offset to some degree by political instability and warfare during the Hellenistic period, in general we do see economic activity on a larger scale and increased specialization as some places, such as Tyre and Sidon in Phoenicia, became renowned for particular products, in this case purple dye and glassware respectively. Moreover, thousands of amphorae whose handles were stamped with names of issuing magistrates have been found that, if nothing else, reveal a very high volume of pottery production and may also allow scholars some day to reconstruct in more detail other aspects of the economy, such as agricultural production, land tenure, and trade patterns.

The Hellenistic period is known for its technological innovation and some new technologies did have an impact on the economy. Archimedes’ screw-like pump was used to remove water from mines and to improve irrigation for agriculture. In addition, new varieties of wheat and the increased use of iron ploughs improved yield while better grape and olive presses facilitated wine and oil production. Unfortunately, some of the most impressive technological innovations of the Hellenistic period, such as Heron’s steam engine, were never applied in any significant way. Thus, most production continued to be low tech and labor intensive.

All in all, then, although the scale of the economy increased during the Hellenistic period, consumption still seems to have been its primary goal. Technology was not applied as much as it might have been to increase production. States were much more involved in economic affairs, both in controlling production and in collecting taxes on countless items and activities, but mostly just to extract as much revenue as possible. The revenue was spent in turn on royal benefactions (euergetism), but mostly on ostentatious display that threw money into non-productive sinkholes.

Conclusion

The foregoing survey shows that the Finley model provides a reasonable, if simplified, general picture of the ancient Greek economy. Overall, the ancient Greek economy was very different from our own. It was much smaller in scale and differed in quality as well, since it generally lacked the productive growth mentality and the interconnected markets that are so characteristic of most of the world economy today. With regard to the details, however, recent studies are showing that the Finley model does at least need to be revised. As more research is done, it may even be necessary to replace the Finley model altogether in favor of one that fits the evidence better. In the meantime, though, we can still use Finley’s model as a basic description while being careful to acknowledge the contradictory evidence provided by recent studies and continuing to investigate the various sectors of the ancient Greek economy at various times and places.

Select Annotated Bibliography

The bibliography on the ancient Greek economy is enormous and it would be counterproductive to list all works here. Therefore, I list only a selection of the essential primary and secondary works, preferring more recent works in English for the sake of students. Further and more specialized works may be found within the bibliographies of the works listed below.

Primary Sources

Literary Works

Many of the literary works listed below are available in the Loeb Classical Library and Penguin Classics series in English translations.

Aristotle, Politics (particularly 1.1258b37-1.1259a5)

In his study of the polis, Aristotle devotes this section to modes of acquisition and criticizes what we would call “capitalism.”

[Aristotle], Oikonomikos (Economics – “household management”)

Book 2 shows how states obtain revenues. The methods are largely coercive, not productive, such as cornering the market in grain during a famine, debasing coinage, etc.

Demosthenes and [Demosthenes], speeches

Especially useful are several speeches for lawsuits involving economic matters.

Hesiod, Works and Days

A poem containing advice and attitudes about farming in the early Archaic period, c. 700 B.C.

Homer, Iliad and Odyssey

Two great epic poems with much information about economic practices at the outset of the Archaic period, c. 800-750 B.C.

Isokrates, speeches (especially Trapezitikos and On the Peace)

On the Peace argues for economic activity rather than warfare as a means of obtaining revenues for the state. Trapezitikos concerns a lawsuit involving trade and banking.

Lysias, speeches (especially On the Grain Retailers)

Plato, Republic and Laws

These two dialogues concern the organization of the polis. Although the Republic represents the ideal city-state and the Laws presents a more realistic picture, both betray an elitist disdain for non-landed economic activities.

Xenophon, Oikonomikos (Economics – “household management”) and Poroi (Revenues)

Two extended essays on household management and the means by which the state may obtain more revenues, respectively. The latter is one of the most important documents concerning state interests in trade and mining.

[Xenophon] “The Old Oligarch” (or “Constitution of the Athenians”)

This is an anonymous mid-fifth-century B.C. political pamphlet that argues that the life-blood of Athenian democracy is the economic exploitation of the so-called “allies” of Athens.

Collections of Primary Sources: Documentary, Epigraphic, and Material

Burstein, S.M. The Hellenistic Age from the Battle of Ipsos to the Death of Kleopatra VII. Cambridge: Cambridge University Press, 1985.

A collection of documents, including inscriptions, translated into English.

Fornara, C.W. From Archaic Times to the End of the Peloponnesian War, second edition. Cambridge: Cambridge University Press, 1983.

A collection of documents, including inscriptions, translated into English.

Harding, P. From the End of the Peloponnesian War to the Battle of Ipsus. Cambridge: Cambridge University Press, 1985.

A collection of documents, including inscriptions, translated into English.

Meijer, F. and O. van Nijf. Trade, Transport, and Society in the Ancient World. New York and London: Routledge, 1992.

A sourcebook of documents translated into English.

Thompson, M., O. Mørkholm, and C.M. Kraay, editors. An Inventory of Greek Coin Hoards. New York: American Numismatic Society, 1973.

Essential listing of all discovered hoards of ancient Greek coins up to 1973.

Wiedemann, T. Greek and Roman Slavery. Baltimore: Johns Hopkins University Press, 1981.

Excellent collection of documents on Greek and Roman slavery translated into English.

Secondary Sources

General Works and Surveys

Austin, M.M. and P. Vidal-Naquet. Economic and Social History of Ancient Greece. Berkeley: University of California Press, 1977.

Provides both a survey of the subject and excerpts from the primary sources of evidence. It adheres to the Finley model in general.

Austin, M.M. “Greek Trade, Industry, and Labor.” In Civilization of the Ancient Mediterranean: Greece and Rome, volume 2, edited by M. Grant and R. Kitzinger, 723-51. New York: Scribner’s, 1988.

Often insightful overview of the ancient Greek economy primarily from the Finley perspective.

Cambridge Ancient History (CAH), second edition. Several volumes. Cambridge: Cambridge University Press.

The standard encyclopedia of ancient history with entries on various subjects, including the ancient Greek economy at different periods, by leading scholars.

Finley, M. I. The Ancient Economy, second edition. Berkeley: University of California Press, 1985. (Now available in an “Updated Edition” with a foreword by Ian Morris. Berkeley: University of California Press, 1999.)

The most influential book on the subject since its initial publication in 1973. It takes a synchronic approach to the Greek and Roman economies and argues that they cannot be analyzed or understood in terms appropriate for modern economic analysis. In general, the ancient Greek economy was “embedded” in “non-economic” social and political values and institutions. Heavily influenced by Weber, Hasebroek, and Polanyi.

Hasebroek, J. Trade and Politics in Ancient Greece. Translated by L.M. Fraser and D.C. MacGregor. Reprint. London, 1933. (Originally published as Staat und Handel im alten Griechenland [Tübingen, 1928].)

A classic that greatly influenced Finley.

Hopper, R.J. Trade and Industry in Classical Greece. London: Thames and Hudson, 1979.

Survey of various aspects of the ancient Greek economy in the Classical period.

Humphreys, S.C. “Economy and Society in Classical Athens.” Annali della Scuola Normale Superiore di Pisa 39 (1970):1-26.

An important survey that also argues for focused studies on individual sectors of the ancient Greek economy at particular times and places.

Lowry, S.T. “Recent Literature on Ancient Greek Economic Thought.” Journal of Economic Literature 17 (1979): 65-86.

Michell, H. The Economics of Ancient Greece, second edition. Cambridge: W. Heffer, 1963.

Slightly dated, but useful survey.

Morris, Ian. “The Ancient Economy Twenty Years after The Ancient Economy.” Classical Philology 89 (1994): 351-366.

Excellent survey of new approaches to the study of the ancient Greek and Roman economies since Finley, to whose model the author is generally sympathetic.

Oxford Classical Dictionary (OCD), third revised edition, edited by S. Hornblower and A. Spawforth. Oxford: Oxford University Press, 2003.

Includes brief entries by leading scholars on various aspects of the ancient Greek economy.

Pearson, H.W. “The Secular Debate on Economic Primitivism.” In Trade and Market in the Early Empires, edited by K. Polanyi, C.M. Arensberg, and H.W. Pearson, 3-11. Glencoe, IL: Free Press, 1957.

A concise statement of the influential ideas of Karl Polanyi about the ancient Greek economy.

Rostovtzeff, M. The Social and Economic History of the Hellenistic World. Oxford: Oxford University Press, 1941.

Monumental “modernist” approach to a wealth of archaeological evidence about the economy during the Hellenistic period.

Samuel, A.E. From Athens to Alexandria: Hellenism and Social Goals in Ptolemaic Egypt. Lovanii, 1983.

A good survey with an important discussion of ancient Greek attitudes toward economic growth.

Starr, C.G. The Economic and Social Growth of Early Greece, 800-500 B.C. Oxford: Oxford University Press, 1977.

Modernist survey.

Weber, M. Economy and Society. Translated by E. Fischoff et al. Edited by G. Roth and C. Wittich. Berkeley: University of California Press, 1968. (Originally published as Wirtschaft und Gesellschaft [Tübingen, 1956].)

A classic that greatly influenced Hasebroek and Finley.

Collections

Archibald, Z.H., J. Davies, and G. Oliver. Hellenistic Economies. London: Routledge, 2001.

Collection of articles that take the study of the economy in the Hellenistic period beyond Rostovtzeff.

Cartledge, P., E.E. Cohen, and L. Foxhall. Money, Labour, and Land: Approaches to the Economies of Ancient Greece. London: Routledge, 2002.

Finley, M.I. Economy and Society in Ancient Greece. Edited by B.D. Shaw and R.P. Saller. New York: Viking, 1982.

Garnsey, P. Non-Slave Labour in the Graeco-Roman World. Cambridge: Cambridge Philological Society, 1980.

Garnsey, P., K. Hopkins, and C.R. Whittaker. Trade in the Ancient Economy. Berkeley: University of California Press, 1983.

A collection of articles along Finley lines.

Mattingly, D.J. and J. Salmon. Economies beyond Agriculture in the Classical World. London: Routledge, 2001.

A collection of articles that focuses on the non-agrarian sectors of the ancient Greek and Roman economies with a mind to revising the Finley model.

Meadows, A. and K. Shipton. Money and Its Uses in the Ancient Greek World. Oxford: Oxford University Press, 2001.

A collection of articles on the use of money and coinage in ancient Greece.

Parkins, H. and C. Smith. Trade, Traders, and the Ancient City. London: Routledge, 1998.

Scheidel, W. and S. von Reden. The Ancient Economy. London: Routledge, 2002.

An excellent collection of some of the most important articles on the ancient Greek and Roman economy from the last 30 years with a helpful introduction, notes, and glossary. Especially useful is their “Guide to Further Reading,” pp. 272-278.

Specialized Works

Brock, R. “The Labour of Women in Classical Athens.” Classical Quarterly 44 (1994): 336-346.

Burke, E.M. “The Economy of Athens in the Classical Era: Some Adjustments to the Primitivist Model.” Transactions of the American Philological Association 122 (1992): 199-226.

A good argument that attempts to adjust the Finley model.

Carradice, I. and M. Price. Coinage in the Greek World. London: Seaby, 1988.

A brief, accessible survey.

Cohen, E. E. Athenian Economy and Society: A Banking Perspective. Princeton: Princeton University Press, 1992.

A close philological study of the evidence for banking practices in Classical Athens that argues for a disembedded economy with productive credit transactions.

Engen, D.T. Athenian Trade Policy, 415-307 B.C.: Honors and Privileges for Trade-Related Services. Ph.D. dissertation, UCLA, 1996. (This dissertation is currently being revised for publication as a book tentatively entitled, Honor and Profit: Athenian Trade Policy, 415-307 B.C.E.)

Examines Athenian state honors for those performing services relating to trade and argues for a revision of some aspects of the Finley model.

Engen, D.T. “Trade, Traders, and the Economy of Athens in the Fourth Century B.C.E.” In Prehistory and History: Ethnicity, Class, and Political Economy, edited by David W. Tandy, 179-202. Montreal: Black Rose, 2001.

Argues for the diversity of those responsible for trade involving Classical Athens.

Engen, D.T. “Ancient Greenbacks: Athenian Owls, the Law of Nikophon, and the Ancient Greek Economy.” Historia, forthcoming(a).

Argues that the numismatic policies of Athens may indicate a state interest in exports.

Engen, D.T. “Seeing the Forest for the Trees of the Ancient Economy.” Ancient History Bulletin, forthcoming(b).

A review article of Meadows and Shipton, 2001, and Scheidel and von Reden, 2002, that argues for the mutual compatibility of broad and detailed studies of the ancient Greek and Roman economies.

Finley, M.I. The World of Odysseus, revised edition. Harmondsworth: Penguin, 1965.

A brief and highly readable survey of the early Archaic period.

Fisher, N.R.E. Slavery in Classical Greece. London: Bristol Classical Press, 1993.

A brief survey.

Garlan, Y. Slavery in Ancient Greece, revised edition. Ithaca: Cornell University Press, 1988.

The standard survey of slavery in ancient Greece.

Garnsey, P. Famine and Food Supply in the Greco-Roman World. Cambridge: Cambridge University Press, 1988.

Examines private and public strategies to ensure food supplies.

Isager, S. and J.E. Skydsgaard. Ancient Greek Agriculture: An Introduction. London: Routledge, 1992.

Kim, H.S. “Archaic Coinage as Evidence for the Use of Money.” In Money and Its Uses in the Ancient Greek World, edited by A. Meadows and K. Shipton, 7-21. Oxford: Oxford University Press, 2001.

Argues that the existence of large quantities of small-denomination coins from the earliest period of coinage in ancient Greece is evidence of the economic use of coinage.

Kraay, C.M. Archaic and Classical Greek Coins. Berkeley: University of California Press, 1976.

Long the standard survey of ancient Greek coinage.

Kurke, L. The Traffic in Praise: Pindar and the Poetics of Social Economy. Ithaca: Cornell University Press, 1991.

Takes the new cultural history approach to analyzing the poetry of Pindar and how it represents money within the social and political value system of ancient Greece.

Kurke, L. Coins, Bodies, Games, and Gold: The Politics of Meaning in Archaic Greece. Princeton: Princeton University Press, 1999.

Millett, P. Lending and Borrowing in Ancient Athens. Cambridge: Cambridge University Press, 1991.

Reinforces the Finley model by arguing that lending and borrowing was primarily for consumptive purposes and embedded among traditional communal values in Athens.

Osborne, R. Classical Landscape with Figures: The Ancient Greek City and Its Countryside. London: George Philip, 1987.

Explores rural production and exchange within political and religious contexts.

Sallares, R. The Ecology of the Ancient Greek World. London: Duckworth, 1991.

Interdisciplinary analysis of a massive amount of information on a wide variety of aspects of the ecology of ancient Greece.

Schaps, David M. The Invention of Coinage and the Monetization of Ancient Greece. Ann Arbor: University of Michigan Press, 2004.

Shipton, K. “Money and the Elite in Classical Athens.” In Money and Its Uses in the Ancient Greek World, edited by A. Meadows and K. Shipton, 129-44. Oxford: Oxford University Press, 2001.

Argues that the elite of Athens preferred leasing high-profit silver mines to public land.

Tandy, D. Warriors into Traders: The Power of the Market in Early Greece. Berkeley: University of California Press, 1997.

Traces developments in the economy of the Archaic period and argues that they had an important impact in the formation of the basic social and political institutions of the polis.

Von Reden, S. Exchange in Ancient Greece. London: Duckworth, 1995.

Employs the methods of new cultural history to argue that exchange in ancient Greece was thoroughly embedded in non-economic social, religious, and political institutions and practices.

Von Reden, S. “Money, Law, and Exchange: Coinage in the Greek Polis.” Journal of Hellenic Studies 107 (1997): 154-176.

A cultural historical study of the representational uses of coinage in the social, political, and economic life of ancient Greece at the advent of the use of coinage.

White, K.D. Greek and Roman Technology. London: Thames and Hudson, 1984.

1 Portions of this article have appeared or will appear in other forms in Engen, 1996, Engen, 2001, Engen, Forthcoming(a), and Engen, Forthcoming(b).

2 This article will not discuss the preceding Mycenaean period (c. 1700-1100 B.C.) and “Dark Age” (c. 1100-776 B.C.). During the Mycenaean period, the ancient Greeks had primarily a Near Eastern style palace-controlled, redistributive economy, but this crumbled on account of violent disruptions and population movements, leaving Greece largely in the “dark” and the economy depressed for most of the next 300 years.

Citation: Engen, Darel. “The Economy of Ancient Greece”. EH.Net Encyclopedia, edited by Robert Whaples. July 31, 2004. URL http://eh.net/encyclopedia/the-economy-of-ancient-greece/

U.S. Agriculture in the Twentieth Century

Bruce Gardner, University of Maryland

Considering that the basic facts about twentieth century agriculture are not seriously in dispute, it is surprising how differently they are seen by different observers. One constellation of views sees the farm sector as a chronically troubled place, with farmers typically hard pressed to survive economically and continually decreasing in number. Moreover, pessimistic assessments see unwelcome trends developing over recent years, with methods of farm production environmentally suspect, farm laborers exploited, the wealth farming generates increasingly concentrated on relatively few large farms, and billions of dollars taxed from the general public for the benefit principally of those large farms. Some economists have argued that even large commercial farms constitute a sector in decline (see Blank, 1998).

An alternative constellation of views is more optimistic. It focuses on the increased acreage and output of the average farm, the sustained growth of agricultural productivity even through the general productivity slump of the 1980s, the substantial improvements in income and wealth of commercial farmers, the predominant role of the United States in world commodity markets, and American leadership in supplying both technological innovation and food aid for the developing world. As Heady (1976) put the case, “the U.S. has had the best, the most logical and the most successful program of agricultural development of any country in the world” (p. 77).

Basic Facts and Trends

The generally accepted facts include[1]:

Rising Productivity

Between 1930 and 2000 U.S. agricultural output approximately quadrupled, while the United States Department of Agriculture’s (USDA) index of aggregate inputs (land, labor, capital and other material inputs) remained essentially unchanged. Thus, multifactor productivity (output divided by all inputs) rose by an average of about 2 percent annually over this period. This rate substantially exceeds the rate of multifactor productivity growth in manufacturing, and the agricultural rate did not experience the slowdown that occurred in the rest of the U.S. economy during the last quarter of the century.
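The arithmetic behind this claim can be checked directly: if output roughly quadruples over seventy years while aggregate inputs stay flat, the implied average annual multifactor productivity growth is about 2 percent. A minimal sketch, using the approximate figures given above:

```python
# Compound-growth check of the productivity figures cited in the text:
# output in 2000 is about 4x the 1930 level, while the USDA input index
# is essentially unchanged, so productivity growth mirrors output growth.
output_ratio = 4.0        # approximate output in 2000 relative to 1930
years = 2000 - 1930       # 70 years

annual_growth = output_ratio ** (1 / years) - 1
print(f"{annual_growth:.4f}")  # about 0.0200, i.e. roughly 2% per year
```

This also illustrates why small differences in growth rates compound so dramatically over a long horizon.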

Falling Real Prices

Prices received by farmers for products they sell decreased by an average of 1 percent annually in real (inflation-adjusted) terms between 1900 and 2000. Real food prices paid by consumers also decreased. The percentage of U.S. disposable income spent on food prepared at home decreased, from 22 percent as late as 1950 to 7 percent by the end of the century.
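A 1 percent average annual decline sounds modest, but sustained over a century it is large. A quick sketch of the cumulative effect, assuming the steady 1 percent rate stated above:

```python
# Cumulative effect of a 1% average annual real price decline, 1900-2000.
decline_rate = 0.01
years = 100

remaining = (1 - decline_rate) ** years
print(f"{remaining:.2f}")  # about 0.37: real prices fell by roughly two-thirds
```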

Declining Number of Farms

The number of farms decreased from a peak of close to 7 million in the mid-1930s to just over 2 million in 2000. The rate of decline was most rapid in the 1950s and 1960s, and dropped off thereafter until the 1990s, when the number stayed about constant. The U.S. had an estimated 2.16 million farms in 2002 as compared to 2.11 million in 1992 (USDA, 2003, p. 2).

Rising Relative Farm Household Income

Average farm household income was substantially lower than the nonfarm average during almost the whole of the century, but it rose to about 70 percent of the nonfarm level between the end of World War II and the mid-1960s, and continued to rise thereafter, achieving parity or slightly more in the 1990s. The principal cause of this increase in average income was a rise in earnings from off-farm employment of farm operators and farm family members. By the 1990s a substantial majority of farm household income came from off-farm sources.

Increased Concentration of Production on Large Farms

Agricultural production has become highly concentrated on large farms. In 1930, when the Agriculture Census first asked about the value of farm product sales from each farm, sales per farm in the largest 10 percent of farms were 14 times the sales per farm of the smallest 10 percent. By 1992, sales in the largest 10 percent were 152 times sales in the smallest 10 percent; the largest 10 percent of farms accounted for 70 percent of all farm product sales, while at the lower end, half of all farms accounted for only 2 percent of product sales. Large farms, those with more than $250,000 of annual sales by USDA’s definition, are wealthy. Their assets, mostly land owned, had a mean value of $1.8 million according to the 1997 Census of Agriculture, which with $0.4 million average debt means a mean net worth of $1.4 million per farm.

Explanations

The driving forces behind these events that have received most attention are technological progress in farming and nonfarm economic development. Technological progress in farming results in less input required per unit output, fewer and larger farms, and lower costs of production. With competition in product markets, lower costs mean lower commodity prices. Nonetheless, returns to labor in commercial agriculture have been maintained and even increased through the opportunities provided by rising nonfarm real wages. In an “integrated” labor market, worker mobility between sectors equates wages for comparable labor in farm and nonfarm work. The integration is not only between rural and urban employment at a given location, but also between sections of the country. In 1910 farm wage rates in the Pacific Coast states were almost 3 times the level of farm wages in the South. By 1997 the difference was only 10 percent (Gardner, 2002, p. 173). For farm operator households, the USDA estimates that in 2000 mean household income was $62,000 compared to $57,000 for nonfarm households. But over 90 percent of farm household income was estimated to have come from off-farm sources (USDA 2002, p. 54).

Again, there are two different interpretations of the facts. The pessimistic view is that off-farm jobs are taken out of desperation to cushion the blow of inadequate returns from farming, and that the increasing importance of such jobs reflects the increasingly precarious status of small farms. The optimistic view is that the increase in off-farm work was a response to its greater availability, as commuting became easier and nonfarm industries moved into rural areas, and that this has become a means for farmers to enjoy the desirable aspects of farm living without having to subsist on an income well below the U.S. household average. In 1997, as estimated in the Census of Agriculture, 1.2 million farms (59 percent) had sales of less than $20,000, so even if they had zero costs, their net farm incomes would have been less than half the median U.S. household income.

Evidence for the optimistic interpretation is that the decline in farm numbers stopped in the 1990s, indicating that off-farm income is not a means of postponing small-farm business failure, but rather a long-term means of small-farm survival. The pessimistic response is that those farms may be surviving but their operators are stressed and unhappy. A similar divergence of interpretation pertains to large farms. On average they are wealthy, with incomes well above those of the average U.S. household. But the large farmers’ situation is not an economic idyll: their incomes are variable, subject to the vagaries of weather and markets, and several thousand face financial failure every year. A balanced assessment, incorporating both economic information and surveys of farmers’ views of the broader situation of their farms and communities, is that of Danbom (1995), which concludes on a guardedly optimistic note.

A complicating factor is economic instability in the agricultural economy. The trend of decreasing real farm prices has not been steady, and most notably was punctuated by price spikes during three periods in which the annual average of USDA’s index of prices received by farmers remained well above the long-term trend (1917-19, 1943-48, and 1973-74; see Figure 1). High-price periods have led farmers to take on debt and invest to an extent which has proven unsustainable, particularly in driving up land prices. This has led to periods of widespread financial distress in farming. The “farm crisis” of the 1980s is the most recent example.

Role of Government

Since the Great Depression, the fate of hard-working farmers facing low prices has drawn a governmental response in the form of commodity support programs. Even earlier, governmental involvement in the form of investment in rural roads, irrigation works, utilities, agricultural research, and education was important in farm productivity growth. From the Progressive Era of the early twentieth century, federal and state regulation has attempted to increase the market power of farmers, reduce that of processors and suppliers of farm inputs, protect food quality and safety, and provide public services such as market information and improved soil conservation and environmental quality. The extent to which governmental activity has generated benefits that exceed the costs is a matter of controversy in every area. Best accepted have been activities in research, education, and food quality and safety regulation. Most central in political debate, most costly to taxpayers, and most controversial have been commodity programs.

Commodity Support Programs

Commodity support programs have aimed to boost farmers’ receipts from commodity production in all but the highest-price years. The bulk of support has gone to the main traditional crops (grains, cotton, peanuts, tobacco) and milk; other livestock products and most fruit and vegetable crops have received only sporadic and small-scale support. Between the 1930s and the 1960s, the main mechanisms of support involved increasing the U.S. market prices of these commodities, through government purchases, supply controls, import restrictions, or export promotion. Since the 1960s, the support mechanism has increasingly been government subsidy payments made directly to farmers. From the 1930s through the 1950s, annual government payments to farmers averaged about $3 billion (in 1996 dollars). In the 1980s these payments averaged about $11 billion. In 1998-2001 they averaged $20 billion (USDA 2002, p. 54, adjusted to 1996 dollars).

Supply Reduction Programs versus Subsidies

The increase in payments does not indicate an increase in governmental direction of U.S. agriculture. The supply management programs of earlier decades had bigger market effects; indeed, the mechanism by which they supported farm income was principally by holding up the prices paid by buyers of farm products. A key reason these programs fell from favor politically is the belief that supply controls created a world market price umbrella under which other countries, most notably in Latin America, expanded their own crop acreages and reduced the demand for U.S. exports. Subsidy payments not tied to acreage reductions will instead tend to increase U.S. output and thus drive down both U.S. and world prices. Some of the strongest objectors to recent U.S. farm programs have, for this reason, been representatives of foreign agricultural producers. However, the U.S. programs have evolved over time to be less and less tied to production decisions. This “decoupling” of payments reached its peak in the Farm Act of 1996, which replaced the former “deficiency payment” program with payments that were fixed for each farmer based on the farm’s past receipt of payments. This system of payments was argued to provide little if any incentive to produce, since if a grower increased production the payments did not rise, and if a grower decreased production they did not fall. It has been argued that the main economic effect of the payments is to increase the value of cropland to which the payments are tied (for discussion see chapters in Tweeten and Thompson, 2002).

Role of Markets

Despite the salience of commodity programs in public perceptions of U.S. agriculture, the majority of farm output (by value) has no price support or other direct market intervention. Even for the program crops, it is arguable that their production history over the longer term has been little influenced by commodity programs. Market conditions, according to this view, have been more important in determining the product mix; the land, labor, and other inputs used; and innovations in production and the economic organization of farming. Throughout the twentieth century the sector remained a reasonably close approximation of the competitive supply-and-demand model.

Impact of Technological Progress and Competitive Markets

Consequently, the explanations outlined above can be well understood in basic supply-demand terms. Technological progress reduced the cost of producing farm products, and profit-seeking farmers therefore adopted the innovations embodying new technology. Competition ensured that the resulting profits were squeezed out of farmers’ hands and accrued largely to buyers of those products, with a consequent decrease in consumers’ real costs of food. Returns to farm labor, land, and capital investment were governed by changes in demand generated by technological innovation, buyers’ responses to lower prices (notably the responses of foreign buyers, evident in increased agricultural exports), and the supply conditions of the factors of production (notably the availability of non-agricultural alternatives for labor, capital, and land).

Political Economy

Farmers as an interest group in the political arena have done well in achieving legislation providing support for commodity prices and returns, public investment in rural infrastructure, and exemption from some regulatory and tax burdens that have fallen on other business sectors. This is understandable under conditions of the 1930s, when farmers’ incomes were well below those of nonfarm people and they constituted 25 percent of the nation’s population. But farmers’ political clout was more puzzling at the end of the century, when they constituted less than 2 percent of the population and on average had higher incomes and wealth than nonfarm people.

The Puzzle of Farmers’ Continued Political Clout

Disproportional representation of rural people in the U.S. Senate — inherent in a system where low-population rural states each have the same number of Senators as high-population urban states — is a source of political benefit. For many years the system of powerful authorizing and appropriations committees whose chairs were determined by seniority was seen as giving extraordinary power to long-serving Southerners with strong agricultural ties. But this advantage largely ended with the Congressional reforms of the 1960s and 1970s, so the trends in political institutions as well as economic and demographic evolution would appear to work against agriculture in the political arena. Yet outlays in support of agriculture were higher in real terms at the end of the twentieth century than at any earlier time. Why? Aspects of the situation that are likely to play a role are the organizational capability and cohesiveness of farm groups, their willingness to spend time and funds lobbying, and the general lack of serious opposition to farm interests. But an applicable and testable theory of farmers’ political influence remains out of reach. For discussion and analysis see, for example, Olson 1985, Winters 1987, Browne 1988, Abler 1989, Swinnen and van der Zee 1993, and Orden, Paarlberg, and Roe 1999.

References

Abler, David. “Vote Trading on Farm Legislation in the U.S. House.” American Journal of Agricultural Economics 71(1989): 583-591.

Blank, Steven. The End of Agriculture in the American Portfolio. Westport, CT: Quorum Books, Greenwood Publishing Group, 1998.

Browne, William P. Private Interests, Public Policy, and American Agriculture. Lawrence, KS: University Press of Kansas, 1988.

Danbom, David B. Born in the Country: A History of Rural America. Baltimore: Johns Hopkins University Press, 1995.

Gardner, Bruce L. American Agriculture in the Twentieth Century: How It Flourished and What It Cost. Cambridge, MA: Harvard University Press, 2002.

Heady, Earl O. “The Agriculture of the U.S.” In Food and Agriculture, A Scientific American Book, pp. 77-86. San Francisco: W.H. Freeman, 1976.

Hurt, R. Douglas. Problems of Plenty: The American Farmer in the Twentieth Century. Chicago: Ivan R. Dee, 2002.

Olson, Mancur. “Space, Agriculture, and Organization.” American Journal of Agricultural Economics 67(1985): 928-937.

Orden, David, Robert Paarlberg, and Terry Roe. Policy Reform in American Agriculture. Chicago: University of Chicago Press, 1999.

Swinnen, Jo, and Frans A. van der Zee. “The Political Economy of Agricultural Policies: A Survey.” European Review of Agricultural Economics 20 (1993): 261-290.

Tweeten, Luther, and Stanley R. Thompson. Agricultural Policy for the 21st Century. Ames: Iowa State Press, 2002.

United States Department of Agriculture. Agricultural Outlook. Economic Research Service, July-August 2002.

United States Department of Agriculture. “Farms and Land in Farms.” National Agricultural Statistics Service, February 2003.

Winters, L.A. “The Political Economy of the Agricultural Policy of the Industrial Countries.” European Review of Agricultural Economics 14 (1987): 285-304.

[1] Sources: Unless otherwise specified, see U.S. Department of Agriculture, Agricultural Statistics (annual) and Agricultural Outlook (tables at rear of each monthly publication).

Citation: Gardner, Bruce. “U.S. Agriculture in the Twentieth Century”. EH.Net Encyclopedia, edited by Robert Whaples. March 20, 2003. URL http://eh.net/encyclopedia/u-s-agriculture-in-the-twentieth-century/

Advertising Bans in the United States

Jon P. Nelson, Pennsylvania State University

Freedom of expression has always ranked high on the American scale of values and fundamental rights. This essay addresses regulation of “commercial speech,” which is defined as speech or messages that propose a commercial transaction. Regulation of commercial advertising occurs in several forms, but it is often controversial. In 1938, the Federal Trade Commission (FTC) was given the authority to regulate “unfair or deceptive” advertising. Congressional hearings were first held in 1939 on proposals to ban radio advertising of alcohol beverages (Russell 1940; U.S. Congress 1939, 1952). Actions by the FTC during 1964-69 led to the 1971 ban of radio and television advertising of cigarettes. In 1997, the distilled spirits industry reversed a six-decade-old policy and began using cable television advertising. Numerous groups immediately called for removal of the ads, and Rep. Joseph Kennedy II (D, MA) introduced a “Just Say No” bill that would have banned all alcohol advertisements from the airwaves. In 1998, the Master Settlement Agreement between the state attorneys general and the tobacco industry put an end to billboard advertising of cigarettes. Do these regulations make any difference for the demand for alcohol or cigarettes? When will an advertising ban increase consumer welfare? What legal standards apply to commercial speech that affect the extent and manner in which governments can restrict advertising?

For many years, the Supreme Court held that the broad powers of government to regulate commerce included the “lesser power” to restrict commercial speech.1 In Valentine (1942), the Court held that the First Amendment does not protect “purely commercial advertising.” This view was applied when the courts upheld the ban of broadcast advertising of cigarettes, 333 F. Supp. 582 (1971), affirmed per curiam, 405 U.S. 1000 (1972). However, in the mid-1970s this view began to change as the Court invalidated several state regulations affecting advertising of services and products such as abortion providers and pharmaceutical drugs. In Virginia State Board of Pharmacy (1976), the Court struck down a Virginia law that prohibited the advertising of prices for prescription drugs, and held that the First Amendment protects the right to receive information as well as the right to speak. Responding to the claim that advertising bans improved the public image of pharmacists, Justice Blackmun wrote that “an alternative [exists] to this highly paternalistic approach . . . people will perceive their own best interests if only they are well enough informed, and the best means to that end is to open the channels of communication rather than to close them” (425 U.S. 748, at 770). In support of its change in direction, the Court asserted two main arguments: (1) truthful advertising conveys information that consumers need to make informed choices in a free enterprise economy; and (2) such information is indispensable to deciding how the economic system should be regulated or governed. In Central Hudson Gas & Electric (1980), the Court refined its approach and laid out a four-prong test for “intermediate” scrutiny of restrictions on commercial speech. First, the message content cannot be misleading and must be concerned with a lawful activity or product. Second, the government’s interest in regulating the speech in question must be substantial. Third, the regulation must directly and materially advance that interest. Fourth, the regulation must be no more extensive than necessary to achieve its goal. That is, there must be a “reasonable fit” between means and ends, with the means narrowly tailored to achieve the desired objective. Applying the third and fourth prongs, in 44 Liquormart (1996) the Court struck down a Rhode Island law that banned retail price advertising of beverage alcohol. In doing so, the Court made clear that the state’s power to ban alcohol entirely did not include the lesser power to restrict advertising. More recently, in Lorillard Tobacco (2001) the Supreme Court invalidated a state regulation on placement of outdoor and in-store tobacco displays. In summary, Central Hudson requires the use of a “balancing” test to examine censorship of commercial speech. The test weighs the government’s obligations toward freedom of expression against its interest in limiting the content of some advertisements. Reasonable constraints on time, place, and manner are tolerated, and false advertising remains illegal.

This article provides a brief economic history of advertising bans, and uses the basic framework contained in the Central Hudson decision. The first section discusses the economics of advertising and addresses the economic effects that might be expected from regulations that prohibit or restrict advertising. Applying the Central Hudson test, the second section reviews the history and empirical evidence on advertising bans for alcohol beverages. The third section reviews bans of cigarette advertising and discusses the regulatory powers that reside with the Federal Trade Commission as the main government agency with the authority to regulate unfair or deceptive advertising claims.

The Economics of Advertising

Judged by the magnitude of exposures and expenditures, advertising is a vital and important activity. A rule of thumb in the advertising industry is that the average American is exposed to more than 1,000 advertising messages every day, but actively notices fewer than 80 ads. According to Advertising Age (http://www.adage.com), advertising expenditures in 2002 in all media totaled $237 billion, including $115 billion in 13 measured media. Ads in newspapers accounted for 19.2% of measured spending, followed by network TV (17.3%), magazines (15.6%), spot TV (14.0%), yellow pages (11.9%), and cable/syndicated TV (11.9%). Internet advertising now accounts for about 5.0% of spending. By product category, automobile producers were the largest advertisers ($16 billion of measured media), followed by retailing ($13.5 billion), movies and media ($6 billion), and food, beverages, and candies ($6 billion). Beverage alcohol producers ranked 17th ($1.7 billion) and tobacco producers ranked 23rd ($284 million). Among the top 100 advertisers, Anheuser-Busch occupied the 38th spot and Altria Group (which includes Philip Morris) ranked 17th. Total advertising expenditures in 2002 were about 2.3% of U.S. gross domestic product (GDP). Ad spending tends to vary directly with general economic activity, as illustrated by spending reductions during the 2000-2001 recession (Wall Street Journal, Aug. 14, 2001; Nov. 28, 2001; Dec. 12, 2001; Apr. 25, 2002). This pro-cyclical feature is contrary to Galbraith’s view that business firms use advertising to control or manage aggregate consumer demand.

National advertising of branded products developed in the early 1900s as increased urbanization and improvements in communication, transportation, and packaging permitted the development of mass markets for branded products (Chandler 1977). In 1900, the advertising-to-GDP ratio was about 3.1% (Simon 1970). The ratio stayed around 3% until 1929, but declined to 2% during the 1930s and has fluctuated around that value since then. The growth of major national industries was associated with increased promotion, although other economic changes often preceded the use of mass media advertising. For example, refrigeration of railroad cars in the late 1870s resulted in national advertising by meat packers in the 1890s (Pope 1983). Around the turn of the century, Sears Roebuck and Montgomery Ward utilized low-cost transportation and mail-order catalogs to develop efficient systems of national distribution of necessities. By 1920 more Americans were living in urban areas than in rural areas. The location of retailers began to change, with a shift first to downtown shopping districts and later to suburban shopping malls. Commercial radio began in 1922, and radio advertising expenditures grew from $113 million in 1935 to $625 million in 1952. Commercial television was introduced in 1941, but wartime delayed the diffusion of television. By 1954, half of the households in the U.S. had at least one television set. Expenditures on TV advertising grew rapidly from $454 million in 1952 to $2.5 billion in 1965 (Backman 1968). These changes affected the development of markets — for instance, new products could be introduced more rapidly and the available range of products was enhanced (Borden 1942).

Market Failure: Incomplete and Asymmetric Information

Because it is costly to acquire and process, the information held by buyers and sellers is necessarily incomplete and possibly unequal as well. However, full or “perfect” information is one of the analytical requirements for the proper functioning of competitive markets — so what happens when information is imperfect or unequal? Suppose, for example, that firms charge different prices for identical products, but some consumers (tourists) are ignorant of the dispersion of prices available in the marketplace. For many years, this question was largely ignored by economists, but two contributions sparked a revolution in economic thinking. Stigler (1961) showed that because information is costly to acquire, consumer search for lower prices will be less than complete. As a result, a dispersion of prices can persist and the “law of one price” is violated. The dispersion will be less if the product represents a large expenditure (e.g., autos), since more individual search is supported and suppliers have an extra incentive to promote the product. Because information has public good characteristics, imperfect information provides a rationale for government intervention, but profit-seeking firms also have reasons to reduce search costs through advertising and brand names. Akerlof (1970) took the analysis a step further by focusing on material aspects of a product that are known to the seller, but not by potential buyers. In Akerlof’s “lemons model,” the seller of a used car has private knowledge of defects, but potential buyers have difficulty distinguishing between good used cars (“creampuffs”) and bad used cars (“lemons”). Under these circumstances, Akerlof showed that a market may not exist or only lower-quality products are offered for sale. Hence, asymmetric information can result in market failure, but a reputation for quality can reduce the uncertainty that consumers face due to hidden defects (Akerlof 1970; Richardson 2000; Stigler 1961).
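The logic of Stigler's search argument can be made concrete with a small numerical sketch. The function below is not drawn from the literature discussed here; it simply computes the expected minimum of n independent price quotations drawn from a uniform distribution, which shows why additional search lowers the expected price paid and why costly search allows price dispersion to persist.

```python
def expected_min_price(n, low, high):
    """Expected lowest price found after n independent draws from a
    uniform price distribution on [low, high]. For the minimum of n
    uniform draws, E[min] = low + (high - low) / (n + 1)."""
    if n < 1:
        raise ValueError("at least one price must be observed")
    return low + (high - low) / (n + 1)

# Prices dispersed between $90 and $110: a consumer who checks one
# store expects to pay $100; checking three stores lowers that to $95.
print(expected_min_price(1, 90, 110))  # 100.0
print(expected_min_price(3, 90, 110))  # 95.0
```

Each additional search yields a smaller expected price reduction, so a consumer facing a positive per-search cost rationally stops before finding the lowest price in the market, and the "law of one price" fails.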

Under some conditions, branding and advertising of products, including targeting of customer groups, can help reduce market imperfections. Because advertising has several purposes or functions, there is always uncertainty regarding its effects. First, advertising may help inform consumers of the existence of products and brands, better inform them about price and quality dimensions, or better match customers and brands (Nelson 1975). Indeed, the basic message in many advertisements is simply that the brand is available. Consumer valuations can reflect a joint product, which is the product itself and the information about it. However, advertising tends to focus on only the positive aspects of a product, and ignores the negatives. In various ways, advertisers sometimes inform consumers that their brand is “less bad” (Calfee 1997b). An advertisement that announces a particular automobile is more crash resistant also is a reminder that all cars are less than perfectly safe. Second, persuasive or “combative” advertising can serve to differentiate one firm’s brand from those of its rivals. As a consequence, a successful advertiser may gain some discretion over the price it charges (“market power”). Furthermore, reactions by rivals may drive industry advertising to excessive levels or beyond the point where net social benefits of advertising are maximized. In other words, excessive advertising may result from the inability of each firm to reduce advertising without similar reductions by its rivals. Because it illustrates a breakdown of desirable coordination, this outcome is an example of the “prisoners’ dilemma game.” Third, the costs of advertising and promotion by existing or incumbent firms can make it more difficult for new firms to enter a market and compete successfully due to an advertising-cost barrier to entry. Investments in customer loyalty or intangible brand equity are largely sunk costs. 
Smaller incumbents also may be at a disadvantage relative to their larger rivals, and consequently face a “barrier to mobility” within the industry. However, banning advertising can have much the same effect by making it more difficult for smaller firms and entrants to inform customers of the existence of their brands and products. For example, Russian cigarette producers were successful in banning television advertising by new western rivals. Given multiple effects, systematic empirical evidence is needed to help resolve the uncertainties regarding the effects of advertising (Bagwell 2005).

Substantial empirical evidence demonstrates that advertising of prices increases competition and lowers the average market price and variance of prices. Conversely, banning price advertising can have the opposite effect, but consumers might derive information from other sources (such as direct observation and word-of-mouth) or firms can compete more on quality (Kwoka 1984). Bans of price advertising also affect product quality indirectly by making it difficult to inform consumers of price-quality tradeoffs. Products for which empirical evidence demonstrates that advertising reduces the average price include toys, drugs, eyeglasses, optometric services, gasoline, and grocery products. Thus, for relatively homogeneous goods, banning price advertising is expected to increase average prices and make entry more difficult. A partial offset occurs if significant advertising costs increase product prices.

The effects of a ban of persuasive advertising also are uncertain. In a differentiated product industry, it is possible that advertising expenditures are so large that an advertising ban reduces costs and product prices, thereby offsetting or defeating the purpose of the ban. For products that are well known to consumers (“mature” products), the presumption is that advertising primarily affects brand shares and has little impact on primary demand (Dekimpe and Hanssens 1995; Scherer and Ross 1990). Advertising bans tend to solidify market shares. Furthermore, most advertising bans are less than complete, such as the ban of broadcast advertising of cigarettes. Producers can substitute other media or use other forms of promotion, such as discount coupons, articles of apparel, and event sponsorship. Thus, government limitations on commercial speech for one product or media often lead to additional efforts to limit other promotions. This “slippery slope” effect is illustrated by the Federal Communications Commission’s fairness doctrine for advertising of cigarettes (discussed below).

The Industry Advertising-Sales Response Function

The effect of a given ban on market demand depends importantly on the nature of the relationship between advertising expenditures and aggregate sales. This relationship is referred to as the industry advertising-sales response function. Two questions regarding this function have been debated. First, it is not clear that a well-defined function exists at the industry level, since persuasive advertising primarily affects brand shares. The issue is the spillover, if any, from brand advertising to aggregate (primary) market demand. Two studies of successful brand advertising in the alcohol industry failed to reveal a spillover effect on market demand (Gius 1996; Nelson 2001). Second, if an industry-level response function exists, it should be subject to diminishing marginal returns, but it is unclear where diminishing returns begin (the inflection point) or how large this effect is. Some analysts argue that diminishing returns only begin at high levels of industry advertising, and that sharply increasing returns exist at moderate to low levels (Saffer 1993). According to this view, comprehensive bans of advertising will substantially reduce market demand. However, this argument is at odds with empirical evidence for a variety of mature products, which demonstrates diminishing returns over a broad range of outlays (Assmus et al. 1984; Tellis 2004). Simon and Arndt (1980) found that diminishing returns began immediately for a majority of 100-plus products. Furthermore, average advertising elasticities for most mature products are only about 0.1 in magnitude (Sethuraman and Tellis 1991). As a result, limited bans of advertising will not reduce sales of mature products, or any effect is likely to be extremely small in magnitude. It is unlikely that elasticities this small could support the third prong of the Central Hudson test.
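The magnitude argument can be quantified with back-of-the-envelope arithmetic. Assuming a constant advertising elasticity of demand (the 0.1 figure cited above), the sketch below shows why even a large cut in advertising implies only a trivial sales response; the function and the numbers are illustrative, not taken from the studies cited.

```python
def pct_sales_change(ad_elasticity, pct_ad_change):
    """First-order approximation of the percentage change in sales:
    %dQ = elasticity * %dA, holding price and other demand factors fixed."""
    return ad_elasticity * pct_ad_change

# A partial ban that eliminates 25% of industry advertising, with an
# advertising elasticity of 0.1:
print(pct_sales_change(0.1, -25.0))   # -2.5 (percent)

# Even a complete ban would imply only a 10% fall in sales, and less if
# diminishing returns make the elasticity smaller at the margin.
print(pct_sales_change(0.1, -100.0))  # -10.0 (percent)
```

A 2.5 percent demand response is the kind of effect that is difficult to distinguish from zero in aggregate data, which bears directly on whether a limited ban "directly and materially" advances the government's interest under Central Hudson.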

Suppose that advertising for a particular product convinces some consumers to use Brand X, and this results in more sales of the brand at a higher price. Are consumers better or worse off as a consequence? A shift in consumer preferences toward a fortified brand of breakfast cereal might be described as a “shift in tastes,” an increase in demand for nutrition, or an increase in joint demand for the cereal and information. Because it concerns individual utility, it is not clear whether a “shift in tastes” reduces or increases consumer satisfaction. Social commentators usually respond that consumers just think they are better off or that the demand effect is spurious in nature. Much of the social criticism of advertising is concerned with its pernicious effect on consumer beliefs, tastes, and desires. Vance Packard’s The Hidden Persuaders (1957) was an early, but possibly misguided, effort along these lines (Rogers 1992). Packard wrote that advertisers can “channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences.” Of course, once a “hidden secret” is revealed, such manipulation is less effective in the marketplace due to cynicism toward advertisers or outright rejection of the advertising claims.

Dixit and Norman (1978) argued that because profit-maximizing firms tend to over-advertise, small decreases in advertising will raise consumer welfare. In their analysis, this result holds regardless of the change in tastes or what product features are being advertised. Becker and Murphy (1993) responded that advertising is usually a complement to products, so it is unclear that equilibrium prices will always be higher as advertising increases. Further, it does not follow that social welfare is higher without any advertising. Targeting by advertisers also helps to increase the efficiency of advertising and reduces the tendency to waste advertising dollars on uninterested consumers through redundant ads. Nevertheless, this common practice also is criticized by social commentators and regulatory agencies. In summary, the evaluation of advertising bans requires empirical evidence. Much of the evidence on advertising bans is econometric and most of it concerns two products, alcohol beverages and cigarettes.

Advertising Bans: Beverage Alcohol

In an interesting way, the history of alcohol consumption follows the laws of supply and demand. The consumption of ethyl alcohol as a beverage began some 10,000 years ago. Due to the uncertainties of contaminated water supplies in the West, alcohol is believed to have been the most popular and safe daily beverage for centuries (Valle 1998). In the East, boiled water in the form of teas solved the problem of potable beverages. Throughout the Middle Ages, beer and ale were drunk by common folk and wine by the affluent. Following the decline of the Roman Empire, the Catholic Church entered the profitable production of wines. Distillation of alcohol was developed in the Arab world in 700 A.D. and gradually spread to Europe, where distilled spirits were used ineffectively as a cure for plague in the 14th century. During the 17th century, several non-alcohol beverages became popular, including coffee, tea, and cocoa. In the late eighteenth century, religious sentiment turned against alcohol and temperance activity figured prominently in the concerns of the Baptist, Friends, Methodist, Mormon, Presbyterian, and Unitarian churches. It was not until the late nineteenth century that filtration and treatment made safe drinking water supplies more widely available.

During the colonial period, retail alcohol sellers were licensed by states, local courts, or town councils (Byse 1940). Some colonies fixed the number of licenses or bonded the retailer. Fixing of maximum prices by legislatures and the courts encouraged adulteration and misbranding by retailers. In 1829, the state of Maine passed the first local option law and in 1844, the territory of Oregon enacted a general prohibition law. Experimentation with statewide monopoly of the retail sale of alcohol began in 1893 in South Carolina. As early as 1897, federal regulation of labeling was enacted through the Bottling in Bond Act. Following the repeal of Prohibition in 1933, the Federal Alcohol Control Administration was created by executive order (O’Neill 1940). The Administration immediately set about creating “fair trade codes” that governed false and misleading advertising, unfair trade practices, and prices that were “oppressively high or destructively low.” These codes discouraged price and advertising competition, and encouraged shipping expansion by the major midwestern brewers (McGahan 1991). The Administration ceased to function in 1935 when the National Industrial Recovery Act was declared unconstitutional. The passage of the Federal Alcohol Administration Act in 1935 created the Federal Alcohol Administration (FAA) within the Treasury Department, which regulated trade practices and enforced the producer permit system required by the Act. In 1939, the FAA was abolished and its duties were transferred to the Alcohol Tax Unit of the Internal Revenue Service (later named the Bureau of Alcohol, Tobacco, and Firearms). The ATF presently administers a broad range of provisions regarding the formulation, labeling, and advertising of alcohol beverages.

Alcohol Advertising: Analytical Methods

Three types of econometric studies examine the effects of advertising on the market demand for beverage alcohol. First, time-series studies examine the relationship between alcohol consumption and annual or quarterly advertising expenditures. Recent examples of such studies include Calfee and Scheraga (1994), Coulson et al. (2001), Duffy (1995, 2001), Lariviere et al. (2000), Lee and Tremblay (1992), and Nelson (1999). All of these studies find that advertising has no effect on total alcohol consumption and small or nonexistent effects on beverage demand (Nelson 2001). This result is not affected by disaggregating advertising to account for different effects by media (Nelson 1999). Second, cross-sectional and panel studies examine the relationship between alcohol consumption and state regulations, such as state bans of billboards. Panel studies combine cross-sectional information (e.g., all 50 states) with time-series information (e.g., 50 states for the period 1980-2000), which increases the amount of variation in the data. Third, cross-national studies examine the relationship between alcohol consumption and advertising bans for a panel of countries. This essay discusses results obtained in the second and third types of studies.
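The second type of study can be sketched in a few lines. The code below is purely illustrative: it builds a synthetic state-by-year panel in which a billboard-ban dummy has a true effect of zero, then runs a pooled least-squares regression of consumption on income and the ban dummy. It is a greatly simplified version of the specifications in the state-level literature, with no fixed effects, prices, or demographic controls.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic panel: 50 states observed over 20 years.
n_states, n_years = 50, 20
n_obs = n_states * n_years

# 'ban' = 1 if the state bans billboard advertising (held fixed over time
# here); its true effect on consumption is set to zero by construction.
ban = np.repeat(rng.integers(0, 2, size=n_states), n_years).astype(float)
income = rng.normal(25.0, 3.0, size=n_obs)            # per capita income ($000)
noise = rng.normal(0.0, 0.1, size=n_obs)
consumption = 2.0 + 0.05 * income + 0.0 * ban + noise  # gallons of ethanol p.c.

# Pooled OLS: regress consumption on a constant, income, and the ban dummy.
X = np.column_stack([np.ones(n_obs), income, ban])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
print(beta)  # beta[2] is the estimated ban effect
```

With a true effect of zero, the estimated ban coefficient hovers near zero, which is the pattern the billboard-ban studies report for total alcohol consumption; actual studies add state and year effects, prices, tourism, and age demographics, and test substitution across beverages.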

Background: State Regulation of Billboard Advertising

In the United States, the distribution and retail sale of alcohol beverages is regulated by the individual states. The Twenty-First Amendment, passed in 1933, repealed Prohibition and granted the states legal powers over the sale of alcohol, thereby resolving the conflicting interests of “wets” and “drys” (Goff and Anderson 1994; Munger and Schaller 1997; Shipman 1940; Strumpf and Oberholzer-Gee 2000). As a result, alcohol laws vary importantly by state, and these differences represent a natural experiment with regard to the economic effects of regulation. Long-standing differences in state laws potentially affect the organization of the industry and alcohol demand, reflecting incentives that alter or shape individual behaviors. State laws also differ by beverage, suggesting that substitution among beverages is one possible consequence of regulation. For example, state laws for distilled spirits typically are more stringent than similar laws applied to beer and wine. While each state has adopted its own unique regulatory system, several broad categories can be identified. Following repeal, eighteen states adopted public monopoly control of the distribution of distilled spirits. Thirteen of these states operate off-premise retail stores for the sale of spirits, and two states also control retail sales of table wine. In five states, only the wholesale distribution of distilled spirits is controlled. No state has monopolized beer sales, but laws in three states provide for restrictions on private beer sales by alcohol content. In the private license states, an Alcohol Beverage Control (ABC) agency determines the number and type of retail licenses, subject to local wet-dry options. Because monopoly states have broad authority to restrict the marketing of alcohol, the presumption is that total alcohol consumption will be lower in the control states compared to the license states. 
Monopoly control also raises search costs by restricting outlet numbers, hours of operation, and product variety. Because beer and wine are substitutes or complements for spirits, state monopoly control can increase or decrease total alcohol use, or the net effect may be zero (Benson et al. 1997; Nelson 1990, 2003a).

A second broad experiment involves state regulations that ban advertising of alcohol beverages or restrict the advertising of prices. Following repeal, fourteen states banned billboard advertising of distilled spirits, including seven of the license states. Because the bans have been in existence for many years and change infrequently over time, these regulations provide evidence on the long-term effectiveness of advertising bans. It is often argued that billboards have an important effect on youth behaviors, and this belief has been a basis for municipal ordinances banning billboard advertising of tobacco and alcohol. Given long-standing bans, it might be expected that effects on youth alcohol behaviors will show up as cross-state differences in adult per capita consumption. Indeed, these two variables are highly correlated (Cook and Moore 2000, 2001). Further, fifteen states banned price advertising by retailers using billboards, newspapers, and visible store displays. In general, a ban of price advertising reduces retail competition and increases the search costs of consumers. However, these regulations were not intended to advance temperance, but rather were anti-competitive measures obtained by alcohol retailers (McGahan 1995). For example, in 44 Liquormart (1996) the lower court noted that Rhode Island’s ban of price advertising was designed to protect smaller retailers from in-state and out-of-state competition, and was closely monitored by the liquor retailers association. A price advertising ban could reduce alcohol consumption by elevating full prices (search costs plus monetary prices). Because many states banned only price advertising of spirits, substitution among beverages also is a possible outcome.

Table 1 illustrates historical changes since 1935 in alcohol consumption in the United States and in three individual states, along with nominal and real advertising expenditures for the U.S. After peaking in the early 1980s, per capita alcohol consumption is now at roughly the level experienced in the early 1960s. Nationally, alcohol consumption declined by 21.0% from 1980 to 2000, despite continued high levels of advertising and promotion. At the state level, the percentage changes in consumption over the same period are Illinois, -25.3%; Ohio, -15.5%; and Pennsylvania, -20.5%. Pennsylvania is a state monopoly for spirits and wines and also banned price advertising of alcohol, including beer, prior to 1997; nevertheless, the change in its per capita consumption parallels the national trend.
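The percentage declines quoted above follow directly from the per capita figures in Table 1; a quick sketch of the arithmetic (values transcribed from the table):

```python
# Per capita ethanol consumption (gallons, ages 14+) in 1980 and 2000,
# transcribed from Table 1.
consumption = {
    "Illinois": (3.00, 2.24),
    "Ohio": (2.33, 1.97),
    "Pennsylvania": (2.39, 1.90),
    "U.S.": (2.76, 2.18),
}

for region, (c1980, c2000) in consumption.items():
    pct_change = 100 * (c2000 - c1980) / c1980
    print(f"{region}: {pct_change:.1f}%")
# Illinois: -25.3%, Ohio: -15.5%, Pennsylvania: -20.5%, U.S.: -21.0%
```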

Econometric Results: State-Level Studies of Billboard Bans

Seven econometric studies estimate the relationship between state billboard bans and alcohol consumption: Hoadley et al. (1984), Nelson (1990, 2003a), Ornstein and Hanssens (1985), Schweitzer et al. (1983), and Wilkinson (1985, 1987). Two studies used data for a single year, while the other five employed panel data covering five to 25 years. Two studies estimated demand functions for beer or distilled spirits only, which ignores substitution. None of the studies obtained a statistically significant reduction in total alcohol consumption due to billboard bans. In several studies, billboard bans significantly increased spirits consumption. A positive effect of a ban is contrary to general expectations, but consistent with various forms of substitution. The study by Nelson (2003a) covered 45 states for the period 1982-1997. In contrast to earlier studies, Nelson (2003a) focused on substitution among alcohol beverages and the resulting net effect on total ethanol consumption. Several subsamples were examined, including all 45 states, ABC-license states, and two time periods, 1982-1988 and 1989-1997. A number of other variables also were considered, including prices, income, tourism, age demographics, and the minimum drinking age. During both time periods, state billboard bans increased consumption of wine and spirits and reduced consumption of beer. The net effect on total ethanol consumption was significantly positive during 1982-1988, and insignificant thereafter. During both time periods, bans of price advertising of spirits were associated with lower consumption of spirits, higher consumption of beer, and no effect on wine or total alcohol consumption. These results demonstrate that advertising regulations have different effects by beverage, underscoring the importance of substitution; public policy statements that attribute a singular effect to limited bans ignore these market realities. The empirical results in Nelson (2003a) and other studies are consistent with the historic use of billboard bans as a device to suppress competition, with little or no effect on temperance.
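To make the panel specification concrete, the following is a minimal sketch of a state fixed-effects regression with a billboard-ban dummy. This is not the code or data from any of the studies above; the panel is synthetic, the variable names are illustrative, and the data-generating process builds in a price elasticity of -0.4 and a true ban effect of zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: 45 states observed over 16 years (mirroring 1982-1997).
n_states, n_years = 45, 16
state = np.repeat(np.arange(n_states), n_years)
year = np.tile(np.arange(n_years), n_states)

# Some states adopt a billboard ban mid-sample; others never do.
adopt_year = rng.integers(4, n_years + 8, n_states)
ban = (year >= adopt_year[state]).astype(float)

# Data-generating process: price elasticity -0.4, a TRUE ban effect of
# zero, and state-specific intercepts (the fixed effects).
log_price = rng.normal(0.0, 0.1, n_states * n_years)
alpha = rng.normal(1.0, 0.2, n_states)
log_cons = alpha[state] - 0.4 * log_price + rng.normal(0.0, 0.05, n_states * n_years)

def demean_by_group(x, g):
    """Subtract group means: the 'within' fixed-effects transformation."""
    means = np.bincount(g, weights=x) / np.bincount(g)
    return x - means[g]

y = demean_by_group(log_cons, state)
X = np.column_stack([demean_by_group(ban, state),
                     demean_by_group(log_price, state)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[0] (ban) comes out near zero; beta[1] (price) near -0.4, as built in.
print(beta)
```

The within transformation absorbs time-invariant state characteristics (drinking sentiment, tourism levels, and so on), so the ban coefficient is identified only by states that change their ban status during the sample.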

Econometric Results: Cross-National Studies of Broadcast Bans

Many Western nations have restrictions on radio and television advertising of alcohol beverages, especially distilled spirits. These controls range from time-of-day restrictions and content guidelines to outright bans of broadcast advertising of all alcohol beverages. Until quite recently, the trend in most countries has been toward stricter rather than more lenient controls. Following repeal, U.S. producers of distilled spirits adopted a voluntary Code of Good Practice that barred radio advertising after 1936 and television advertising after 1948. When this voluntary agreement ended in late 1996, cable television stations began carrying ads for distilled spirits, although the major TV networks continued to refuse such commercials. Voluntary or self-regulatory codes also have existed in a number of other countries, including Australia, Belgium, Germany, Italy, and the Netherlands. By the end of the 1980s, a number of countries had banned broadcast advertising of spirits, including Austria, Canada, Denmark, Finland, France, Ireland, Norway, Spain, Sweden, and the United Kingdom (Brewers Association of Canada 1997).

Table 1
Advertising and Alcohol Consumption (gallons of ethanol per capita, 14+ yrs)

        Illinois     Ohio         Pennsylvania  U.S.         Alcohol Ads  Real Ads    Real Ads    Percent
Year    (gal. p.c.)  (gal. p.c.)  (gal. p.c.)   (gal. p.c.)  (mil. $)     (mil. 96$)  per capita  Broadcast
1935 1.20
1940 1.56
1945 2.25
1950 2.04
1955 2.00
1960 2.07
1965 2.27 242.2 1018.5 7.50 38.7
1970 2.82 2.22 2.28 2.52 278.4 958.0 6.41 34.7
1975 2.99 2.21 2.35 2.69 395.6 979.9 5.99 44.0
1980 3.00 2.33 2.39 2.76 906.9 1580.5 8.83 55.1
1981 2.91 2.25 2.37 2.76 1014.9 1618.7 8.91 56.6
1982 2.83 2.28 2.36 2.72 1108.7 1667.0 9.07 58.1
1983 2.80 2.22 2.29 2.69 1182.9 1708.4 9.18 62.0
1984 2.77 2.26 2.25 2.65 1284.4 1788.9 9.50 66.0
1985 2.72 2.20 2.22 2.62 1293.0 1746.1 9.16 68.2
1986 2.68 2.17 2.23 2.58 1400.2 1850.6 9.61 73.5
1987 2.66 2.17 2.20 2.54 1374.7 1766.1 9.09 73.5
1988 2.64 2.11 2.11 2.48 1319.4 1639.8 8.37 74.4
1989 2.56 2.07 2.10 2.42 1200.4 1436.6 7.27 68.2
1990 2.62 2.09 2.15 2.45 1050.4 1209.7 6.10 64.8
1991 2.48 2.03 2.05 2.30 1119.5 1247.2 6.22 66.4
1992 2.43 1.98 1.99 2.30 1074.7 1172.0 5.78 68.5
1993 2.38 1.95 1.96 2.23 970.7 1030.9 5.04 70.4
1994 2.35 1.85 1.93 2.18 1000.9 1041.1 5.03 69.4
1995 2.29 1.90 1.86 2.15 1027.5 1046.4 5.00 68.2
1996 2.30 1.93 1.86 2.16 1008.8 1008.8 4.77 68.5
1997 2.26 1.91 1.84 2.14 1087.0 1069.2 5.01 66.5
1998 2.25 1.97 1.86 2.14 1187.6 1154.6 5.36 66.3
1999 2.27 2.00 1.87 2.16 1242.2 1189.5 5.45 64.2
2000 2.24 1.97 1.90 2.18 1422.6 1330.8 5.89 62.8

Sources: 1965-70 ad data from Adams-Jobson Handbooks; 1975-91 data from Impact; and 1992-2000 data from LNA/Competitive Media. Nominal data deflated by the GDP implicit price deflator (1996 = 100). Alcohol data from National Institute on Alcohol Abuse and Alcoholism, U.S. Apparent Consumption of Alcoholic Beverages (1997) and 2003 supplement. Real advertising per capita is for ages 14+ based on NIAAA and author’s population estimates.

The possible effects of broadcast bans are examined in four studies: Nelson and Young (2001), Saffer (1991), Saffer and Dave (2002), and Young (1993). Because alcohol behavior or “cultural sentiment” varies by country, it is important that the social setting be considered. In particular, the level of alcohol consumption in the wine-drinking countries is substantially greater than elsewhere: in France, Italy, Luxembourg, Portugal, and Spain, alcohol consumption is about one-third greater than the OECD average (Nelson and Young 2001). Further, 20 to 25% of consumption in the Scandinavian countries is systematically under-reported due to cross-border purchases, smuggling, and home production. In contrast to other studies, Nelson and Young (2001) accounted for these differences. The study examined alcohol demand and related behaviors in a sample of 17 OECD countries (western Europe, Canada, and the U.S.) for the period 1977 to 1995. Control variables included prices, income, tourism, age demographics, unemployment, and drinking sentiment. The results indicated that bans of broadcast advertising of spirits did not decrease per capita alcohol consumption. During the sample period, five countries (Denmark, Finland, France, Norway, Sweden) adopted broadcast bans of all alcohol beverage advertisements, apart from light beer. The regression estimates for complete bans were insignificantly positive; that is, countries with broadcast bans showed no reduction in alcohol consumption relative to countries that did not ban broadcast advertising. For the U.S., the cross-country results are consistent with studies of successful brands, studies of billboard bans, and studies of advertising expenditures (Nelson 2001). The results are inconsistent with an advertising-response function with a well-defined inflection point.

Advertising Bans: Cigarettes

Prior to 1920, consumption of tobacco in the U.S. was mainly in the form of cigars, pipe tobacco, chewing tobacco, and snuff. It was not until 1923 that cigarette consumption by weight surpassed that of cigars (Forey et al. 2002). Several early developments contributed to the rise of the cigarette (Borden 1942). First, the Bonsack cigarette-making machine was patented in 1880 and perfected in 1884 by James Duke. Second, the federal excise tax on cigarettes, instituted to help pay for the Civil War, was reduced in 1883 from $1.75 to 50 cents per thousand pieces. Third, during World War I, cigarette consumption by soldiers was encouraged by ease of use and low cost. Fourth, the taboo against public smoking by women began to wane, although participation by women remained substantially below that of men; by 1935, about 50% of men smoked compared to only 20% of women. Fifth, advertising has been credited with expanding the market for lighter blends of tobacco, although evidence in support of this claim is lacking (Tennant 1950). Some early advertising claims were linked to health, such as a 1928 ad for Lucky Strike that emphasized, “No Throat Irritation — No Cough.” During this time, the FTC banned numerous health claims by de-nicotine products and devices, e.g., 10 FTC 465 (1925).

Cigarette advertising has been especially controversial since the early 1950s, reflecting known health risks associated with smoking and the belief that advertising is a causal factor in smoking behaviors. Warning labels on cigarette packages were first proposed in 1955, following new health reports by the American Cancer Society, the British Medical Research Council, and Reader’s Digest (1952). Regulation of cigarette advertising and marketing, especially by the FTC, increased over the years to include content restrictions (1942, 1950-52); advertising guidelines (1955, 1960, 1966); package warning labels (1965, 1970, 1984); product testing and labeling (1967, 1970); public reporting on advertising trends (1964, 1967, 1981); warning messages in advertisements (1970); and advertising bans (1971, 1998). The history of these regulations is discussed below.

Background: Cigarette Prohibition and Early Health Reports

Restrictions on smoking date to the colonial era: in 1638, the Plymouth colony passed a law forbidding smoking in the streets and, in 1798, Boston banned the carrying of a lighted pipe or cigar in public. Beginning around 1850, a number of anti-tobacco groups were formed (U.S. Surgeon General 2000), including the American Anti-Tobacco Society (1849), the American Health and Temperance Association (1878), the Department of Narcotics of the Women’s Christian Temperance Union (1883), the Anti-Cigarette League (1899), and the Non-Smokers Protective League (1911). The WCTU was a force behind the cigarette prohibition movement in Canada and the U.S. During the Progressive Era, fifteen states passed laws prohibiting the sale of cigarettes to adults and another twenty-one states considered such laws (Alston et al. 2002). North Dakota and Iowa were the first states to adopt such bans, in 1896 and 1897, respectively. In West Virginia, cigarettes were taxed so heavily that they were de facto prohibited. In 1920, Lucy Page Gaston of the WCTU made a bid for the Republican nomination for president on an anti-tobacco platform. However, the movement waned as the laws proved largely unenforceable, and by 1928 cigarettes were again legal for sale to adults in every state.

As the popularity of cigarette smoking spread, so too did concerns about its health consequences. As a result, the hazards of smoking have long been common knowledge. A number of physicians took early notice of a tobacco-cancer relationship in their patients. In 1912, Isaac Adler published a book on lung cancer that implicated smoking. In 1928, adverse health effects of smoking were reported in the New England Journal of Medicine. A Scientific American report in 1933 tentatively linked cigarette “tars” to lung cancer. Writing in Science in 1938, Raymond Pearl of Johns Hopkins University demonstrated a statistical relationship between smoking and longevity (Pearl 1938). The addictive properties of nicotine were reported in 1942 in the British medical journal, The Lancet. These and other reports attracted little attention from the popular press, although Reader’s Digest (1924, 1941) was an early crusader against smoking. In 1950, three classic scientific papers appeared that linked smoking and lung cancer. Shortly thereafter, major prospective studies began to appear in 1953-54. At this time, the research findings were more widely reported in the popular press (e.g., Time 1953). In 1957, the Public Health Service accepted a causal relationship between smoking and lung cancer (Burney 1959; Joint Report 1957). Between 1950 and 1963, researchers published more than 3,000 articles on the health effects of smoking.

Cigarette Advertising: Analytical Methods

Given the rising concern about the health effects of smoking, it is not surprising that cigarette advertising would come under fire. Public health officials do not question the ability of advertising to stimulate primary demand, since in their eyes cigarette advertising is inherently deceptive; the econometric evidence is much less clear. Three methods are used to assess the relationship between cigarette consumption and advertising. First, time-series studies examine the relationship between cigarette consumption and annual or quarterly advertising expenditures. These studies have been reviewed several times, including comprehensive surveys by Cameron (1998), Duffy (1996), Lancaster and Lancaster (2003), and Simonich (1991). Most time-series studies find little or no effect of advertising on primary demand for cigarettes. For example, Duffy (1996) concluded that “advertising restrictions (including bans) have had little or no effect upon aggregate consumption of cigarettes.” A meta-analysis by Andrews and Franke (1991) found that the average elasticity of cigarette consumption with respect to advertising expenditure was only 0.142 during 1964-1970, and declined to -0.007 thereafter. Second, cross-national studies examine the relationship between per capita cigarette consumption and advertising bans for a panel of countries. Third, several time-series studies examine the effects of health scares and the 1971 ban of broadcast advertising. This essay discusses results obtained in the second and third types of econometric studies.

Econometric Results: Cross-National Studies of Broadcast Bans

Systematic tests of the effect of advertising bans are provided by four cross-national panel studies that examine annual per capita cigarette consumption among OECD countries: Laugesen and Meads (1991), Stewart (1993), Saffer and Chaloupka (2000), and Nelson (2003b). Results in the first three studies are less than convincing for several reasons. First, advertising bans might be determined endogenously together with cigarette consumption, but the earlier studies treated the bans as exogenous. To avoid the potential bias associated with endogenous regressors, Nelson (2003b) estimated a structural equation for the enabling legislation that restricts advertising. Second, annual data on cigarette consumption contain pronounced negative trends, and the data series in levels are unlikely to be stationary. Nelson (2003b) tested for unit roots and used consumption growth rates (log first-differences) to obtain stationary data series for a sample of 20 OECD countries. Third, the study also tested for structural change in the smoking-advertising relationship, motivated by the following observations: by the mid-1960s the risks associated with smoking were well known, and cigarette consumption began to decline in most countries. For example, per capita consumption in the United States reached an all-time high in 1963 and declined modestly until about 1978. Between 1978 and 1995, cigarette consumption in the U.S. declined by an average of 2.44% per year. Further, the decline in consumption was accompanied by reductions in smoking prevalence. In the U.S., male smoking prevalence declined from 52% of the population in 1965 to 33% in 1985 and 27% in 1995 (Forey et al. 2002). Smoking also is increasingly concentrated among individuals with lower incomes or less education (U.S. Public Health Service 1994). Changes in prevalence suggest that the sample of smokers is not homogeneous over time, which implies that empirical estimates may not be robust across different time periods.

Nelson (2003b) focused on total cigarettes, defined as the sum of manufactured and hand-rolled cigarettes, for 1970-1995. Data on cigarette and tobacco consumption were obtained from International Smoking Statistics (Forey et al. 2002). This comprehensive source includes estimates of sales in OECD countries for manufactured cigarettes, hand-rolled cigarettes, and total consumption by weight of all tobacco products. The data series begin around 1948 and extend to 1995. Regulatory information on advertising bans and health warnings was obtained from Health New Zealand’s International Tobacco Control Database and the World Health Organization’s International Digest of Health Legislation. For each country and year, HNZ reports the media in which cigarette advertising is banned. Nine media are covered: television, radio, cinema, outdoor, newspapers, magazines, shop ads, sponsorships, and indirect advertising such as brand names on non-tobacco products. Based on these data, three dummy variables were defined: TV-RADIO (= 1 if only television and radio are banned, zero otherwise); MODERATE (= 1 if 3 or 4 media are banned); and STRONG (= 1 if 5 or more media are banned). On average, 4 to 5 media were banned in the 1990s compared to only 1 or 2 in the 1970s. Except for Austria, Japan, and Spain, all OECD countries had enacted moderate or strong bans of cigarette advertising by 1995. In 1995, there were 9 countries in the strong category, compared to 5 in 1990, 4 in 1985, and only 3 in 1980 and earlier. Additional control variables in the study included prices, income, warning labels, unemployment rates, percent filter cigarettes, and demographics.
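The variable construction just described can be sketched as follows. The data values here are hypothetical stand-ins for the Forey et al. (2002) and Health New Zealand series, and coding a count of two banned media as TV-RADIO assumes those two are in fact television and radio:

```python
import numpy as np

# Hypothetical count of banned media (out of the 9 covered) for one
# country over six consecutive years.
media_banned = np.array([2, 2, 3, 3, 5, 6])

# Dummy coding along the lines of Nelson (2003b). The TV-RADIO line
# assumes the two banned media are television and radio.
tv_radio = (media_banned == 2).astype(int)
moderate = ((media_banned >= 3) & (media_banned <= 4)).astype(int)
strong = (media_banned >= 5).astype(int)

# Hypothetical per capita cigarette sales: log first-differences give the
# growth rates used to obtain stationary series from trending levels.
cigs_per_capita = np.array([2500.0, 2450.0, 2400.0, 2330.0, 2280.0, 2230.0])
growth = np.diff(np.log(cigs_per_capita))  # approx. annual proportional change
```

Each year falls into at most one ban category, and differencing drops the first observation of each country's series.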

The results in Nelson (2003b) indicate that cigarette consumption is determined importantly by prices, income, and exogenous country-specific factors. The dummy variables for advertising bans were never significantly negative. The income elasticity was significantly positive and the price elasticity was significantly negative; the price elasticity estimate of -0.39 is essentially identical to the consensus estimate of -0.4 for aggregate data (Chaloupka and Warner 2000). Beginning about 1985, the decline in smoking prevalence resulted in a shift in price and income elasticities. The political climate also shifted in favor of additional advertising restrictions, but these restrictions followed rather than caused the reductions in smoking and smoking prevalence, an instance of reverse causality. Thus, advertising bans had no demonstrated influence on cigarette demand in the OECD countries, including the U.S. The advertising-response model that motivates past studies is not supported by these results. The data and estimation procedures used in the three previous studies pick up the substantial declines in consumption that began in the late 1970s, which were unrelated to major changes in advertising restrictions.
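To put the price elasticity in concrete terms, a constant-elasticity demand curve (an illustrative functional form, not necessarily the exact specification in the study) implies:

```python
# With a price elasticity of -0.39 and demand Q = A * P**elasticity,
# a 10% price increase changes consumption by (1.10 ** -0.39) - 1,
# i.e. a drop of roughly 3.6%.
elasticity = -0.39
new_q_ratio = (1 + 0.10) ** elasticity
pct_change = 100 * (new_q_ratio - 1)
print(f"{pct_change:.1f}%")
```

A smaller price change gives nearly the linear approximation (elasticity times the percent price change), since the curvature matters little locally.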

Background: Regulation of Cigarettes by the Federal Trade Commission

At the urging of President Wilson, the Federal Trade Commission (FTC) was created by Congress in 1914. The Commission was given the broad mandate to prevent “unfair methods of competition.” From the very beginning, this mandate was interpreted to include false and deceptive advertising, even though advertising per se was not an antitrust issue. Indeed, the first cease-and-desist order issued by the FTC concerned false advertising, 1 FTC 13 (1916). It was the age of the patent medicines and health-claims devices. As early as 1925, FTC orders against false and misleading advertising constituted 75 percent of all orders issued each year. However, in Raladam (1931) the Supreme Court held that false advertising could be prevented only in situations where injury to a competitor could be demonstrated. The Wheeler-Lea Act of 1938 added a prohibition of “unfair or deceptive acts or practices” in or affecting commerce. This amendment broadened Section 5 of the FTC Act to include consumer interests as well as business concerns. The FTC could thereafter proceed against unfair and deceptive methods without regard to alleged effects on competitors.

As an independent regulatory agency, the FTC has rulemaking and adjudicatory authorities (Fritschler and Hoefler 1996). Its rulemaking powers are quasi-legislative, including the authority to hold hearings and trade practice conferences, subpoena witnesses, conduct investigations, and issue industry guidelines and proposals for legislation. Its adjudicatory powers are quasi-judicial, including the authority to issue cease-and-desist orders, consent decrees, injunctions, trade regulation rules, affirmative disclosure and substantiation orders, corrective advertising orders, and advisory opinions. Administrative complaints are adjudicated before an administrative law judge in trial-like proceedings. Rulemaking by the FTC is characterized by broad applicability to all firms in an industry, whereas judicial policy is based on a single case and affects directly only those named in the suit. Of course, once a precedent is established, it may affect other firms in the same situation. Lacking a well-defined constituency, except possibly small business, the FTC’s use of its manifest powers has always been controversial (Clarkson and Muris 1981; Hasin 1987; Miller 1989; Posner 1969, 1973; Stone 1977).

Beginning in 1938, the FTC used its authority to issue “unfair and deceptive” advertising complaints against the major cigarette companies. These actions, known collectively as the “health claims cases,” resulted in consent decrees or cease-and-desist orders involving several major brands during the 1940s and early 1950s. As several cases neared the final judgment phase, in September 1954 the FTC sent a letter to all companies proposing a seven-point list of advertising standards in light of “scientific developments with regard to the [health] effects of cigarette smoking.” A year later, the FTC issued its Cigarette Advertising Guides, which forbade any reference to physical effects of smoking and representations that a brand of cigarette is low in nicotine or tars that “has not been established by a competent scientific proof.” Following several articles in Reader’s Digest, cigarette advertising in 1957-1959 shifted to emphasis on tar and nicotine reduction during the “tar derby.” The FTC initially tolerated these ads if based on tests conducted by Reader’s Digest or Consumer Reports. In 1958, the FTC hosted a two-day conference on tar and nicotine testing, and in 1960 it negotiated a trade practice agreement that “all representations of low or reduced tar or nicotine, whether by filtration or otherwise, will be construed as health claims.” This action was blamed for halting a trend toward increased consumption of lower-tar cigarettes (Calfee 1997a; Neuberger 1963). The FTC vacated this agreement in 1966 when it informed the companies that it would no longer consider advertising that contained “a factual statement of tar and nicotine content” a violation of its Advertising Guides.

On January 11, 1964, the Surgeon General’s Advisory Committee on Smoking and Health issued its famous report on Smoking and Health (U.S. Surgeon General 1964). One week after the report’s release, the FTC initiated proceedings “for promulgation of trade regulation rules regarding unfair and deceptive acts or practices in the advertising and labeling of cigarettes” (notice, 29 Fed Reg 530, January 22, 1964; final rule, 29 Fed Reg 8325, July 2, 1964). The proposed Rule required that all cigarette packages and advertisements disclose prominently the statement, “Caution: Cigarette smoking is dangerous to health [and] may cause death from cancer and other diseases.” Failure to include the warning would be regarded as a violation of the FTC Act. The industry challenged the Rule on grounds that the FTC lacked the statutory authority to issue industry-wide trade rules, absent congressional guidance. The major companies also established their own Cigarette Advertising Code, which prohibited advertising aimed at minors, health-related claims, and celebrity endorsements.

The FTC’s Rule resulted in several congressional bills that culminated in the Federal Cigarette Labeling and Advertising Act of 1965 (P.L. 89-92, effective Jan. 1, 1966). The Labeling Act required each cigarette package to contain the statement, “Caution: Cigarette Smoking May Be Hazardous to Your Health.” According to the Act’s declaration of policy, the warnings were required so that “the public may be adequately informed that cigarette smoking may be hazardous to the health.” The Act also required the FTC to report annually to Congress concerning (a) the effectiveness of cigarette labeling, (b) current practices and methods of cigarette advertising and promotion, and (c) such recommendations for legislation as it may deem appropriate. Beginning in 1967, the FTC commenced its annual reporting to Congress on advertising of cigarettes. It recommended that the health warning be extended to advertising and strengthened to conform to its original proposal, and it called for research on less-hazardous cigarettes. These recommendations were repeated in 1968 and 1969, along with an added recommendation that advertising on television and radio be banned.

Several other important regulatory actions also took place in 1967-1970. First, the FTC established a laboratory to conduct standardized testing of tar and nicotine content for each brand. In November 1967, the FTC commenced public reporting of tar and nicotine levels by brand, together with reports of overall trends in smoking behaviors. Second, in June of 1967, the Federal Communications Commission (FCC) ruled that the “fairness doctrine” was applicable to cigarette advertising, which resulted in numerous free anti-smoking commercials by the American Cancer Society and other groups during July 1967 to December 1970.2 Third, in early 1969 the FCC issued a notice of proposed rulemaking to ban broadcast advertising of cigarettes (34 Fed Reg 1959, Feb. 11, 1969). The proposal was endorsed by the Television Code Review Board of the National Association of Broadcasters, and its enactment was anticipated by some industry observers. Following the FCC’s proposal, the FTC issued a notice of proposed rulemaking (34 Fed Reg 7917, May 20, 1969) to require more forceful statements on packages and extend the warnings to all advertising as a modification of its 1964 Rule in the “absence of contrary congressional direction.” Congress again superseded the FTC’s actions, and passed the Public Health Smoking Act of 1969 (P.L. 91-222, effective Nov. 1, 1970), which banned broadcast advertising after January 1, 1971 and modified the package label to read, “Warning: The Surgeon General Has Determined that Cigarette Smoking Is Dangerous to Your Health.” In 1970, the FTC negotiated agreements with the major companies to (1) disclose tar and nicotine levels in cigarette advertising using the FTC Test Method, and (2) include the health warning in advertising. By 1972, the FTC believed that it had achieved the recommendations in its initial reports to Congress.3

In summary, the FTC has engaged in continuous surveillance of cigarette advertising and marketing practices. Industry-wide regulation began in the early 1940s. As a result, the advertising of cigarettes in the U.S. is more restricted than that of other lawful consumer products. Some regulations are primarily informational (warning labels), while others affect advertising levels directly (broadcast ban). Over a six-decade period, the FTC regulated the overall direction of cigarette marketing, including advertising content and placement, warning labels, and product development. Through its testing program, it has influenced the types of cigarettes produced and consumed. The FTC engaged in continuous monitoring of cigarette advertising practices and prepared in-depth reports on those practices; it held hearings on cigarette testing, advertising, and labeling; and it issued consumer advisories on smoking. Directly or indirectly, the FTC has initiated or influenced promotional and product developments in the cigarette industry. However, it remains to be shown that these actions had an important or noticeable effect on cigarette consumption and/or industry advertising expenditures. Is there empirical evidence that federal regulation has affected aggregate cigarette consumption or advertising? If the answer is negative, or the effects are limited in magnitude, it suggests that the Congressional and FTC actions after 1964 did not add materially to information already in the marketplace, or that these actions were otherwise misguided.4

Table 2 displays information on smoking prevalence, cigarette consumption, and advertising. Smoking prevalence has declined considerably compared to the 1950s and 1960s. Consumption per capita reached an all-time high in 1963 (4,345 cigarettes per capita) and began a steep decline around 1978. By 1985, consumption was below the level experienced in 1947. Cigarette promotion has changed greatly over the years as producers substituted away from traditional advertising media. As reported by the FTC, the category of non-price promotions includes expenditures on point-of-sale displays, promotional allowances, samples, specialty items, public entertainment, direct mail, endorsements and testimonials, internet, and audio-visual ads. The shift away from media advertising reflects the broadcast and billboard bans as well as the controversies that surround advertising of cigarettes. As a result, spending on traditional media now amounts to only $356 million, or about 7% of the total marketing outlay of $5.0 billion. Clearly, regulation has affected the type of promotion, but not the overall expenditure.

Econometric Results: U.S. Time-Series Studies of the 1971 Advertising Ban

Several econometric studies examine the effects of the 1971 broadcast ban on cigarette demand, including Franke (1994), Gallet (1999), Ippolito et al. (1979), Kao and Tremblay (1988), and Simonich (1991). None of these studies found that the 1971 broadcast ban had a noticeable effect on cigarette demand. The studies by Franke and Simonich employed quarterly data on cigarette sales. The study by Ippolito et al. covered an extended time period from 1926 to 1975. The studies by Gallet and Kao and Tremblay employed simultaneous-equations methods, but each study concluded that the broadcast advertising ban did not have a significant effect on cigarette demand. Although health reports in 1953 and 1964 may have reduced the demand for tobacco, the results do not support a negative effect of the 1971 Congressional broadcast ban. By 1964 or earlier, the adverse effects of smoking appear to have been incorporated in consumers’ decisions regarding smoking. Hence, the advertising restrictions did not contribute to consumer information and therefore did not affect cigarette consumption.

Conclusions

The First Amendment protects commercial speech, although the degree of protection afforded is less than that for political speech. Commercial speech jurisprudence has changed profoundly since Congress passed a flat ban on broadcast advertising of cigarettes in 1971. The courts have recognized the vital need for consumers to be informed about market conditions, an environment that is conducive to the operation of competitive markets. The Central Hudson test requires the courts and agencies to balance the benefits and costs of censorship. The third prong of the test requires that censorship directly and materially advance a substantial goal. This essay has discussed the difficulty of establishing a material effect of limited and comprehensive bans of alcohol and cigarette advertisements.

Table 2
Advertising and Cigarette Consumption

Columns (left to right; entries are blank where data are unavailable):
Year; smoking prevalence, male (%); smoking prevalence, female (%); total cigarette sales (bil.); cigarettes per capita (ages 18+); 5-media ad spending (mil. $); non-price promotion (mil. $); real total advertising and promotion (mil. 96$); real total per capita (ages 18+).

Year Male(%) Female(%) Sales(bil.) Cigs/cap. 5-media Non-price Real total Real total/cap.
1920 44.6 665
1925 79.8 1,085
1930 119.3 1,485 26.0 213.1
1935 53 18 134.4 1,564 29.2 286.3
1940 181.9 1,976 25.3 245.6
1947 345.4 3,416 44.1 269.7 2.70
1950 54 33 369.8 3,552 65.5 375.4 3.61
1955 50 24 396.4 3,597 104.6 528.8 4.83
1960 47 27 484.4 4,171 193.1 870.2 7.53
1965 52 34 528.8 4,258 249.9 1050.9 8.49
1970 44 31 536.5 3,985 296.6 64.4 1242.3 9.26
1975 39 29 607.2 4,122 330.8 160.5 1227.3 8.28
1980 38 29 631.5 3,849 790.1 452.2 2177.9 13.29
1985 33 28 594.0 3,370 932.0 1544.4 3360.6 19.09
1986 583.8 3,274 796.3 1586.1 3163.5 17.78
1987 32 27 575.0 3,197 719.2 1861.3 3326.2 18.49
1988 31 26 562.5 3,096 824.5 1576.3 2993.1 16.44
1989 540.0 2,926 868.3 1788.7 3190.8 17.35
1990 28 23 525.0 2,817 835.2 1973.0 3246.1 17.52
1991 28 24 510.0 2,713 772.6 2054.6 3153.2 16.86
1992 28 25 500.0 2,640 621.5 2435.0 3328.1 17.62
1993 28 23 485.0 2,539 542.1 2933.9 3695.9 19.38
1994 28 23 486.0 2,524 545.1 3039.5 3733.6 19.41
1995 27 23 487.0 2,505 564.2 2982.6 3615.5 18.62
1996 487.0 2,482 578.2 3220.8 3799.0 19.37
1997 28 22 480.0 2,423 575.7 3561.4 4058.0 20.47
1998 26 22 465.0 2,320 645.6 3908.0 4412.4 22.03
1999 26 22 435.0 2,136 487.7 4659.0 4918.0 24.29
2000 26 21 430.0 2,092 355.8 5015.0 5043.0 24.53
Sources: Smoking prevalence and cigarette sales from Forey et al. (2002) and U.S. Public Health Service (1994). Data on advertising compiled by the author from FTC Reports to Congress (various issues); 1930-1940 data derived from Borden (1942). Nominal data deflated by the GDP implicit price deflator (1996=100). Advertising expenditures include TV, radio, newspaper, magazine, outdoor, and transit ads. Promotions exclude price-promotions using discount coupons and retail value-added offers ("buy one, get one free"). Real total includes advertising and non-price promotions.

Law Cases

44 Liquormart, Inc., et al. v. Rhode Island and Rhode Island Liquor Stores Assoc., 517 U.S. 484 (1996).

Central Hudson Gas & Electric Corp. v. Public Service Commission of New York, 447 U.S. 557 (1980).

Federal Trade Commission v. Raladam Co., 283 U.S. 643 (1931).

Food and Drug Administration, et al. v. Brown & Williamson Tobacco Corp., et al., 529 U.S. 120 (2000).

Lorillard Tobacco Co., et al. v. Thomas F. Reilly, Attorney General of Massachusetts, et al., 533 U.S. 525 (2001).

Red Lion Broadcasting Co. Inc., et al. v. Federal Communications Commission, et al., 395 U.S. 367 (1969).

Valentine, Police Commissioner of the City of New York v. Chrestensen, 316 U.S. 52 (1942).

Virginia State Board of Pharmacy, et al. v. Virginia Citizens Consumer Council, Inc., et al., 425 U.S. 748 (1976).

References

Akerlof, George A. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84 (1970): 488-500.

Alston, Lee J., Ruth Dupre, and Tomas Nonnenmacher. “Social Reformers and Regulation: The Prohibition of Cigarettes in the U.S. and Canada.” Explorations in Economic History 39 (2002): 425-45.

Andrews, Rick L. and George R. Franke. “The Determinants of Cigarette Consumption: A Meta-Analysis.” Journal of Public Policy & Marketing 10 (1991): 81-100.

Assmus, Gert, John U. Farley, and Donald R. Lehmann. “How Advertising Affects Sales: Meta-Analysis of Econometric Results.” Journal of Marketing Research 21 (1984): 65-74.

Backman, Jules. Advertising and Competition. New York: New York University Press, 1967.

Bagwell, Kyle. “The Economic Analysis of Advertising.” In Handbook of Industrial Organization, vol. 3, edited by M. Armstrong and R. Porter. Amsterdam: North-Holland, forthcoming 2005.

Becker, Gary and Kevin Murphy. “A Simple Theory of Advertising as a Good or Bad,” Quarterly Journal of Economics 108 (1993): 941-64.

Benson, Bruce L., David W. Rasmussen, and Paul R. Zimmerman. “Implicit Taxes Collected by State Liquor Monopolies.” Public Choice 115 (2003): 313-31.

Borden, Neil H. The Economic Effects of Advertising. Chicago: Irwin, 1942.

Brewers Association of Canada. Alcoholic Beverage Taxation and Control Policies: International Survey, 9th ed. Ottawa: BAC, 1997.

Burney, Leroy E. “Smoking and Lung Cancer: A Statement of the Public Health Service.” Journal of the American Medical Association 171 (1959): 135-43.

Byse, Clark. “Alcohol Beverage Control Before Repeal.” Law and Contemporary Problems 7 (1940): 544-69.

Calfee, John E. “The Ghost of Cigarette Advertising Past.” Regulation 20 (1997a): 38-45.

Calfee, John E. Fear of Persuasion: A New Perspective on Advertising and Regulation. LaVergne, TN: AEI, 1997b.

Calfee, John E. and Carl Scheraga. “The Influence of Advertising on Alcohol Consumption: A Literature Review and an Econometric Analysis of Four European Nations.” International Journal of Advertising 13 (1994): 287-310.

Cameron, Sam. “Estimation of the Demand for Cigarettes: A Review of the Literature.” Economic Issues 3 (1998): 51-72.

Chaloupka, Frank J. and Kenneth E. Warner. “The Economics of Smoking.” In The Handbook of Health Economics, vol. 1B, edited by A.J. Culyer and J.P. Newhouse, 1539-1627. New York: Elsevier, 2000.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: Belknap Press, 1977.

Clarkson, Kenneth W. and Timothy J. Muris, eds. The Federal Trade Commission since 1970: Economic Regulation and Bureaucratic Behavior. Cambridge: Cambridge University Press, 1981.

Cook, Philip J. and Michael J. Moore. “Alcohol.” In The Handbook of Health Economics, vol. 1B, edited by A.J. Culyer and J.P. Newhouse, 1629-73. Amsterdam: Elsevier, 2000.

Cook, Philip J. and Michael J. Moore. “Environment and Persistence in Youthful Drinking Patterns.” In Risky Behavior Among Youths: An Economic Analysis, edited by J. Gruber, 375-437. Chicago: University of Chicago Press, 2001.

Coulson, N. Edward, John R. Moran, and Jon P. Nelson. “The Long-Run Demand for Alcoholic Beverages and the Advertising Debate: A Cointegration Analysis.” In Advertising and Differentiated Products, vol. 10, edited by M.R. Baye and J.P. Nelson, 31-54. Amsterdam: JAI Press, 2001.

Dekimpe, Marnick G. and Dominique Hanssens. “Empirical Generalizations about Market Evolution and Stationarity.” Marketing Science 14 (1995): G109-21.

Dixit, Avinash and Victor Norman. “Advertising and Welfare.” Bell Journal of Economics 9 (1978): 1-17.

Duffy, Martyn. “Advertising in Demand Systems for Alcoholic Drinks and Tobacco: A Comparative Study.” Journal of Policy Modeling 17 (1995): 557-77.

Duffy, Martyn. “Econometric Studies of Advertising, Advertising Restrictions and Cigarette Demand: A Survey.” International Journal of Advertising 15 (1996): 1-23.

Duffy, Martyn. “Advertising in Consumer Allocation Models: Choice of Functional Form.” Applied Economics 33 (2001): 437-56.

Federal Trade Commission. Staff Report on the Cigarette Advertising Investigation. Washington, DC: FTC, 1981.

Forey, Barbara, et al., eds. International Smoking Statistics, 2nd ed. London: Oxford University Press, 2002.

Franke, George R. “U.S. Cigarette Demand, 1961-1990: Econometric Issues, Evidence, and Implications.” Journal of Business Research 30 (1994): 33-41.

Fritschler, A. Lee and James M. Hoefler. Smoking and Politics: Policy Making and the Federal Bureaucracy, 5th ed. Upper Saddle River, NJ: Prentice-Hall, 1996.

Gallet, Craig A. “The Effect of the 1971 Advertising Ban on Behavior in the Cigarette Industry.” Managerial and Decision Economics 20 (1999): 299-303.

Gius, Mark P. “Using Panel Data to Determine the Effect of Advertising on Brand-Level Distilled Spirits Sales.” Journal of Studies on Alcohol 57 (1996): 73-76.

Goff, Brian and Gary Anderson. “The Political Economy of Prohibition in the United States, 1919-1933.” Social Science Quarterly 75 (1994): 270-83.

Hasin, Bernice R. Consumers, Commissions, and Congress: Law, Theory and the Federal Trade Commission, 1968-1985. New Brunswick, NJ: Transaction Books, 1987.

Hazlett, Thomas W. “The Fairness Doctrine and the First Amendment.” The Public Interest 96 (1989): 103-16.

Hoadley, John F., Beth C. Fuchs, and Harold D. Holder. “The Effect of Alcohol Beverage Restrictions on Consumption: A 25-year Longitudinal Analysis.” American Journal of Drug and Alcohol Abuse 10 (1984): 375-401.

Ippolito, Richard A., R. Dennis Murphy, and Donald Sant. Staff Report on Consumer Responses to Cigarette Health Information. Washington, DC: Federal Trade Commission, 1979.

Joint Report of the Study Group on Smoking and Health. “Smoking and Health.” Science 125 (1957): 1129-33.

Kao, Kai and Victor J. Tremblay. “Cigarette ‘Health Scare,’ Excise Taxes, and Advertising Ban: Comment.” Southern Economic Journal 54 (1988): 770-76.

Kwoka, John E. “Advertising and the Price and Quality of Optometric Services.” American Economic Review 74 (1984): 211-16.

Lancaster, Kent M. and Alyse R. Lancaster. “The Economics of Tobacco Advertising: Spending, Demand, and the Effects of Bans.” International Journal of Advertising 22 (2003): 41-65.

Lariviere, Eric, Bruno Larue, and Jim Chalfant. “Modeling the Demand for Alcoholic Beverages and Advertising Specifications.” Agricultural Economics 22 (2000): 147-62.

Laugesen, Murray and Chris Meads. “Tobacco Advertising Restrictions, Price, Income and Tobacco Consumption in OECD Countries, 1960-1986.” British Journal of Addiction 86 (1991): 1343-54.

Lee, Byunglak and Victor J. Tremblay. “Advertising and the US Market Demand for Beer.” Applied Economics 24 (1992): 69-76.

McGahan, A.M. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-1958.” Business History Review 65 (1991): 229-84.

McGahan, A.M. “Cooperation in Prices and Advertising: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-59.

Miller, James C. The Economist as Reformer: Revamping the FTC, 1981-1985. Washington, DC: American Enterprise Institute, 1989.

Munger, Michael and Thomas Schaller. “The Prohibition-Repeal Amendments: A Natural Experiment in Interest Group Influence.” Public Choice 90 (1997): 139-63.

Nelson, Jon P. “State Monopolies and Alcoholic Beverage Consumption.” Journal of Regulatory Economics 2 (1990): 83-98.

Nelson, Jon P. “Broadcast Advertising and U.S. Demand for Alcoholic Beverages.” Southern Economic Journal 66 (1999): 774-90.

Nelson, Jon P. “Alcohol Advertising and Advertising Bans: A Survey of Research Methods, Results, and Policy Implications.” In Advertising and Differentiated Products, vol. 10, edited by M.R. Baye and J.P. Nelson, 239-95. Amsterdam: JAI Press, 2001.

Nelson, Jon P. “Advertising Bans, Monopoly, and Alcohol Demand: Testing for Substitution Effects Using State Panel Data.” Review of Industrial Organization 22 (2003a): 1-25.

Nelson, Jon P. “Cigarette Demand, Structural Change, and Advertising Bans: International Evidence, 1970-1995.” Contributions to Economic Analysis & Policy 2 (2003b): 1-28. http://www.bepress.com/bejeap/contributions (electronic journal).

Nelson, Jon P. and Douglas J. Young. “Do Advertising Bans Work? An International Comparison.” International Journal of Advertising 20 (2001): 273-96.

Nelson, Phillip. “The Economic Consequences of Advertising.” Journal of Business 48 (1975): 213-41.

Neuberger, Maurine B. Smoke Screen: Tobacco and the Public Welfare. Englewood Cliffs, NJ: Prentice-Hall, 1963.

O’Neill, John E. “Federal Activity in Alcoholic Beverage Control.” Law and Contemporary Problems 7 (1940): 570-99.

Ornstein, Stanley O. and Dominique M. Hanssens. “Alcohol Control Laws and the Consumption of Distilled Spirits and Beer.” Journal of Consumer Research 12 (1985): 200-13.

Packard, Vance O. The Hidden Persuaders. New York: McKay, 1957.

Pearl, Raymond. “Tobacco Smoking and Longevity.” Science 87 (1938): 216-17.

Pope, Daniel. The Making of Modern Advertising. New York: Basic Books, 1983.

Posner, Richard A. “The Federal Trade Commission.” University of Chicago Law Review 37 (1969): 47-89.

Posner, Richard A. Regulation of Advertising by the FTC. Washington, DC: AEI, 1973.

“Does Tobacco Harm the Human Body?” (by I. Fisher). Reader’s Digest (Nov. 1924): 435.

“Nicotine Knockout, or the Slow Count” (by G. Tunney). Reader’s Digest (Dec. 1941): 21.

“Cancer by the Carton” (by R. Norr). Reader’s Digest (Dec. 1952): 7.

Richardson, Gary. “Brand Names before the Industrial Revolution.” Unpub. working paper, Department of Economics, University of California at Irvine, 2000.

Rogers, Stuart. “How a Publicity Blitz Created the Myth of Subliminal Advertising.” Public Relations Quarterly 37 (1992): 12-17.

Russell, Wallace A. “Controls Over Labeling and Advertising of Alcoholic Beverages.” Law and Contemporary Problems 7 (1940): 645-64.

Saffer, Henry. “Alcohol Advertising Bans and Alcohol Abuse: An International Perspective.” Journal of Health Economics 10 (1991): 65-79.

Saffer, Henry. “Advertising under the Influence.” In Economics and the Prevention of Alcohol-Related Problems, edited by M.E. Hilton, 125-40. Washington, DC: National Institute on Alcohol Abuse and Alcoholism, 1993.

Saffer, Henry and Frank Chaloupka. “The Effect of Tobacco Advertising Bans on Tobacco Consumption.” Journal of Health Economics 19 (2000): 1117-37.

Saffer, Henry and Dhaval Dave. “Alcohol Consumption and Alcohol Advertising Bans.” Applied Economics 34 (2002): 1325-34.

Scherer, F. M. and David Ross. Industrial Market Structure and Economic Performance. 3rd ed. Boston: Houghton Mifflin, 1990.

Schweitzer, Stuart O., Michael D. Intriligator, and Hossein Salehi. “Alcoholism.” In Economics and Alcohol: Consumption and Controls, edited by M. Grant, M. Plant, and A. Williams, 107-22. New York: Harwood, 1983.

Sethuraman, Raj and Gerard J. Tellis. “An Analysis of the Tradeoff Between Advertising and Price Discounting.” Journal of Marketing Research 28 (1991): 160-74.

Shipman, George A. “State Administrative Machinery for Liquor Control.” Law and Contemporary Problems 7 (1940): 600-20.

Simmons, Steven J. The Fairness Doctrine and the Media. Berkeley, CA: University of California Press, 1978.

Simon, Julian L. Issues in the Economics of Advertising. Urbana, IL: University of Illinois Press, 1970.

Simon, Julian L. and John Arndt. “The Shape of the Advertising Response Function.” Journal of Advertising Research 20 (1980): 11-28.

Simonich, William L. Government Antismoking Policies. New York: Peter Lang, 1991.

Stewart, Michael J. “The Effect on Tobacco Consumption of Advertising Bans in OECD Countries.” International Journal of Advertising 12 (1993): 155-80.

Stigler, George J. “The Economics of Information.” Journal of Political Economy 69 (1961): 213-25.

Stone, Alan. Economic Regulation and the Public Interest: The Federal Trade Commission in Theory and Practice. Ithaca, NY: Cornell University Press, 1977.

Strumpf, Koleman S. and Felix Oberholzer-Gee. “Local Liquor Control from 1934 to 1970.” In Public Choice Interpretations of American Economic History, edited by J.C. Heckelman, J.C. Moorhouse, and R.M. Whaples, 425-45. Boston: Kluwer Academic, 2000.

Tellis, Gerard J. Effective Advertising: Understanding When, How, and Why Advertising Works. Thousand Oaks, CA: Sage, 2004.

Tennant, Richard B. The American Cigarette Industry. New Haven, CT: Yale University Press, 1950.

“Beyond Any Doubt.” Time (Nov. 30, 1953): 60.

U.S. Congress. Senate. To Prohibit the Advertising of Alcoholic Beverages by Radio. Hearings before the Subcommittee on S. 517. 76th Congress, 1st Session. Washington, DC: U.S. Government Printing Office, 1939.

U.S. Congress. Senate. Liquor Advertising Over Radio and Television. Hearings on S. 2444. 88th Congress, 2nd Session. Washington, DC: U.S. Government Printing Office, 1952.

U.S. Public Health Service. Smoking and Health. Report of the Advisory Committee to the Surgeon General of the Public Health Service. Washington, DC: U.S. Department of Health, Education, and Welfare, 1964.

U.S. Public Health Service. Surveillance for Selected Tobacco-Use Behaviors — United States, 1900-1994. Atlanta: U.S. Department of Health and Human Services, 1994.

U.S. Public Health Service. Reducing Tobacco Use. A Report of the Surgeon General. Atlanta: U.S. Department of Health and Human Services, 2000.

Vallee, Bert L. “Alcohol in the Western World.” Scientific American 278 (1998): 80-85.

Wilkinson, James T. “Alcohol and Accidents: An Economic Approach to Drunk Driving.” Ph.D. diss., Vanderbilt University, 1985.

Wilkinson, James T. “The Effects of Regulation on the Demand for Alcohol.” Unpub. working paper, Department of Economics, University of Missouri, 1987.

Young, Douglas J. “Alcohol Advertising Bans and Alcohol Abuse: Comment.” Journal of Health Economics 12 (1993): 213-28.

Endnotes

1. See, for example, Packer Corp. v. Utah, 285 U.S. 105 (1932); Breard v. Alexandria, 341 U.S. 622 (1951); E.F. Drew v. FTC, 235 F.2d 735 (1956), cert. denied, 352 U.S. 969 (1957).

2. In 1963, the Federal Communications Commission (FCC) notified broadcast stations that they would be required to give “fair coverage” to controversial public issues (40 FCC 571). The Fairness Doctrine ruling was upheld by the Supreme Court in Red Lion Broadcasting (1969). At the request of John Banzhaf, the FCC in 1967 applied the Fairness Doctrine to cigarette advertising (8 FCC 2d 381). The FCC opined that the cigarette advertising was a “unique situation” and extension to other products “would be rare,” but Commissioner Loevinger warned that the FCC would have difficulty distinguishing cigarettes from other products (9 FCC 2d 921). The FCC’s ruling was upheld by the D.C. Circuit Court, which argued that First Amendment rights were not violated because advertising was “marginal speech” (405 F.2d 1082). During the period 1967-70, broadcasters were required to include free antismoking messages as part of their programming. In February 1969, the FCC issued a notice of proposed rulemaking to ban broadcast advertising of cigarettes, absent voluntary action by cigarette producers (16 FCC 2d 284). In December 1969, Congress passed the Smoking Act of 1969, which contained the broadcast ban (effective Jan. 1, 1971). With regard to the Fairness Doctrine, Commissioner Loevinger’s “slippery slope” fears were soon realized. During 1969-1974, the FCC received thousands of petitions for free counter-advertising for diverse products, such as nuclear power, Alaskan oil development, gasoline additives, strip mining, electric power rates, clearcutting of forests, phosphate-based detergents, trash compactors, military recruitment, children’s toys, airbags, snowmobiles, toothpaste tubes, pet food, and the United Way. In 1974, the FCC began an inquiry into the Fairness Doctrine, which concluded that “standard product commercials, such as the old cigarette ads, make no meaningful contribution toward informing the public on any side of an issue . . . 
the precedent is not at all in keeping with the basic purposes of the fairness doctrine” (48 FCC 2d 1, at 24). After numerous inquires and considerations, the FCC finally announced in 1987 that the Fairness Doctrine had a “chilling effect,” on speech generally, and could no longer be sustained as an effective public policy (2 FCC Rcd 5043). Thus ended the FCC’s experiment with regulatory enforcement of a “right to be heard” (Hazlett 1989; Simmons 1978).

3. During the remainder of the 1970s, the FTC concentrated on enforcement of its advertising regulations. It issued consent orders for unfair and deceptive advertising to force companies to include health warnings “clearly and conspicuously in all cigarette advertising.” It required 260 newspapers and 40 magazines to submit information on cigarette advertisements, and established a task force with the Department of Health, Education and Welfare to determine if newspaper ads were deceptive. In 1976, the FTC announced that it was again investigating “whether there may be deception and unfairness in the advertising and promotion of cigarettes.” It subpoenaed documents from 28 cigarette manufacturers, advertising agencies, and other organizations, including copy tests, consumer surveys, and marketing plans. Five years later, it submitted to Congress the results of this investigation in its Staff Report on Cigarette Investigation (FTC 1981). The report proposed a system of stronger rotating warnings and covered issues that had emerged regarding low-tar cigarettes, including compensatory behaviors by smokers and the adequacy of the FTC’s Test Method for determining tar and nicotine content. In 1984, President Reagan signed the Comprehensive Smoking Education Act (P.L. 98-474, effective Oct.12, 1985), which required four rotating health warnings for packages and advertising. Also, in 1984, the FTC revised its definition of deceptive advertising (103 FTC 110). In 2000, the FTC finally acknowledged the shortcoming of its tar and nicotine test method.

4. The Food and Drug Administration (FDA) has jurisdiction over cigarettes as drugs in cases involving health claims for tobacco, additives, and smoking devices. Under Dr. David Kessler, the FDA in 1996 unsuccessfully attempted to regulate all cigarettes as addictive drugs and impose advertising and other restrictions designed to reduce the appeal and use of tobacco by children (notice, 60 Fed Reg 41313, Aug. 11, 1995; final rule, 61 Fed Reg 44395, Aug. 28, 1996); vacated by FDA v. Brown & Williamson Tobacco Corporation, et al., 529 U.S. 120 (2000).

Citation: Nelson, Jon. “Advertising Bans, US”. EH.Net Encyclopedia, edited by Robert Whaples. May 20, 2004. URL http://eh.net/encyclopedia/nelson-adbans/

Debt and Slavery in the Mediterranean and Atlantic Worlds

Reviewer(s):Engerman, Stanley L.

Published by EH.Net (October 2013)

Gwyn Campbell and Alessandro Stanziani, editors, Debt and Slavery in the Mediterranean and Atlantic Worlds. London: Pickering & Chatto, 2013. xiv + 185 pp. $99 (hardcover), ISBN: 978-1-84893-374-3.

Reviewed for EH.Net by Stanley L. Engerman, Department of Economics, University of Rochester.

Debt and Slavery in the Mediterranean and Atlantic Worlds contains nine essays plus a long introduction by the co-editors, dealing with the importance of debt in leading to enslavement in many places over a long period of time. The coverage ranges from about 300 B.C. (early Rome) to 1956 (the Anglo-Egyptian Sudan) and spans various nations of Europe, Asia, and Africa.

The editors’ introduction discusses the various types of slavery and the meaning of enslavement. While they consider most slaves to be the result of wartime capture, they point to the relative importance (though few numerical estimates are given) of slavery resulting from the failure to pay debt in full, which permits the creditor to enslave the debtor, presumably for life. They note the occasional practice of self-enslavement for debt and sale of children (p. 13), but little attention is given to its major role in times of subsistence crises. They distinguish, as do several of the authors, between pawnship (the provision of collateral for loans) and debt slavery, although they indicate that these categories are often difficult to distinguish, and while pawnship may lead to slavery in some times and places, at other times and places it does not.

Marc Kleijwegt’s chapter on early Rome focuses on Moses Finley’s contention that chattel slavery began in Rome only after 326 B.C., with the abolition of the nexum as a form of temporary bondage, requiring its replacement by a different form of coerced labor. Kleijwegt argues, against Finley, that chattel slavery in Rome had begun earlier and that debt enslavement did not end in 326 B.C.; while some of the changes Finley described did take place, they were less dramatic and sharp than Finley argued, which “complicates the belief in an abrupt transition from debt bondage to chattel slavery” (p. 37).

In the most wide-ranging essay in terms of time and location, Alessandro Stanziani deals with enslavement for debt and by war captivity in several Mediterranean and Central Asian states as well as in Russia, China, and India. In some cases these were suppliers of slaves, and in others users of slaves. In most cases, although debt slavery was important, war captives played a dominant role (p. 48), reflecting the political instability and military operations that characterized these areas.

Michael Ferguson details the Ottoman Empire’s state-initiated emancipations, mainly of African slaves, from the third quarter of the nineteenth century. These may have been a minority of emancipations, but state-initiated emancipation generally led to keeping ex-slaves under state protection, where they often served in the military or performed agricultural work. Two essays on debt slavery and pawnship, by Paul Lovejoy on West Africa and Olatunji Ojo on the Yoruba, focus on the distinctions and similarities between pawnship and slavery. Pawnship, a form of providing an individual as security for debt, did not necessarily lead to slavery, although there were important legal changes over time and its conditions varied from place to place. In West Africa, as elsewhere, most slaves were the result of violence, including kidnapping, not debt. The same was apparently the case among the Yoruba, where many slaves were also the result of violence, not debt. Most pawns who were to become slaves were women and children, “whereas adult males were more likely to be taken in combat” (p. 90).

In an update of his classic article of some forty years ago, “The Africanization of the Work Force in English America,” Russell R. Menard analyzes the transition from the debt-based system of indentured labor, recruited mainly from England, to the growing importance of African slaves in the colonial Chesapeake and in Barbados. Drawing on the detailed work of Lorena Walsh and John C. Coombs on the differences in the types of tobacco produced in different parts of the Chesapeake, Menard revises the chronology and explanation of his earlier arguments. In regard to Barbados, he argues that the transition to slavery had begun prior to the sugar revolution, based on other export crops, although sugar greatly accelerated the growth in slavery.

In an attempt to link the development of commerce and credit in various parts of Europe, the Americas, and Africa to the role of slavery and the slave trade, Joseph Miller describes the role of European states and merchants in obtaining and shifting specie and funds in trading with Africa and elsewhere. While this commercialization benefited the Europeans, he argues that its effect upon African societies and economies was negative, leading to greater militarization and the need to provide slaves to pay for the debts accumulated.

Henrique Espada Lima presents a detailed examination of various forms of coerced labor in nineteenth-century Brazil, including some labor based on voluntary immigration from Portugal and the Azores. Provisions for self-purchase by slaves converted slavery into debt, with slaves paying financial compensation to their former owners (p. 131). Slavery finally ended in Brazil in 1888, seventeen years after passage of the so-called law of the free womb, with no compensation paid to either slaves or slave owners. According to Steven Serels, it was debt, not taxation, that led to the increased labor force participation in cotton production in the Anglo-Egyptian Sudan between 1898 and the coming of independence in 1956. This debt influenced both laborers and tenants, binding these cultivators to the land and preventing them from “regaining their lost independence” (p. 142) over the first half of the twentieth century.

All the essays are based upon extensive primary and secondary research, are clearly presented, and are quite useful additions to understanding the historical meaning of slavery, serfdom, pawnship, and different forms of coerced labor. As with any such diverse set of essays, there are differences in the caliber of the argument and in the authors’ perceived importance of the role of debt slavery in different times and places. Nevertheless, the great value of this collection is to indicate the widespread frequency and social importance of this particular form of enslavement.

Stanley Engerman is co-author (with Kenneth Sokoloff) of Economic Development in the Americas since 1500: Endowments and Institutions, Cambridge University Press, 2012.

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (October 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Servitude and Slavery
Geographic Area(s):General, International, or Comparative
Time Period(s):General or Comparative

The Development of American Finance

Author(s):Konings, Martijn
Reviewer(s):Redenius, Scott A.

Published by EH.Net (January 2013)

Martijn Konings, The Development of American Finance. New York: Cambridge University Press, 2011. xii + 199 pp. $90 (hardback), ISBN: 978-0-521-19525-6.

Reviewed for EH.Net by Scott A. Redenius, Department of Economics, Brandeis University.

As the blurb on the inside cover notes, the decline of the U.S.-led international financial order has been long predicted. Yet, despite financial crises and the buildup of debt, the U.S. state retains significant financial flexibility and international influence. In The Development of American Finance, Martijn Konings, Lecturer in Political Economy at the University of Sydney, looks to U.S. financial history to better understand the nature and origins of this financial order and the position of the U.S. within it. Starting with the colonial period, Konings describes how U.S. economic conditions, business practices, and politics reshaped transplanted British financial institutions to produce a more dynamic and innovative financial system that has aggressively broadened access to credit. Since World War II, U.S. financial practices and institutions have spread globally and bolstered the country’s position within the global financial system.

The book is pitched to an international political economy (IPE) audience. For this audience, Konings offers methodological critiques and distinctions such as that between the U.S. and British versions of Anglo-Saxon finance. These are used to advance Konings’ larger goal: to replace central parts of the current IPE narrative with alternative interpretations that better fit the historical evidence. That said, the book has much to offer a broader audience. Most economic historians will be interested in Konings’ revised narrative, and his account draws heavily on the work of contemporary economic writers and political and economic historians. For this broader audience, the book provides an insightful and useful survey of the evolution of the U.S. financial system with a strong emphasis on its international connections.

Konings lays out his thesis and general methodological approach in the introductory chapter. Chapters 2 through 4 focus on some of the factors that led U.S. finance to evolve away from British practice, including greater demand for agricultural credit, political fragmentation, and political pressure for decentralization. What emerged was a distinct financial system in which credit was extended on the basis of reputation, not just trade collateral; financial resources were centralized through the correspondent system rather than branch networks; and the call money market, which linked banks and the stock market, assumed the role of the bill market as an outlet for short-term funds. However, these features, combined with the lack of a central bank, also made the U.S. financial system prone to liquidity crises.

The middle chapters of the book shift between domestic and international financial developments. Chapter 5 deals primarily with the creation of the Federal Reserve System, and Chapter 7 with the New Deal financial reforms. Here, Konings argues that the usual interpretations of these reforms are incomplete. While they did seek to rein in financial excesses, the reforms aided rather than slowed the process of financial expansion (the postwar portion of this expansion is discussed in Chapter 9) by putting in place a government safety net for the financial system and promoting financial innovation. For example, New Deal financial reforms set the stage for future growth in the residential mortgage market by introducing securitization and making amortization standard for mortgage loans.

Chapters 6 and 8 consider international developments. Chapter 6 takes aim at the theories of hegemonic succession that blame the U.S. for failing to take the lead in supporting the international system during the interwar period. Konings points out that there is no reason to expect hegemonic succession to proceed in the manner suggested by the theory. Britain continued to serve as a major entrepôt and therefore, despite its relative decline, still had strong international interests. By contrast, U.S. interests remained primarily domestic given its limited foreign trade and international financial connections. This changed with the creation of the Bretton Woods system (Chapter 8), which solidified the dollar's role as a reserve currency. While many early IPE scholars identified Bretton Woods as the apogee of U.S. financial power, Konings sees it merely as a step in the expansion of U.S. influence. The later decision to abandon the system was not a sign of U.S. weakness but a move that eliminated policy constraints without compromising the country's dominant international position.

The remaining chapters integrate domestic and international developments. Chapter 10 looks at the Fed's difficulty in controlling inflation in the face of regulatory arbitrage and the growing Eurodollar market. Chapters 11 and 12 examine disinflation, neoliberalism, and financial crises. In Konings' view, the continued fiscal flexibility of the U.S. during and after the 2007-2008 financial crisis suggests that its financial power remains intact and has many years still to run. However, the rise in household indebtedness in the lead-up to the crisis and the subsequent deleveraging suggest that the financial deepening that has been a hallmark of U.S. finance has reached or surpassed its limit.

The book has many strengths. Konings provides a skilled synthesis of a wide range of secondary sources and is adept at identifying contrary evidence and logical inconsistencies in existing interpretations. Most economic historians will find the treatment of neoliberalism and financial crises of less interest than the earlier parts of the book. Here, the presentation shifts to a more general level as Konings focuses on their implications for the IPE narrative. Financial historians will also have some quibbles. There are a few points in the early chapters where direct familiarity with primary sources would have been helpful, and the citations could do a better job of pointing readers to the most relevant sources listed in the bibliography.

It is always interesting to read a financial history written by someone in another field. It provides a welcome opportunity to get a different perspective and make broader connections. I am always looking for sources that will better organize my existing knowledge or place it in a larger context, and this book did that for me. I expect other readers will find this true as well.

Scott A. Redenius is Senior Lecturer in Economics at Brandeis University. His current research focuses on antebellum branch banking systems and on the evolution of antebellum payment networks in the U.S.
Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (January 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s):North America
Time Period(s):18th Century
19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

Forms of Enterprise in 20th Century Italy: Boundaries, Structures and Strategies

Author(s):Colli, Andrea
Vasta, Michelangelo
Reviewer(s):Barbiellini Amidei, Federico

Published by EH.Net (July 2012)

Andrea Colli and Michelangelo Vasta, editors, Forms of Enterprise in 20th Century Italy: Boundaries, Structures and Strategies. Cheltenham, UK: Edward Elgar, 2010. xii + 327 pp. $147 (hardcover), ISBN: 978-1-84720-383-0.

Reviewed for EH.Net by Federico Barbiellini Amidei, Banca d'Italia.

While it is well established in the economics literature that technological change is a key ingredient for fostering growth, there is no consensus among scholars concerning countries' differing capabilities in exploiting the successive waves of innovation that, since the first Industrial Revolution, have marked modern economies. Italy, in this respect, represents an ideal case study: starting from an agriculture-based economy in the nineteenth century, it undertook a distinctive industrialization process and registered unprecedented output and total factor productivity growth in the second half of the twentieth century, which made it one of the richest countries in the world, although it has recently been hard-hit by a phase of stagnation and negative productivity growth. In this book Andrea Colli (Department of Institutional Analysis and Public Management, Bocconi University) and Michelangelo Vasta (Department of Economics, University of Siena), supported by the valuable contributions of other distinguished economic historians, turn to the ambitious task of offering the reader a broad and comprehensive reconstruction of the evolution of the Italian productive system across the different economic phases characterizing the twentieth century. By extending Chandler's classic business-history perspective, focused on large corporations, to the rich variety of business forms contributing to Italy's wealth, the authors build a conceptual framework in which they distinguish both the common features and the peculiarities of Italy's economic development with respect to those experienced by other leading economies. The task is challenging, and the detailed introduction by the editors clearly shows that the different approaches followed by economic historians in this field are still far from being reconciled. Colli and Vasta, considering a standard set of characteristics such as size, legal form, performance, and type of governance and ownership, identify seven relevant typologies of enterprise for Italy: big business, state-owned enterprises (SOEs), foreign-controlled companies, small firms, medium-sized firms, municipalized firms, and cooperatives.

The first part of the book focuses on large companies, showing how, depending on switches in technological regimes, their importance for Italy's economic growth changed over time in relation to the Italian delay in the diffusion of new technologies (Giannetti and Vasta), and documenting the relative weight of the different types of corporate ownership and financing structures (Conte and Piluso). As neatly stated in the Foreword by Franco Amatori, "big business ... was the engine of growth especially in the phases of more intensive growth." At the same time, according to the editors, "strong turbulence is a dominant feature of Italian big business, both in manufacturing and [especially] in the service sector" (p. 11); that is, Italian big corporations were often unable to consolidate their position after having successfully joined the top 200, due mainly to the impact of new technological waves in the case of manufacturing, and to the impact of major institutional changes induced by the State (in particular a sequence of nationalization and privatization processes) in the case of the service sector. The role of family-owned companies is also discussed, even if the relevance of this type of enterprise for the Italian economy's long-term competitive performance does not emerge distinctly enough in this first section of the book. In two separate essays, the crucial role of State intervention is measured (this deserves to be highlighted) and assessed, both as a direct supplier of products and services (Toninelli and Vasta) and as an enhancing mechanism for the development and consolidation of privately-owned Italian corporations, especially through sound economic and industrial policies promoting international and domestic technology transfer in the post-WWII phase (Fauri). Interestingly, while the European Recovery Program (ERP) loans accrued mostly to Italian big business to buy modern U.S. machinery, the Italian government also "passed specific financing laws for the SMEs" (p. 125) and made possible the purchase of domestically-produced machinery with ERP (counterpart) funds. Andrea Colli's essay on foreign-controlled firms as a crucial actor in Italy's developmental path is particularly innovative and rewarding. Via a quantitative investigation, foreign capital, invested in high-tech and capital-intensive industries, emerges as constantly relevant to the country's industrialization process, in particular for its crucial contribution in transferring technologies to the indigenous industrial fabric in the 1950s-60s, thanks to a "more friendly governmental attitude towards foreign investments" and new legislation on foreign direct investment (p. 102). Considering that in the early 1960s 80 percent of Italian stock market capitalization pertained to enterprises belonging to one of the eight main industrial groups, and that half of the 200 main industrial firms belonged to a group,[1] additional research could be fruitfully devoted to "not independent firms," that is, to the measurement and assessment of the nature and consequences of the affiliation of many Italian big, medium, and small firms to private (often family-controlled) and public groups.

The second section of the book is dedicated to the study of small firms and local production systems. In particular, three essays discuss the evolution of industrial clusters (Perugini and Romei), municipalized firms (Fari and Giuntini), and artisanal firms (Longoni and Rinaldi). Using a mix of quantitative and qualitative methods, the essays clearly spell out how these different forms of enterprise, supported by public intervention, coped with changes in the economic and institutional environment. Indeed, one of the main points raised is that "the Italian state played a central role in fostering the post Second World War advancement of SMEs" (p. 205), on a scale unparalleled in Europe, particularly for artisanal and micro-firms. While only future research will allow us to evaluate the relative weight of state aid and its impact on the two entrepreneurial forms,[2] the evidence provided here convincingly encourages a reconsideration of the "traditional dichotomous view of the existence of large, state-supported enterprises on the one hand, and of small and Mancunian-like, not state supported enterprises on the other hand" (p. 14). The long period covered by the authors (1900 to 1960/70) allows them to track and highlight the long-term nature of the Italian industrial districts' developmental path. This section's historical analysis of industrial districts deserves careful attention from anyone interested in understanding the peculiar structure of Italian SMEs. It emerges from the volume, for example, that their success was historically often driven by international trade trends and trade liberalizations (while, interestingly, their crucial expansion following World War II was driven by the virtuous association of export growth and internal market expansion).

The third section of the book represents a bridge between the two previous ones, as it explores in analytical detail the dynamics of changes in firm size. The two essays (Castellucci and Giannetti, and Lavista) focus on the tension faced by Italian firms between growing, consolidating, and downsizing. The crucial feature that emerges from the authors' long-run analysis is the transitory condition of the Italian medium-sized firm, with few exceptions, such as those representing the post-WWII "Made in Italy" sectors (in an appropriately enlarged definition that includes upstream mechanical suppliers of capital and intermediary goods to light consumer goods producers). Moreover, looking more generally at changes in firm size, and focusing on the firms that expanded the most during the 1930s-1970s, it appears that growth was fostered by market competition (the fastest-growing firms were mostly active in sectors characterized by relatively lower barriers to entry) and was typically associated with technology-intensive sectors. Again, strong turbulence emerges as characteristic of leap-frogging medium-sized firms, which show over the long run a high mortality rate in the period after the leap. The authors consistently challenge, for the post-1970s era, the traditional picture of an Italian business system characterized by a complete polarization between large and small (often very small) companies, highlighting in particular the emergence in recent decades of a new entrepreneurial form in Italian industrial demography: the medium-sized pocket multinational enterprise, often described as the protagonist of a new "fourth [industrial] capitalism." The challenge of identifying these firms, capable of competing in globalized markets by specializing in niches while maintaining a medium size (often emerging from the entrepreneurial seedbed of industrial districts once exposed to the strains and opportunities of globalization), and of explaining their competitive positioning in an intermediate size category, calls for a new generation of business history studies, complementary to the newly provided statistical evidence.
The final section of the book consists of a single essay (Battilani and Zamagni) exploring a type of enterprise which is quite relevant to the Italian economy (almost 6 percent of total employment in 2001, much more than in other countries) and which has expanded significantly in recent decades: the cooperative firm. It is interesting to note that, as the authors highlight, the recent successes of Italian cooperatives came in large-scale service production (an area of structural weakness for Italian private initiative), thanks to the gradual overcoming of financing constraints through access to a wider range of debt and (quasi-) risk capital, and to the formation of cooperative networks in charge of strategic coordination and common crucial business functions, rather than to the still significant State support.

The lesson we learn from this book is that there is no such thing as a free lunch in economic history; we cannot reduce the complexity of the interplay between the private and public actors of the economy to a few stand-alone elements. On the contrary, the book invites the reader to consider the interaction of the different forms of enterprise with local and national institutional changes, coupled with the opportunities offered by international trade, in order to understand the conditions that allowed the country to gain (but sometimes prevented it from gaining) from the different processes of technological advancement developed during the twentieth century. The very rich variety of subjects discussed and the widespread use of quantitative information to corroborate the analyses offer a unique opportunity to look at the evolution of the Italian economy from many different viewpoints and, by cross-checking and referencing the different essays, to draw stronger and broader conclusions than those contained in each of them alone. Echoing and paraphrasing Amatori's foreword, this is an important book because it represents: i) a successful attempt to combine the structural, institutional, and macroeconomic perspectives of economic history with the microeconomic perspective of business history, through the unifying fabric of quantitative micro, meso, and macro evidence, so as to maximize their specific strong points and overcome their specific weaknesses; and ii) a fruitful reconciliation "of the two 'souls' of Italian business history," the Chandlerian one centered on big business and the "Copernican" one based on "small businesses and non-heavy industrial sectors," so as to produce a convincing, eclectic new "localized" synthesis. This two-fold innovative character of their research project allows the authors to tackle the challenge of re-writing the Italian chapter of the "varieties of capitalism" story with useful new answers and intriguing new questions.
In conclusion, since a historical perspective on Italian enterprises is extremely useful nowadays in discussions of the new role of State intervention, the strengths and weaknesses of the Italian productive system, the windows of opportunity offered to SMEs by the globalization process, and the like, this book is greatly rewarding reading for anyone interested in deepening their knowledge of the rise and ongoing transformation of Italian capitalism.

Notes:

1. Only one quarter of these firms were listed on the stock exchange, as shown by Federico Barbiellini Amidei and Claudio Impenna (1999), "Il mercato azionario e il finanziamento delle imprese negli anni Cinquanta," in F. Cotula (ed.), Stabilità e sviluppo negli anni Cinquanta. 3. Politica bancaria e struttura del sistema finanziario, Rome-Bari: Editori Laterza.

2. For example, the impact of 58 billion lire in preferential loans to artisanal firms granted in 1963 by Artigiancassa should be compared to the 14 trillion in total loans granted by the banking system or to the 6 trillion in loans granted by the medium/long-term special credit institutions in the same year. (These data come from a study in progress at our research unit.)

Federico Barbiellini Amidei is an Economist at Banca d'Italia, Economic Research Department, Economic and Financial History Unit. His main fields of interest are the economics of innovation, Italian economic history, FDI and MNC development, corporate finance, and financial regulation in historical perspective. His recent publications include The Dynamics of Knowledge Externalities: Localized Technological Change in Italy, Edward Elgar, 2011 (with C. Antonelli); "Innovation and Foreign Technology in Italy, 1861-2011," Economic History Working Papers, 7, Rome: Bank of Italy, 2011 (with J. Cantwell and A. Spadavecchia); and "Corporate Europe in the U.S.: Olivetti's Acquisition of Underwood Fifty Years On," Business History, 2012 (with A. Goldstein).

Copyright (c) 2012 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (July 2012). All EH.Net reviews are archived at http://www.eh.net/BookReview.

Subject(s):Business History
Geographic Area(s):Europe
Time Period(s):20th Century: Pre WWII
20th Century: WWII and post-WWII