
Revealed Biodiversity: An Economic History of the Human Impact

Author(s):Jones, Eric L.
Reviewer(s):Kanazawa, Mark

Published by EH.Net (November 2014)

Eric L. Jones, Revealed Biodiversity: An Economic History of the Human Impact.  Singapore: World Scientific, 2014.  xxxiv + 257 pp. $99 (hardcover), ISBN: 978-981-4522-56-4.

Reviewed for EH.Net by Mark Kanazawa, Department of Economics, Carleton College.

Revealed Biodiversity is an ambitious attempt to tackle a big topic: the impact of human activity on long-term trends in biodiversity.  What makes this book of general interest to biologists and economists interested in the environment is the fact that there is a powerful prevailing wisdom out there — that ongoing economic growth is responsible for an inexorable, and perhaps universal, decline in non-human species, which is already being manifested in massive species extinctions and the likelihood of many more in the future.  The premise of the book is that an economic historian, taking a long-term approach, may be able to document and analyze trends in biodiversity to enable us to contextualize and critically evaluate a number of claims commonly made in public debates about biodiversity.  Properly done, such a book could be an important contribution to our understanding of the factors that influence biodiversity and the prospects for the future.

I need to say from the outset that I was prepared to be sympathetic to the arguments of this book.  As Eric Jones correctly points out, there are a great many simplistic, unfounded assertions about declining biodiversity out there.  The enduring notion that things were better in the good old days and that the earth is going to heck in a hand-basket often substitutes for critical thought.  Preservation at all costs is another persistent notion, and economists, with their focus on tradeoffs and opportunity costs, are perhaps uniquely equipped to contribute productively to public debates.

Unfortunately, this book did not live up to the high expectations that this reader, for one, held out for it.  Consider the case the book wants to make.  Contrary to the beliefs of many (non-economists), policy regarding biodiversity involves tradeoffs — we simply cannot, nor should we necessarily want to, save everything.  Even if we did, economic development has highly complex and often unforeseeable impacts on wild populations, making it extremely challenging to evaluate its overall impact.  Sometimes its effect on certain populations is even positive.  Furthermore, it is likely that different forms of economic development at different points in time have widely varying impacts on wild populations.  Indeed, it is difficult even to know the basic facts concerning such fundamental questions as how much have wild populations declined over time, and how generalized has been the loss of species.  Answering these questions requires that we establish baselines from which to measure decline.  But baselines vary depending upon timeframe, and people are in general susceptible to believing that the appropriate baseline is “how things used to be.”  Furthermore, baselines are themselves very much a function of situational economic factors, which make it unclear whether you are measuring a trend or some cyclical fluctuation.  And anyway, the number of species out there is beside the point, as what is really important is our capacity to actually observe species in the wild (what the author refers to as revealed biodiversity).

What I like about this strategy is that it attempts to go beyond simplistic notions of biodiversity trends to take a more nuanced approach.  Instead of being saddled with asking the simplistic question — what is the human impact on non-human species — the question becomes under what conditions will the human impact be more (or less) adverse to non-human species?  Answering this latter question is likely to be much more useful for formulating practical policy regarding biodiversity.  I will add that I agree with Jones’ premise that much public debate about biodiversity is largely ahistorical, because many (most?) biologists and economists do not make explicit assumptions about exactly what they are measuring, over what period they are measuring, and from what starting point.  Here is a potentially fruitful area of inquiry for economic historians, in at least helping us to understand what data there is.

But herein lies an important interpretive point.  Regardless of the assumptions one makes, just about everyone would probably agree that the trends, whatever they turn out to be, have both secular and cyclical components.  The cyclical feature is evidenced by the recent (short-term?) recovery of some species from dangerously low levels, like the California condor, American bison, and the Minnesota grey wolf.  One way to interpret Jones is that he is focusing on the cyclical component when he argues that human impacts can cause some species (locally) to flourish.  But I suspect that taking the big picture outlook, many biologists and naturalists would not share the implied optimism about the future of biodiversity.  If the secular trend is inexorably downward, they might argue, what does it really matter that some species have enjoyed a temporary reprieve when, as Keynes might have put it, in the long run they are all dead?  And with ongoing economic growth, climate change, and the world population projected to increase to over nine billion by 2050, who can really doubt that without absolutely heroic measures, the secular declines are going to vastly dominate the cyclical fluctuations?

I need to emphasize that a large part of me appreciates and applauds the exercise that Jones went through.  But if secular dominates cyclical, then taking so much time and energy simply to document economic impacts on local baselines, over whatever period and under whatever economic conditions, seems somewhat misplaced.  It is a little bit like spending a lot of time identifying where the last peak of the business cycle occurred when the economy is slipping into a massive, sustained depression.  By the end of the book, I found myself believing Jones had spent too much time illustrating, and too little time systematically analyzing, the connection between economic activity and the health of local species.  As a result, it is not entirely clear what the practical take-away message is.  I am convinced that contained in the approach taken by the book are implications — important ones — for what sorts of policies to pursue, but the book itself provided me with too little sense for exactly what they are.

Perhaps it is best to view Revealed Biodiversity as a starting point for future studies of biodiversity that take seriously the mutual interaction between the economy and the surrounding environment.  In this respect, the book is in the best tradition of environmental histories by such authors as Patricia Limerick, William Cronon, Kathryn Morse, Mark Fiege and others, which examine the complex interplay between the economy, society, and the environment.  But future studies need to go much further in helping us understand the overall picture, the likely trends, and the nature of the human impact on non-human species.

Mark Kanazawa is professor of economics and former director of environmental studies at Carleton College in Northfield, MN.  His forthcoming book, Golden Rules, examines the origins of western water rights in the California Gold Rush.

Copyright (c) 2014 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (November 2014). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Agriculture, Natural Resources, and Extractive Industries
Economywide Country Studies and Comparative History
Geographic Area(s):General, International, or Comparative
Time Period(s):17th Century
18th Century
19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

The Moral Background: An Inquiry into the History of Business Ethics

Author(s):Abend, Gabriel
Reviewer(s):Frey, Donald

Published by EH.Net (August 2014)

Gabriel Abend, The Moral Background: An Inquiry into the History of Business Ethics. Princeton: Princeton University Press, 2014. ix + 399 pp. $39.50 (cloth), ISBN: 978-0-691-15944-7.

Reviewed for EH.Net by Donald Frey, Department of Economics, Wake Forest University.

Gabriel Abend argues that a range of cultural beliefs and thought patterns provide an influential “moral background” as context for the more obvious everyday morality. Most of his book looks at business ethics during the period from the 1850s through the 1930s through the lens of the moral background concept.

Chapter 1 delineates the nature of a moral background: for instance, it defines what is properly considered a moral issue and what is not; what kinds of moral arguments are persuasive; and who are moral actors. A moral background may be shared by a whole culture, or specific sub-cultures. However, if “a common core of cultural accounts” (p. 36) is important for a moral background to exist, this would seem to require ideas common to enough people to define a true culture.  Applications to business ethics come in later chapters.

Abend’s second chapter is a preliminary look at ethics promoted by the business community itself — that is, business associations, captains of business, business schools, etc., in the 1850s-1930s era. It was a heavily utilitarian morality claiming: first, ethical business practices have a payoff; and second, business therefore should be conducted ethically. Abend carries out a careful and highly critical examination of varying versions of this logic. However, the biggest service he provides, in my opinion, is to reveal how depressingly pervasive this simplistic theme was among business moralists. Although Abend notes antecedents to this line, such as Benjamin Franklin’s advice to tradesmen, his major discussion of the moral background is held until Chapter 6.

An alternative Christian business ethic appears in Chapter 3. Protestant clergy of the era were highly critical of the ethics-pays approach, primarily because it involves impure motives. The Jewish-Christian scriptures endlessly affirm that God cares about motives of the heart. Acting ethically only in hopes of a reward is not a moral motive (even if the reward is cloaked in religiosity, such as a reward in an afterlife). Rather, a righteous heart responds to divine commands simply because they are from God. (Abend briefly notes less legalistic Christian motivations, such as love of God and neighbor, or walking in Christ’s ways.) In this chapter the “moral background” is clear because it was the high-profile Protestant morality of that period.

Chapters 4 and 5 deal further with the ethics of the business community itself — “Standards of Practice” ethics, in Abend’s nomenclature. Chapter 4 examines the early years of the Chamber of Commerce, when it used business ethics to deflect government regulation. Abend fills out the analysis with inferences about the Chamber’s implicit philosophical outlook. For example, its publications assigned Business (as distinct from lower-case businesses) an ontological status of its own; what this added to the analysis was unclear to me.

Chapter 5 addresses the emergence of graduate schools of business at major universities. The chapter ignores the substance of business ethics to discuss its rhetorical role in gaining faculty support for graduate programs of business.

Finally in chapters 6 and 7, Abend turns to his full-fledged analysis of the moral background of the two schools of business ethics — which he names Standards of Practice (the ethic of those closest to business itself) and Christian Merchant (the ethic of American Protestantism). His comparison of the two is summarized in a table. Although the table shows some overlap between the two ethical schools, I will emphasize the difference here. The fundamental question is: why be moral? Standards answers with the ethics-pays argument and the Christian Merchant answers with “because it is right,” (or for love of God and neighbor). The table notes that these answers mean Standards morality is “consequentialist,” and Merchant’s is mostly “deontological.” Further, says the table, the Standards school emphasizes doing, the Merchant school, being. The Standards school appeals to science (or claims to); the Merchant school appeals to Biblical and metaphysical arguments. Given this, the table concludes the Merchant school sees morality in absolute terms, while the Standards school tends to relativism, with exceptions. Surprisingly, despite such big differences, the work-a-day precepts of both schools (what is in the foreground, not the background) turn out to be very similar: service to clients, practicing the Golden Rule, and following professional norms (Standards school).

In this chapter, Abend asks, “where are the Standards of Practice and Christian Merchant types to be found? What are their social and organizational locations or roots…?” (p. 263). He easily locates the background of the Christian Merchant ethic, as previously noted. However, in my opinion, he does not convincingly locate the roots of the Standards school. Abend portrays the Standards morality as rooted in the views of the founders of the Standards school. But that is not a very deep background — it lacks clear links between the founders’ assertions and a more deeply rooted moral background common to a large part of society. For example, founders of the then-new business ethics assert emphatically that business and its ethics are science-based. But merely asserting this is far from demonstrating it to be so. Assertions do not demonstrate 1) the existence of a large enough science-oriented sub-culture in America at the time to serve as a moral-background that would resonate with many people; nor 2) how this scientific culture was meaningfully linked to business ethics, which would normally have affinities with the humanities.

Some founders of the Standards school also advocated moral relativism, but the chapter does not really demonstrate that moral relativism was widely prevalent in the culture of the time, or even among scientists. Scientists may view some things in relative terms, but make universal claims about other things. Further, some influential social thinkers who claimed to find moral values in “science,” such as the Social Darwinists, were nothing if not absolutist in their views (consider William Graham Sumner). Abend seems to see these weaknesses, for he ratchets down his goal for this chapter: his “aims are essentially typological” (pp. 263-64).

Despite this weak chapter, this book presents a thesis that I find credible and potentially enriching of the subject of business ethics. The author has superior familiarity with philosophy and the business ethics of the period he studies. His discussion of the Christian Merchant ethic shows a real understanding of a long history of Christian thought and practice, and its American variations (something not many social scientists seem to possess). Further, I don’t believe Chapter 6 needed to be weak. In my own work on economic moralists, something like a “moral background” appeared to be enlightening. My thesis was that economic moralities (yes, two competing moralities, just as Abend deals with two competing business ethics) drew support from alternative economic theories (again differing economic theories, just as Abend has different moral backgrounds). Perhaps economic theory is a much narrower kind of “moral background” than Abend envisions, but it is a reasonable proxy for a moral background. It is a distinct body of thought, often familiar — in one form or another — to much of the population. And economic theory can indeed support or undermine some kinds of moralities (for example, if economic outcomes are viewed as the efficient work of impersonal markets, moral concerns for equity are put on the defensive).  I think Abend might have described a convincing moral foundation in Chapter 6, perhaps by linking the Standards school to antecedents such as Benjamin Franklin (briefly noted in Chapter 2), and to ideas that were abroad in economics. Abend, I think, has a good concept, and is at least partially successful.

Donald E. Frey is author of America’s Economic Moralists: A History of Rival Ethics and Economics (SUNY Press, 2009).


Subject(s):Business History
History of Economic Thought; Methodology
Social and Cultural History, including Race, Ethnicity and Gender
Geographic Area(s):North America
Time Period(s):19th Century
20th Century: Pre WWII

Building Co-operation: A Business History of The Co-operative Group, 1863-2013

Author(s):Wilson, John F.
Webster, Anthony
Vorberg-Rugh, Rachael
Reviewer(s):Purvis, Martin

Published by EH.Net (May 2014)

John F. Wilson, Anthony Webster and Rachael Vorberg-Rugh, Building Co-operation: A Business History of The Co-operative Group, 1863-2013. Oxford: Oxford University Press, 2013. xv + 440 pp. £30/$45 (hardcover), ISBN: 978-0-19-965511-3.

Reviewed for EH.Net by Martin Purvis, School of Geography, University of Leeds.

The recent troubles of the Co-operative Group increase the interest of a welcome history of the Group and its predecessor, the Co-operative Wholesale Society (CWS). As the first comprehensive study of the CWS published since 1938, this handsomely-produced volume is an important contribution to the recent revival of academic interest in co-operation. But as an account of one of Britain’s largest and most distinctive economic institutions, the book is of wider relevance to business historians, and others interested in the economic, social and political development of modern Britain.

The book begins with a review of existing literature on the British co-operative movement and a reminder of the varied paths of co-operative development from the eighteenth century onwards. The latter is useful in putting the familiar story of the Rochdale Pioneers into context, and in highlighting early attempts to reinforce co-operative retailing with wholesale distribution. The remainder of the book is strictly chronological in structure, detailing the 150-year history of the family of co-operative organizations which began with the foundation of the North of England Co-operative Wholesale Society in 1863. The first four chapters focus on the foundation and expansion of the CWS from the 1860s to 1930s. In common with the rest of the book, this narrative benefits from unrivalled access to archival material in its account of the development of a business which grew from a regional wholesaling operation for north-west England to a major national and international enterprise. On the way it developed not only depots and warehouses across England and Wales, but also factories and farms producing a growing range of consumer requirements, including foodstuffs, clothing and furniture. Aspirations to protect British consumers from exploitation and to encourage co-operative development overseas led to direct CWS involvement in international trade and production, including ownership of tea plantations in India and Sri Lanka.

The authors argue that successful expansion of the wholesale society’s operations, which also included banking and insurance interests, gives the lie to previous criticism of co-operative managerial methods as inferior to those of private business. Their account is, however, honest in acknowledging the sometimes tense relations between the CWS and the retail societies that were its customers and its collective owners. The potential frustration created by co-operative structures of governance is a theme which has contemporary resonances and comes to dominate the second part of the book, which explores the challenges that the CWS, and co-operation in general, have faced in responding to societal and commercial change since the Second World War. As the account here explores, this period has seen co-operation face a sustained increase in commercial competition and periods of particular difficulty, including the attempted hostile take-over of the CWS. In response the CWS has been at the heart of efforts to consolidate co-operative wholesaling and retailing to create individually more viable units, and to reinterpret the movement’s ethics in ways that will resonate with twenty-first-century consumers.

The book was written before the Co-operative Group hit the headlines for all the wrong reasons; indeed its discussion of a recent co-operative renaissance may sound more optimistic than many now feel. But the chapters dealing with the evolution of the CWS since the 1960s and the series of mergers with retail co-operatives which created the Co-operative Group contain some valuable evidence of the flaws and fissures which have long weakened the co-operative movement. Drawing on both archival material and interviews with senior co-operative figures, the book shows the difficulties faced when attempting to reconcile democratic decision-making by the membership with the radical changes in operational practice and managerial structures necessary to adapt to a changing business climate.

There is a danger, however, that in attending to co-operative managerial structures, reform plans and the personalities involved, the latter chapters of this book do not pay as much attention to the actual trade of the CWS as some readers will expect. Indeed, throughout the book I would have liked to have known more about the wholesale society’s operations as a business. What exactly did it sell and to which retail societies? What were the most and least profitable elements of its business? How did it develop new products? How justified were criticisms of the quality and design of some of its products? What did post-war efforts to modernize co-operative production entail in practice? What happened to its overseas depots and tea plantations? Further questions are prompted by the book’s use of illustrations. All are well-chosen and nicely reproduced, but they stand somewhat in isolation from the text. Greater use might have been made of these illustrations to enrich discussion of, for example, the wholesale society’s activities as an advertiser, including pioneering use of promotional films; and its role during the mid-twentieth century in the design and construction of distinctively modern stores.

Arguably, too, the book’s detail can sometimes become too dense. Plentiful evidence is provided to support the book’s claim to explore the distinctive qualities of co-operative trading. The account is suitably balanced in acknowledging that the ‘co-operative difference’ is positive in its benefits for members, democracy and espousal of ethical good practice; but also problematic in the barriers sometimes placed in the path of reform by dysfunctional governance systems. A narrative account does, of course, reveal just how often co-operators have wrestled with substantially the same difficulties over recent decades. But the detail can sometimes get in the way of the reader’s understanding of the underlying issues. A more reflective, analytical tone is struck by the concluding chapter on the evolution of the co-operative business model; I would have welcomed more in this vein throughout the book as a whole. But it would be unfair to conclude on a negative comment; a book that leaves you wanting more has done a good job. It also highlights the availability of a substantial body of records held by the National Co-operative Archive in Manchester which will repay the continuing attention of researchers interested not just in business history, but in social, cultural and political life in Britain and elsewhere over the past two centuries.

Martin Purvis is currently engaged in a research project exploring the fortunes of British retailing, both private and co-operative, amid the economic changes and uncertainties of the interwar decades. m.c.purvis@leeds.ac.uk


Subject(s):Business History
Geographic Area(s):Europe
Time Period(s):19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

Commercial Agriculture, the Slave Trade and Slavery in Atlantic Africa

Editor(s):Law, Robin
Schwarz, Suzanne
Strickrodt, Silke
Reviewer(s):Engerman, Stanley L.

Published by EH.Net (January 2014)

Robin Law, Suzanne Schwarz and Silke Strickrodt, editors, Commercial Agriculture, the Slave Trade and Slavery in Atlantic Africa. Woodbridge, Suffolk: James Currey, 2013. xv + 272 pp. $90 (hardcover), ISBN: 978-1-84701-075-9.

Reviewed for EH.Net by Stanley L. Engerman, Departments of Economics and History, University of Rochester.

This collection of eleven papers plus a long introduction by the editors was derived from a 2010 conference held at the German Historical Institute London entitled “Commercial Agriculture in Africa as an Alternative to the Slave Trade.”  The papers deal with different centuries, as well as different parts of Africa, but almost all are concerned with what is called “legitimate commerce,” basically the promotion of commercial agriculture as contrasted with “trade in slaves” as well as trade in “non-agricultural commodities such as gold and ivory” (p. 2).

The development of legitimate commerce has been a long-time study of Robin Law (Emeritus Professor of African History at the University of Stirling), who has made many important contributions to the examination of this issue.  The additional co-editors are Suzanne Schwarz (Professor of History at the University of Worcester) and Silke Strickrodt (Research Fellow in Colonial History at the German Historical Institute London), both of whom have written on African and Atlantic history.  The introduction does an excellent job in laying out the arguments and describing the articles, and also provides an interesting running dialogue with another leading scholar of African economic history, Tony Hopkins.

There are two major issues discussed under the rubric of legitimate commerce.  The first is the production of provisions and foodstuffs in Africa during the period of the slave trade and afterwards.  The second is the long-standing attempt by the various European powers to increase the labor supply involved in producing other agricultural commodities, reducing the labor supply involved with the slave trade, and thus providing an economically induced ending to the slave trade and generating high incomes for African natives.  One paper that differs in topic from the others is Gerhard Seibert’s examination of the early history of the Portuguese settlement of São Tomé and Príncipe, “the first plantation economy in the tropics.”  The rapid rise and fall of these islands is described, with some attention given to a major slave revolt in São Tomé in 1595.  Toby Green discusses the production and export of rice and millet from Upper Guinea.  He points to the early importance of rice production, but does not enter into the debate about the origins of rice production in the Americas.

These studies provide several important messages for those concerned with African and American history.  As David Eltis estimates, there was considerable African production and trade in foodstuffs before, during, and after the slave trade.  Much of this production involved the use of slave labor, an institution generally accepted by African societies.  According to Robin Law the attempts by colonial powers to implement or encourage plantation agriculture in Africa using native labor, legally freed or enslaved, often were failures due to issues related to poor soil, high transport costs, and an inability to control labor.  Attempts to introduce, for export, the staples of American plantation agriculture – sugar, cotton, tobacco, indigo – were not successful, and when African cash crop agriculture expanded, the principal crops were cocoa, coffee, palm-oil, and groundnuts.  When cotton production took place in West Africa, it was inland, not in coastal areas (pp. 135-37).

It has become part of today’s conventional wisdom that the desire for “legitimate commerce” derived from the Abolitionists’ attempt to find a peaceful means of ending the slave trade, by encouragement of agricultural production on farms for sale in export markets.  This would provide a peaceful, market-motivated solution to the problem of ending the slave trade that need not interfere with the political and social order in African society.  In practice, however, legitimate commerce had some unexpected, perverse effects, since it made for an increased internal demand for slaves.  This increased internal demand offset, or at times even caused, the decline in the overseas slave trade (Gareth Austin, Bronwen Everill).

While the previous paragraphs suggest some general patterns in West Africa, the authors note differences among areas and over time.  Similarly the different colonial powers discussed – Portugal (by Roquinaldo Ferreira), England (by Law, Colleen E. Kriger, and Kehinde Olabimtan), the Netherlands (by Law), Denmark (by Per Hernaes) and Liberia (by Everill) – did not attain success, but the reasons for failure differed, as did the nature of the groups attempting these projects.  And while it is often claimed that the introduction of an argument for legitimate commerce was due to the emergence of a movement for abolition, the article by Christopher Leslie Brown traces its origins to an earlier period, primarily with the writings of the pamphleteer Malachy Postlethwayt after 1751, who argued that African commerce would be greater in the absence of the trade in slaves. These ideas were then picked up, in France by Abbé Pierre-Joseph-André Roubaud and in Denmark by Paul Isert.  Brown argues, further, that “the idea of legitimate commerce not only preceded the Abolitionist movement, but also was an important precondition for it” (p. 155), as it provided both important information about Africa and Africans as well as a possible alternative source of revenue instead of the slave trade.

These papers are all written by historians, and draw very heavily on primary research and a full knowledge of secondary sources.  They add much information about the African economies from the seventeenth to the nineteenth centuries, as well as provide new insights into the Atlantic Economy in these years.  This is an important collection of first-rate essays.

Stanley L. Engerman is the John H. Munro Professor of Economics and Professor of History at the University of Rochester.  He is the coauthor, with Kenneth L. Sokoloff, of Economic Development of the Americas since 1500: Institutions and Endowments (2012).


Subject(s):Agriculture, Natural Resources, and Extractive Industries
Servitude and Slavery
International and Domestic Trade and Relations
Geographic Area(s):Africa
Europe
Time Period(s):16th Century
17th Century
18th Century
19th Century

World Economic Performance: Past, Present and Future

Editor(s):Rao, D.S. Prasada
van Ark, Bart
Reviewer(s):Prados de la Escosura, Leandro

Published by EH.Net (January 2014)

D.S. Prasada Rao and Bart van Ark, editors, World Economic Performance: Past, Present and Future. Cheltenham, UK: Edward Elgar, 2013. ix + 432 pp. $160 (hardcover), ISBN: 978-1-84844-848-3.

Reviewed for EH.Net by Leandro Prados de la Escosura, Department of Social Sciences, Universidad Carlos III.

This volume commemorates the late Angus Maddison’s distinguished academic life and provides an idea of his wide range of intellectual interests and drive. As time goes by it becomes evident how Maddison’s approach changed the way economic historians address long-run growth. As late as the early 1980s, the study of industrialization (economic growth was not a fashionable term yet) mainly focused on sector analysis in which agricultural transformation played a major role while services were largely neglected. Maddison’s Phases of Capitalist Development (1982) led economic historians to investigate past experiences of growth with the tools used to analyze contemporary societies. As this approach required macroeconomic data, Maddison persuaded colleagues across the world to undertake the construction of historical national accounts. At the same time, in Groningen, he led a team researching international GDP comparisons. On this basis Maddison produced his widely used historical statistics of purchasing-power-adjusted GDP per head and stimulated scholars to pursue similar avenues of research as evidenced by the Maddison Project.

In World Economic Performance a group of Maddison’s friends and students contribute ten papers initially presented at conferences held to celebrate his eightieth birthday. An unpublished piece by Maddison himself on Chinese long-run performance, a two-part intellectual autobiography, an obituary, and an introduction complete this lengthy volume.

In a Maddisonian fashion, the volume encompasses the experiences of developed and developing countries. Thus, the BRIICs, that is, Brazil (as a part of Latin America), Russia, India, Indonesia, and China, along with OECD countries – the U.S., Western Europe, and Japan – are examined. Maddison’s augmented production function including proximate and ultimate causes of growth presides over the volume’s contributions that, nonetheless, focus on the post-1950 era and include a prognosis of their performance up to 2030.

In the case of the BRIICs, two phases are distinguished in their performance since 1950: an initial phase of government intervention and regulation with sluggish growth, and a later phase of liberalization and accelerated growth. Growth accounting shows that while factor accumulation prevailed in dirigiste and socialist experiments, capital deepening and efficiency improvements drove growth thereafter. Maddison and Justin Yifu Lin address poor performance and falling behind under Maoism in China (1948-1978) and accelerated growth after economic reforms were introduced. Lin attributes China’s success in its ongoing transition to a market economy to gradual reforms, in contrast with Russia’s “big bang” of privatization. He does not consider, however, whether such a transition would have been feasible in a more democratic context, as was the case in Gaidar’s Russia. In Maddison’s account, a much lower level of development, openness to international trade and capital, and the survival of the socialist state largely explain China’s success. Stanislav Menshikov adds an important dimension, the transformation of Russia from a heavy industry economy into a petrochemicals and metals exporter with dramatic consequences for income distribution. Deepak Lal’s contribution on India completes the picture with an assessment of Fabian socialist policies that led to an inward-looking industrialization strategy and of the gradual transition to market-friendly policies initiated in the 1990s in which agriculture and, especially, services made a significant contribution – while opening up is still incomplete. These four chapters provide a fascinating reflection on the implications of inward-looking policies for long run development and serve as a cautionary tale for those nostalgic for industrial policies and “state-led industrialization.”

As regards the developed world, the role of ICT in broad capital accumulation and multifactor productivity (MFP) is singled out as the crucial differential in long-run performance between the U.S. (Robert Gordon) and Western Europe (van Ark, Mary O’Mahony, and Marcel Timmer), which becomes especially germane in the case of services. Kyoji Fukao and Osamu Saito stress how the exhaustion of catching up and a declining working-age population signaled the end of Japan’s accelerated growth.

What will the future bring to emergent economies? Simple arithmetic projections based on factor endowments (human and physical capital per worker) and the technological gap suggest that differences in per capita income with the West will be reduced by 2030. However, the exhaustion of catching-up as a potential factor of deceleration in the BRIICs is only superficially addressed. The Japanese experience provides a warning to forecasters.  A glance at the convergence literature of the 1980s shows how misguided the predictions of Japan’s catching up to the U.S. were. Institutional obstacles that condition incentives to innovation are also largely neglected. How will China’s performance be affected by hard-to-avoid political reforms? Surprisingly, little attention is paid to geopolitical and strategic issues that may hamper the optimistic picture drawn by most contributors, with the exception of Ross Garnaut. The fact that economists have been unable to predict major developments over the last century challenges any prediction made on the basis of simple forward projections. China’s rivalry with India or Japan, the race to control natural resources – as opposed to their provision through international trade – and the consequences of global warming are threats to the linear progression of events that should be taken into account.
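The kind of simple forward projection criticized here can be illustrated with a minimal sketch. All starting levels and growth rates below are hypothetical placeholders, not figures from the volume; the point is only the mechanics of compound-growth extrapolation and why it is fragile when growth rates are assumed constant:

```python
# Minimal sketch of an arithmetic convergence projection.
# All numbers are illustrative assumptions, not figures from the volume.

def project(income, growth_rate, years):
    """Project per capita income forward at a constant compound growth rate."""
    return income * (1 + growth_rate) ** years

# Hypothetical starting levels (PPP-adjusted dollars) and growth rates.
us_2030 = project(45_000, 0.015, 20)     # slow growth at the frontier
china_2030 = project(8_000, 0.070, 20)   # fast catch-up growth

gap_start = 8_000 / 45_000
gap_2030 = china_2030 / us_2030
print(f"relative income: {gap_start:.2f} -> {gap_2030:.2f}")
```

The projection "works" only because the catch-up growth rate is held fixed for twenty years; as the Japanese case shows, that is exactly the assumption most likely to fail.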

Gloomy predictions are made, in turn, for Western Europe and Japan. Improving efficiency in services appears to be a crucial but far from easy prerequisite for resuming fast growth and catching up to the U.S., since the small size of manufacturing means that raising its MFP cannot by itself deliver significant catching up.

Although we should be grateful to the editors for gathering such a distinguished group of scholars, I would have liked them to go the extra mile and persuade contributors to homogenize their approach to growth accounting. Pierre van der Eng points out how crucial it is to have rigorous and homogeneous measures of factor inputs to derive comparable results. The reader would also have appreciated it if the findings from different case studies had been compared in the Introduction. Moreover, there is substantial room for improvement in the volume’s editing.

All in all, this extensive volume reads well, is thought provoking, and will stimulate further research. Angus would certainly be proud. I can only recommend that you rush to your library and get a copy before someone else does. You will not regret it!

Leandro Prados de la Escosura (leandro.prados.delaescosura@uc3m.es) is Professor of Economic History at Universidad Carlos III, Madrid, and CEPR Research Fellow. During the academic year 2013-2014 he is Leverhulme Visiting Professor at the LSE where he is researching economic freedom and wellbeing in the long run.

Copyright (c) 2014 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (January 2014). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Economic Development, Growth, and Aggregate Productivity
Geographic Area(s):General, International, or Comparative
Time Period(s):20th Century: WWII and post-WWII

The Economics of the American Revolutionary War

Ben Baack, Ohio State University

By the time of the onset of the American Revolution, Britain had attained the status of a military and economic superpower. The thirteen American colonies were one part of a global empire generated by the British in a series of colonial wars beginning in the late seventeenth century and continuing into the mid-eighteenth century. The British military establishment increased relentlessly in size during this period as it engaged in the Nine Years War (1688-97), the War of Spanish Succession (1702-13), the War of Austrian Succession (1739-48), and the Seven Years War (1756-63). These wars brought considerable additions to the British Empire. In North America alone the British victory in the Seven Years War resulted in France ceding to Britain all of its territory east of the Mississippi River as well as all of Canada and Spain surrendering its claim to Florida (Nester, 2000).

Given the sheer magnitude of the British military and its empire, the actions taken by the American colonists for independence have long fascinated scholars. Why did the colonists want independence? How were they able to achieve a victory over what was at the time the world’s preeminent military power? What were the consequences of achieving independence? These and many other questions have engaged the attention of economic, legal, military, political, and social historians. In this brief essay we will focus only on the economics of the Revolutionary War.

Economic Causes of the Revolutionary War

Prior to the conclusion of the Seven Years War there was little, if any, reason to believe that one day the American colonies would undertake a revolution in an effort to create an independent nation-state. As a part of the empire the colonies were protected from foreign invasion by the British military. In return, the colonists paid relatively few taxes and could engage in domestic economic activity without much interference from the British government. For the most part the colonists were only asked to adhere to regulations concerning foreign trade. In a series of acts passed by Parliament during the seventeenth century the Navigation Acts required that all trade within the empire be conducted on ships which were constructed, owned and largely manned by British citizens. Certain enumerated goods, whether exported or imported by the colonies, had to be shipped through England regardless of the final port of destination.

Western Land Policies

The movement for independence arose in the colonies following a series of critical decisions made by the British government after the end of the war with France in 1763. Two themes emerge from what was to be a fundamental change in British economic policy toward the American colonies. The first involved western land. With the acquisition from the French of the territory between the Allegheny Mountains and the Mississippi River the British decided to isolate the area from the rest of the colonies. Under the terms of the Proclamation of 1763 and the Quebec Act of 1774 colonists were not allowed to settle there or trade with the Indians without the permission of the British government. These actions nullified the claims to land in the area by a host of American colonies, individuals, and land companies. The essence of the policy was to maintain British control of the fur trade in the West by restricting settlement by the Americans.

Tax Policies

The second fundamental change involved taxation. The British victory over the French had come at a high price. Domestic taxes had been raised substantially during the war and total government debt had increased nearly twofold (Brewer, 1989). Furthermore, the British had decided in 1763 to place a standing army of 10,000 men in North America. The bulk of these forces were stationed in newly acquired territory to enforce its new land policy in the West. Forts were to be built which would become the new centers of trade with the Indians. The British decided that the Americans should share the costs of the military buildup in the colonies. The reason seemed obvious. Taxes were significantly higher in Britain than in the colonies. One estimate suggests the per capita tax burden in the colonies ranged from two to four percent of that in Britain (Palmer, 1959). It was time, in the British view, that the Americans begin to pay a larger share of the expenses of the empire.

Accordingly, a series of tax acts were passed by Parliament, the revenue from which was to be used to help pay for the standing army in America. The first was the Sugar Act of 1764. Proposed by England’s Prime Minister, the act lowered tariff rates on non-British products from the West Indies as well as strengthened their collection. It was hoped this would reduce the incentive for smuggling and thereby increase tariff revenue (Bullion, 1982). The following year Parliament passed the Stamp Act that imposed a tax commonly used in England. It required stamps for a broad range of legal documents as well as newspapers and pamphlets. While the colonial stamp duties were less than those in England they were expected to generate enough revenue to finance a substantial portion of the cost of the new standing army. The same year passage of the Quartering Act imposed essentially a tax in kind by requiring the colonists to provide British military units with housing, provisions, and transportation. In 1767 the Townshend Acts imposed tariffs upon a variety of imported goods and established a Board of Customs Commissioners in the colonies to collect the revenue.

Boycotts

American opposition to these acts was expressed initially in a variety of peaceful forms. While they did not have representation in Parliament, the colonists did attempt to exert some influence in it through petition and lobbying. However, it was the economic boycott that became by far the most effective means of altering the new British economic policies. In 1765 representatives from nine colonies met at the Stamp Act Congress in New York and organized a boycott of imported English goods. The boycott was so successful in reducing trade that English merchants lobbied Parliament for the repeal of the new taxes. Parliament soon responded to the political pressure. During 1766 it repealed both the Stamp and Sugar Acts (Johnson, 1997). In response to the Townshend Acts of 1767 a second major boycott started in 1768 in Boston and New York and subsequently spread to other cities leading Parliament in 1770 to repeal all of the Townshend duties except the one on tea. In addition, Parliament decided at the same time not to renew the Quartering Act.

With these actions taken by Parliament the Americans appeared to have successfully overturned the new British postwar tax agenda. However, Parliament had not given up what it believed to be its right to tax the colonies. On the same day it repealed the Stamp Act, Parliament passed the Declaratory Act stating the British government had the full power and authority to make laws governing the colonies in all cases whatsoever, including taxation. Policies, not principles, had been overturned.

The Tea Act

Three years after the repeal of the Townshend duties British policy was once again to emerge as an issue in the colonies. This time the American reaction was not peaceful. It all started when Parliament for the first time granted an exemption from the Navigation Acts. In an effort to assist the financially troubled British East India Company, Parliament passed the Tea Act of 1773, which allowed the company to ship tea directly to America. The grant of a major trading advantage to an already powerful competitor meant a potential financial loss for American importers and smugglers of tea. In December a small group of colonists responded by boarding three British ships in the Boston harbor and throwing overboard several hundred chests of tea owned by the East India Company (Labaree, 1964). Stunned by the events in Boston, Parliament decided not to cave in to the colonists as it had before. In rapid order it passed the Boston Port Act, the Massachusetts Government Act, the Justice Act, and the Quartering Act. Among other things these so-called Coercive or Intolerable Acts closed the port of Boston, altered the charter of Massachusetts, and reintroduced the demand for colonial quartering of British troops. Once done, Parliament went on to pass the Quebec Act as a continuation of its policy of restricting the settlement of the West.

The First Continental Congress

Many Americans viewed all of this as a blatant abuse of power by the British government. Once again a call went out for a colonial congress to sort out a response. On September 5, 1774 delegates appointed by the colonies met in Philadelphia for the First Continental Congress. Drawing upon the successful manner in which previous acts had been overturned, the first thing Congress did was to organize a comprehensive embargo of trade with Britain. It then conveyed to the British government a list of grievances that demanded the repeal of thirteen acts of Parliament. All of the acts listed had been passed after 1763, as the delegates had agreed not to question British policies made prior to the conclusion of the Seven Years War. Despite all the problems it had created, the Tea Act was not on the list. The reason for this was that Congress decided not to protest British regulation of colonial trade under the Navigation Acts. In short, the delegates were saying to Parliament: take us back to 1763 and all will be well.

The Second Continental Congress

What happened then was a sequence of events that led to a significant increase in the degree of American resistance to British policies. Before the Congress adjourned in October the delegates voted to meet again in May of 1775 if Parliament did not meet their demands. Confronted by the extent of the American demands, the British government decided it was time to impose a military solution to the crisis. Boston was occupied by British troops. In April a military confrontation occurred at Lexington and Concord. Within a month the Second Continental Congress was convened. Here the delegates decided to fundamentally change the nature of their resistance to British policies. Congress authorized a continental army and undertook the purchase of arms and munitions. To pay for all of this it established a continental currency. With previous political efforts by the First Continental Congress to form an alliance with Canada having failed, the Second Continental Congress took the extraordinary step of instructing its new army to invade Canada. In effect, these actions were those of an emerging nation-state. In October, as American forces closed in on Quebec, the King of England in a speech to Parliament declared that the colonists, having formed their own government, were now fighting for their independence. It was to be only a matter of months before Congress formally declared it.

Economic Incentives for Pursuing Independence: Taxation

Given the nature of British colonial policies, scholars have long sought to evaluate the economic incentives the Americans had in pursuing independence. In this effort economic historians initially focused on the period following the Seven Years War up to the Revolution. It turned out that making a case for the avoidance of British taxes as a major incentive for independence proved difficult. The reason was that many of the taxes imposed were later repealed. The actual level of taxation appeared to be relatively modest. After all, the Americans soon after adopting the Constitution taxed themselves at far higher rates than the British had prior to the Revolution (Perkins, 1988). Rather it seemed the incentive for independence might have been the avoidance of the British regulation of colonial trade. Unlike some of the new British taxes, the Navigation Acts had remained intact throughout this period.

The Burden of the Navigation Acts

One early attempt to quantify the economic effects of the Navigation Acts was by Thomas (1965). Building upon the previous work of Harper (1942), Thomas employed a counterfactual analysis to assess what would have happened to the American economy in the absence of the Navigation Acts. To do this he compared American trade under the Acts with that which would have occurred had America been independent following the Seven Years War. Thomas then estimated the loss of both consumer and producer surplus to the colonies as a result of shipping enumerated goods indirectly through England. These burdens were partially offset by his estimated value of the benefits of British protection and various bounties paid to the colonies. The outcome of his analysis was that the Navigation Acts imposed a net burden of less than one percent of colonial per capita income. From this he concluded the Acts were an unlikely cause of the Revolution. A long series of subsequent works questioned various parts of his analysis but not his general conclusion (Walton, 1971). The work of Thomas also appeared to be consistent with the observation that the First Continental Congress had not demanded in its list of grievances the repeal of either the Navigation Acts or the Sugar Act.
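The accounting behind Thomas's conclusion can be sketched in a few lines. The figures below are purely illustrative stand-ins, not Thomas's actual estimates; only the structure of the calculation (surplus losses net of protection benefits and bounties, expressed as a share of colonial income) follows his approach:

```python
# Sketch of Thomas's (1965) net-burden accounting for the Navigation Acts.
# All pound figures are illustrative assumptions, not Thomas's estimates.

def net_burden_share(surplus_loss, protection_benefit, bounties, total_income):
    """Net burden of the Acts as a share of colonial income."""
    net_burden = surplus_loss - (protection_benefit + bounties)
    return net_burden / total_income

# Hypothetical annual figures in pounds sterling for the colonies as a whole.
share = net_burden_share(
    surplus_loss=500_000,        # lost consumer + producer surplus
    protection_benefit=300_000,  # value of British naval protection
    bounties=100_000,            # bounties paid on colonial goods
    total_income=20_000_000,     # aggregate colonial income
)
print(f"net burden: {share:.2%} of colonial income")
```

With placeholder numbers of this rough shape, the net burden comes out well under one percent of income, which is the order of magnitude Thomas reported.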

American Expectations about Future British Policy

Did this mean then that the Americans had few if any economic incentives for independence? Upon further consideration economic historians realized that perhaps more important to the colonists were not the past and present burdens but rather the expected future burdens of continued membership in the British Empire. The Declaratory Act made it clear the British government had not given up what it viewed as its right to tax the colonists. This was despite the fact that up to 1775 the Americans had employed a variety of protest measures including lobbying, petitions, boycotts, and violence. The confluence of not having representation in Parliament while confronting an aggressive new British tax policy designed to raise their relatively low taxes may have made it reasonable for the Americans to expect a substantial increase in the level of taxation in the future (Gunderson, 1976, Reid, 1978). Furthermore, a recent study has argued that in 1776 not only did the future burdens of the Navigation Acts clearly exceed those of the past, but a substantial portion would have been borne by those who played a major role in the Revolution (Sawers, 1992). Seen in this light, the economic incentive for independence would have been avoiding the potential future costs of remaining in the British Empire.

The Americans Undertake a Revolution

1776-77

British Military Advantages

The American colonies had both strengths and weaknesses in terms of undertaking a revolution. The colonial population of well over two million was nearly one third of that in Britain (McCusker and Menard, 1985). The growth in the colonial economy had generated a remarkably high level of per capita wealth and income (Jones, 1980). Yet the hurdles confronting the Americans in achieving independence were indeed formidable. The British military had an array of advantages. With virtual control of the Atlantic, its navy could attack anywhere along the American coast at will and could provide logistical support for the army without much interference. A large core of experienced officers commanded a highly disciplined and well-drilled army in the large-unit tactics of eighteenth century European warfare. By these measures the American military would have great difficulty in defeating the British. Its navy was small. The Continental Army had relatively few officers proficient in large-unit military tactics. Lacking both the numbers and the discipline of its adversary, the American army was unlikely to be able to meet the British army on equal terms on the battlefield (Higginbotham, 1977).

British Financial Advantages

In addition, the British were in a better position than the Americans to finance a war. A tax system was in place that had provided substantial revenue during previous colonial wars. Also, for a variety of reasons the government had acquired an exceptional capacity to generate debt to fund wartime expenses (North and Weingast, 1989). For the Continental Congress the situation was much different. After declaring independence Congress had set about defining the institutional relationship between it and the former colonies. The powers granted to Congress were established under the Articles of Confederation. Reflecting the political environment, neither the power to tax nor the power to regulate commerce was given to Congress. Having no tax system to generate revenue also made it very difficult to borrow money. According to the Articles the states were to make voluntary payments to Congress for its war efforts. This precarious revenue system was to hamper funding by Congress throughout the war (Baack, 2001).

Military and Financial Factors Determine Strategy

It was within these military and financial constraints that the war strategies of the British and the Americans were developed. In terms of military strategies, both of the contestants realized that America was simply too large for the British army to occupy all of the cities and countryside. This being the case, the British decided initially that they would try to impose a naval blockade and capture major American seaports. Having already occupied Boston, the British during 1776 and 1777 took New York, Newport, and Philadelphia. With plenty of room to maneuver his forces and unable to match those of the British, George Washington chose to engage in a war of attrition. The purpose was twofold. First, by not engaging in an all-out offensive Washington reduced the probability of losing his army. Second, over time the British might tire of the war.

Saratoga

Frustrated without a conclusive victory, the British altered their strategy. During 1777 a plan was devised to cut off New England from the rest of the colonies, contain the Continental Army, and then defeat it. An army was assembled in Canada under the command of General Burgoyne and then sent south along the Hudson River. It was to link up with an army sent from New York City. Unfortunately for the British the plan totally unraveled as in October Burgoyne’s army was defeated at the battle of Saratoga and forced to surrender (Ketchum, 1997).

The American Financial Situation Deteriorates

With the victory at Saratoga the military side of the war had improved considerably for the Americans. However, the financial situation was seriously deteriorating. The states to this point had made no voluntary payments to Congress. At the same time the continental currency had to compete with a variety of other currencies for resources. The states were issuing their own individual currencies to help finance expenditures. Moreover the British in an effort to destroy the funding system of the Continental Congress had undertaken a covert program of counterfeiting the Continental dollar. These dollars were printed and then distributed throughout the former colonies by the British army and agents loyal to the Crown (Newman, 1957). Altogether this expansion of the nominal money supply in the colonies led to a rapid depreciation of the Continental dollar (Calomiris, 1988, Michener, 1988). Furthermore, inflation may have been enhanced by any negative impact upon output resulting from the disruption of markets along with the destruction of property and loss of able-bodied men (Buel, 1998). By the end of 1777 inflation had reduced the specie value of the Continental to about twenty percent of what it had been when originally issued. This rapid decline in value was becoming a serious problem for Congress in that up to this point almost ninety percent of its revenue had been generated from currency emissions.
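The fiscal arithmetic described above can be sketched as follows. The emission amount is a hypothetical placeholder; the roughly twenty-percent specie value by the end of 1777 is the figure from the text:

```python
# Sketch of how depreciation eroded the revenue Congress obtained from
# currency emissions. The emission amount is a hypothetical assumption;
# the 20% specie value follows the text.

def real_revenue(nominal_emission, specie_value_per_dollar):
    """Specie (real) value obtained from a nominal currency emission."""
    return nominal_emission * specie_value_per_dollar

# At issue, a Continental dollar traded near par with specie (1.0);
# by the end of 1777 it retained about 0.20 of its original specie value.
emission = 1_000_000  # hypothetical nominal dollars printed
print(real_revenue(emission, 1.0))   # real value if spent at par
print(real_revenue(emission, 0.20))  # real value after depreciation
```

Since almost ninety percent of congressional revenue came from emissions, a four-fifths collapse in the specie value of each new dollar printed meant the funding system itself was failing, not just the currency.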

1778-83

British Invasion of the South

The British defeat at Saratoga had a profound impact upon the nature of the war. The French government, still upset by its defeat by the British in the Seven Years War and encouraged by the American victory, signed a treaty of alliance with the Continental Congress in early 1778. Fearing a new war with France, the British government sent a commission to negotiate a peace treaty with the Americans. The commission offered to repeal all of the legislation applying to the colonies passed since 1763. Congress rejected the offer. The British response was to give up its efforts to suppress the rebellion in the North and in turn organize an invasion of the South. The new southern campaign began with the taking of the port of Savannah in December. Pursuing their southern strategy the British won major victories at Charleston and Camden during the spring and summer of 1780.

Worsening Inflation and Financial Problems

As the American military situation deteriorated in the South so did the financial circumstances of the Continental Congress. Inflation continued as Congress and the states dramatically increased the rate of issuance of their currencies. At the same time the British continued to pursue their policy of counterfeiting the Continental dollar. In order to deal with inflation some states organized conventions for the purpose of establishing wage and price controls (Rockoff, 1984). With its currency rapidly depreciating in value Congress increasingly relied on funds from other sources such as state requisitions, domestic loans, and French loans of specie. As a last resort Congress authorized the army to confiscate property.

Yorktown

Fortunately for the Americans, the British military effort collapsed before the funding system of Congress did. In a combined effort during the fall of 1781 French and American forces trapped the British southern army under the command of Cornwallis at Yorktown, Virginia. Under siege by superior forces the British army surrendered on October 19. The British government had now suffered not only the defeat of its northern strategy at Saratoga but also the defeat of its southern campaign at Yorktown. Following Yorktown, Britain suspended its offensive military operations against the Americans. The war was over. All that remained was the political maneuvering over the terms for peace.

The Treaty of Paris

The Revolutionary War officially concluded with the signing of the Treaty of Paris in 1783. Under the terms of the treaty the United States was granted independence and British troops were to evacuate all American territory. While commonly viewed by historians through the lens of political science, the Treaty of Paris was indeed a momentous economic achievement by the United States. The British ceded to the Americans all of the land east of the Mississippi River which they had taken from the French during the Seven Years War. The West was now available for settlement. To the extent the Revolutionary War had been undertaken by the Americans to avoid the costs of continued membership in the British Empire, the goal had been achieved. As an independent nation the United States was no longer subject to the regulations of the Navigation Acts. There was no longer to be any economic burden from British taxation.

The Formation of a National Government

When you start a revolution you have to be prepared for the possibility you might win. This means being prepared to form a new government. When the Americans declared independence their experience of governing at a national level was indeed limited. In 1765 delegates from various colonies had met for about eighteen days at the Stamp Act Congress in New York to sort out a colonial response to the new stamp duties. Nearly a decade passed before delegates from colonies once again got together to discuss a colonial response to British policies. This time the discussions lasted seven weeks at the First Continental Congress in Philadelphia during the fall of 1774. The primary action taken at both meetings was an agreement to boycott trade with England. After having been in session only a month, delegates at the Second Continental Congress for the first time began to undertake actions usually associated with a national government. However, when the colonies were declared to be free and independent states Congress had yet to define its institutional relationship with the states.

The Articles of Confederation

Following the Declaration of Independence, Congress turned to deciding which political and economic powers it would be given and which would be granted to the states. After more than a year of debate among the delegates, the allocation of powers was articulated in the Articles of Confederation. Only Congress would have the authority to declare war and conduct foreign affairs, but it was given no power to tax or regulate commerce. The expenses of Congress were to be paid from a common treasury with funds supplied by the states, which were to raise the revenue through their own internal taxes. It was not until November of 1777 that Congress approved the final draft of the Articles, and it took over three years more for the states to ratify them. The primary reason for the delay was a dispute over control of land in the West, since some states had claims while others did not. Those states with claims eventually agreed to cede them to Congress, and the Articles were ratified and put into effect on March 1, 1781, just a few months before the American victory at Yorktown. The process of institutional development had proved so difficult that the Americans fought almost the entire Revolutionary War with a government not sanctioned by the states.

Difficulties in the 1780s

The new national government that emerged from the Revolution confronted a host of issues during the 1780s. The first major one addressed by Congress was what to do with all of the land acquired in the West. Starting in 1784 Congress passed a series of land ordinances that provided for land surveys, sales of land to individuals, and the institutional foundation for the creation of new states. These ordinances opened the West for settlement. While this was a major accomplishment by Congress, other issues remained unresolved. Having repudiated its own currency and lacking the power of taxation, Congress had no independent source of revenue with which to pay off the domestic and foreign debts incurred during the war. Since the Continental Army had been demobilized, no protection was being provided for settlers in the West or against foreign invasion. Domestic trade was increasingly disrupted during the 1780s as more states began to impose tariffs on goods from other states. Unable to resolve these and other issues, Congress endorsed a proposed plan to hold a convention in Philadelphia in May of 1787 to revise the Articles of Confederation.

Rather than amend the Articles, the delegates to the convention voted to replace them entirely with a new form of national government under the Constitution. There are of course many ways to assess the significance of this truly remarkable achievement. One is to view the Constitution as an economic document. Among other things the Constitution specifically addressed many of the economic problems that confronted Congress during and after the Revolutionary War. Drawing upon lessons learned in financing the war, no state under the Constitution would be allowed to coin money or issue bills of credit. Only the national government could coin money and regulate its value. Punishment was to be provided for counterfeiting. The problems associated with the states contributing to a common treasury under the Articles were overcome by giving the national government the coercive power of taxation. Part of the revenue was to be used to pay for the common defense of the United States. No longer would states be allowed to impose tariffs as they had done during the 1780s. The national government was now given the power to regulate both foreign and interstate commerce. As a result the nation was to become a common market. There is a general consensus among economic historians today that the economic significance of the ratification of the Constitution was to lay the institutional foundation for long run growth. From the point of view of the former colonists, however, it meant they had succeeded in transferring the power to tax and regulate commerce from Parliament to the new national government of the United States.

TABLES
Table 1 Continental Dollar Emissions (1775-1779)

Year of Emission | Nominal Dollars Emitted (000) | Annual Emission as Share of Total Nominal Stock Emitted | Specie Value of Annual Emission (000) | Annual Emission as Share of Total Specie Value Emitted
1775 | $6,000 | 3% | $6,000 | 15%
1776 | 19,000 | 8 | 15,330 | 37
1777 | 13,000 | 5 | 4,040 | 10
1778 | 63,000 | 26 | 10,380 | 25
1779 | 140,500 | 58 | 5,270 | 13
Total | $241,500 | 100% | $41,020 | 100%

Source: Bullock (1895), 135.
Table 2 Currency Emissions by the States (1775-1781)

Year of Emission | Nominal Dollars Emitted (000) | Year of Emission | Nominal Dollars Emitted (000)
1775 | $4,740 | 1778 | $9,118
1776 | 13,328 | 1779 | 17,613
1777 | 9,573 | 1780 | 66,813
 | | 1781 | 123,376
Total | $27,641 | Total | $216,376

Source: Robinson (1969), 327-28.

References

Baack, Ben. “Forging a Nation State: The Continental Congress and the Financing of the War of American Independence.” Economic History Review 54, no.4 (2001): 639-56.

Brewer, John. The Sinews of Power: War, Money and the English State, 1688-1783. London: Cambridge University Press, 1989.

Buel, Richard. In Irons: Britain’s Naval Supremacy and the American Revolutionary Economy. New Haven: Yale University Press, 1998.

Bullion, John L. A Great and Necessary Measure: George Grenville and the Genesis of the Stamp Act, 1763-1765. Columbia: University of Missouri Press, 1982.

Bullock, Charles J. “The Finances of the United States from 1775 to 1789, with Especial Reference to the Budget.” Bulletin of the University of Wisconsin 1, no. 2 (1895): 117-273.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental.” Journal of Economic History 48, no. 1 (1988): 47-68.

Egnal, Mark. A Mighty Empire: The Origins of the American Revolution. Ithaca: Cornell University Press, 1988.

Ferguson, E. James. The Power of the Purse: A History of American Public Finance, 1776-1790. Chapel Hill: University of North Carolina Press, 1961.

Gunderson, Gerald. A New Economic History of America. New York: McGraw-Hill, 1976.

Harper, Lawrence A. “Mercantilism and the American Revolution.” Canadian Historical Review 23 (1942): 1-15.

Higginbotham, Don. The War of American Independence: Military Attitudes, Policies, and Practice, 1763-1789. Bloomington: Indiana University Press, 1977.

Jensen, Merrill, editor. English Historical Documents: American Colonial Documents to 1776. New York: Oxford University Press, 1969.

Johnson, Allen S. A Prologue to Revolution: The Political Career of George Grenville (1712-1770). New York: University Press, 1997.

Jones, Alice H. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Ketchum, Richard M. Saratoga: Turning Point of America’s Revolutionary War. New York: Henry Holt and Company, 1997.

Labaree, Benjamin Woods. The Boston Tea Party. New York: Oxford University Press, 1964.

Mackesy, Piers. The War for America, 1775-1783. Cambridge: Harvard University Press, 1964.

McCusker, John J. and Russell R. Menard. The Economy of British America, 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Michener, Ron. “Backing Theories and the Currencies of Eighteenth-Century America: A Comment.” Journal of Economic History 48, no. 3 (1988): 682-692.

Nester, William R. The First Global War: Britain, France, and the Fate of North America, 1756-1775. Westport: Praeger, 2000.

Newman, E. P. “Counterfeit Continental Currency Goes to War.” The Numismatist 1 (January, 1957): 5-16.

North, Douglass C., and Barry R. Weingast. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49, no. 4 (1989): 803-32.

O’Shaughnessy, Andrew Jackson. An Empire Divided: The American Revolution and the British Caribbean. Philadelphia: University of Pennsylvania Press, 2000.

Palmer, R. R. The Age of Democratic Revolution: A Political History of Europe and America. Vol. 1. Princeton: Princeton University Press, 1959.

Perkins, Edwin J. The Economy of Colonial America. New York: Columbia University Press, 1988.

Reid, Joseph D., Jr. “Economic Burden: Spark to the American Revolution?” Journal of Economic History 38, no. 1 (1978): 81-100.

Robinson, Edward F. “Continental Treasury Administration, 1775-1781: A Study in the Financial History of the American Revolution.” Ph.D. diss., University of Wisconsin, 1969.

Rockoff, Hugh. Drastic Measures: A History of Wage and Price Controls in the United States. Cambridge: Cambridge University Press, 1984.

Sawers, Larry. “The Navigation Acts Revisited.” Economic History Review 45, no. 2 (1992): 262-84.

Thomas, Robert P. “A Quantitative Approach to the Study of the Effects of British Imperial Policy on Colonial Welfare: Some Preliminary Findings.” Journal of Economic History 25, no. 4 (1965): 615-38.

Tucker, Robert W. and David C. Hendrickson. The Fall of the First British Empire: Origins of the War of American Independence. Baltimore: Johns Hopkins Press, 1982.

Walton, Gary M. “The New Economic History and the Burdens of the Navigation Acts.” Economic History Review 24, no. 4 (1971): 533-42.

Citation: Baack, Ben.  “The Economics of the American Revolutionary War.” EH.Net Encyclopedia, edited by Robert Whaples. October, 2001. URL https://eh.net/encyclopedia/the-economics-of-the-american-revolutionary-war-2/

 

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers, typically over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity: its annual fatality rate is about nine for every one hundred thousand miners employed, whereas in 1900 about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged the use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job, or their heirs, might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken about 1900 showed that only about half of all workers fatally injured recovered anything, and that their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult, and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century the dangers worsened (see Table 1).5

Table 1

British and American Mine Safety, 1890-1904

(Fatality rates per Thousand Workers per Year)

Years American Anthracite American Bituminous Great Britain
1890-1894 3.29 2.52 1.61
1900-1904 3.13 3.53 1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth-century American railroads were also comparatively dangerous to their workers (and their passengers as well), and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers, and freight was far more dangerous to workers than passenger traffic, for men had to go between moving cars for coupling and uncoupling and ride the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American lines were cheaply built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2

Comparative Safety of British and American Railroad Workers, 1889 – 1901

(Fatality Rates per Thousand Workers per Year)

Category | 1889 | 1895 | 1901
British railroad workers, all causes | 1.14 | 0.95 | 0.89
British trainmen,a all causes | 4.26 | 3.22 | 2.21
British trainmen, coupling | 0.94 | 0.83 | 0.74
American railroad workers, all causes | 2.67 | 2.31 | 2.50
American trainmen, all causes | 8.52 | 6.45 | 7.35
American trainmen, coupling | 1.73c | 1.20 | 0.78
American trainmen, brakingb | 3.25c | 2.44 | 2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.


Note: Death rates are per thousand employees.

a. Guards, brakemen, and shunters.

b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increased output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual, and, as noted, safety actually deteriorated after the Civil War. Factory commissions date from this era as well, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s, as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly formed Interstate Commerce Commission (ICC) published its first accident statistics, which demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Federal Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became a matter of increased public concern, and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893, and after 1900 they campaigned for more of the same. In response Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany and was impressed, he said, by how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and began the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing, and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and the National Bureau of Standards provided scientific support, while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs and the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies such as DuPont, and in whole industries such as steel making (see Table 3), safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but reduced the dangers from power transmission. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3

Steel Industry Fatality and Injury Rates, 1910-1939

(Rates are per million manhours)

Period Fatality rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect on them. Underground coal mining accidents also showed only modest improvement. Safety was expensive in coal, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, six disastrous blasts in 1940 that killed 276 men finally led to federal mine inspection in 1941.16

Table 4

Work Injury Rates, Manufacturing and Coal Mining, 1926-1970

(Per Million Manhours)


Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970, while the continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA). The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine': the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: the Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850-World War I.” Bulletin of the History of Medicine 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London: HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Daley, Judy. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan, 1961.

Trachenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be .0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2,000 hours, ten injuries among 450 workers results in [10/(450×2,000)]×1,000,000 = 11.1 injuries per million hours worked.
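The footnote's conversions are simple enough to sketch in a few lines of Python (an illustration only; the function names are mine, not part of the original text):

```python
def injury_rate_per_thousand(injuries, workers):
    """Injuries per thousand workers per year."""
    return injuries / workers * 1_000

def injury_rate_per_million_hours(injuries, workers, hours_per_worker=2_000):
    """Injuries per million hours worked, assuming an average work year."""
    return injuries / (workers * hours_per_worker) * 1_000_000

# The footnote's example: ten injuries among 450 workers.
print(round(injury_rate_per_thousand(10, 450), 1))       # → 22.2 per thousand workers
print(round(injury_rate_per_million_hours(10, 450), 1))  # → 11.1 per million hours
```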

2 For statistics on work injuries from 1922-1970 see U.S. Department of Commerce, Historical Statistics, Series 1029-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System. Rosenberg, Technology. Aldrich, Safety First.

4 On the workings of the employers’ liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun, Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First chapter 1.

7 Aldrich, Safety First chapter 3

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism,” and Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology, see White, American Railroad Freight Car; Usselman, "Air Brakes for Freight Trains"; and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal-Mining Safety; Aldrich, "'The Needless Peril.'"

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, "From Exit to Voice."

16 Aldrich, "'The Needless Peril,'" and Humphrey, "Historical Summary."

17 Derickson, "Participative Regulation," and Fairris, "Institutional Change," also emphasize the role of union and shop-floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety (Cambridge: MIT Press, 1979), and Viscusi, Risk by Choice.

An Economic History of Weather Forecasting

Erik D. Craft, University of Richmond

Introduction

The United States Congress established a national weather organization in 1870 when it instructed the Secretary of War to organize the collection of meteorological observations and the forecasting of storms on the Great Lakes and Atlantic Seaboard. Large shipping losses on the Great Lakes during the 1868 and 1869 seasons, growing acknowledgement that storms generally traveled from west to east, a telegraphic network that extended west of the Great Lakes and the Atlantic Seaboard, and an eager Army officer promising military discipline are credited with convincing Congress that a storm-warning system was feasible. The United States Army Signal Service weather organization immediately dwarfed its European counterparts in budget and geographical size and shortly thereafter created storm warnings that on the Great Lakes alone led to savings in shipping losses that exceeded the entire network's expenses.

Uses of Weather Information

Altering Immediate Behavior

The most obvious use of weather information is to change behavior in response to expected weather outcomes. The motivating force behind establishing weather organizations in England, France, Germany, and the United States was to provide warnings to ships of forthcoming storms, so that the ships might remain in harbor. But it soon became obvious that agricultural and commercial interests would benefit from weather forecasts as well. Farmers could protect fruit sensitive to freezes, and shippers could limit spoilage of produce while en route. Beyond preparation for severe weather, weather forecasts are now created for ever more specialized activities: implementing military operations, scheduling operation of power generation facilities, routing aircraft safely and efficiently, planning professional sports teams’ strategies, estimating demand for commodities sensitive to weather outcomes, planning construction projects, and optimizing the use of irrigation and reservoir systems’ resources.

Applying Climatological Knowledge

Climatological data can be used to match crop varieties, construction practices, and other activities appropriately to different regions. For example, in 1947 the British Government planned to grow groundnuts on 3.2 million acres in East and Central Africa. The groundnut was chosen because it was suited to the average growing conditions of the chosen regions. But due to a lack of understanding of the variance in the amount and timing of rainfall, the project was abandoned after five years, initial capital outlays of 24 million British pounds, and annual operating costs of 7 million pounds. The preparation of ocean wind and weather charts in the 1850s by Matthew Fontaine Maury, Superintendent of the U.S. Navy's Depot of Charts and Instruments, identified better routes for vessels sailing between America and Europe and from the United States East Coast to the West Coast. The reduced sailing durations are alleged to have saved millions of dollars annually. Climatological data can also be used in modern environmental forecasts of air quality and of how pollution is dispersed in the air. There are even forensic meteorologists who specialize in identifying weather conditions at a given point in time for accident investigations and subsequent litigation. Basic climatological information is also one reason why the United States cinema industry became established in Southern California: it was known that a high percentage of all days were sunny, so outdoor filming would not be delayed.

Smoothing Consumption of Weather-Sensitive Commodities

An indirect use of weather forecasts and subsequent weather occurrences is their influence on the prices of commodities that are affected by weather outcomes. Knowledge that growing conditions will be poor or have been poor leads to expectations of a smaller crop harvest. This causes expected prices of the crop to rise, thereby slowing consumption. This is socially efficient, since the present inventory and the now smaller future harvest will have to be consumed more slowly over the period until the next season's crop can be planted, cultivated, and harvested. Without an appropriate rise in price after bad weather outcomes, an excessive depletion of the crop's inventory could result, leading to more variability in the consumption path of the commodity. People generally prefer consuming their income and individual products in relatively smooth streams, rather than in large amounts in some periods and small amounts in others. Both improved weather forecasts and United States Department of Agriculture crop forecasts help buyers more effectively consume a given quantity of a crop.
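The smoothing argument can be illustrated with a stylized numerical sketch (all quantities invented for illustration): a correct early forecast of a short harvest slows consumption at once, while uninformed buyers deplete the stock early and must cut back sharply later.

```python
# Stylized consumption paths for a short harvest of 90 units over 12 months.
harvest = 90                 # actual short crop (a normal year yields 120)
months = 12
normal_rate = 120 / months   # consumption rate in a normal year

# With an accurate forecast, prices rise immediately and consumption
# is spread evenly over the whole year.
informed = [harvest / months] * months

# Without the forecast, buyers consume at the normal rate for six months,
# then must stretch the remaining inventory over the rest of the year.
remaining = harvest - normal_rate * 6
uninformed = [normal_rate] * 6 + [remaining / 6] * 6

print(informed[0], uninformed[0], uninformed[-1])  # 7.5 vs. 10.0, then 5.0
```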

The History of Weather Forecasts in the United States

An important economic history question is whether or not it was necessary for the United States Federal Government to found a weather forecasting organization. There are two challenges in answering that question: establishing that the weather information was socially valuable and determining whether private organizations were incapable of providing the appropriate level of services. Restating the latter issue, did weather forecasts and the gathering of climatological information possess enough attributes of a public good that private organizations would create an insufficiently large amount of socially beneficial information? There are two parts to this public good problem: nonexcludability and nonrivalry. Could private producers of weather information create a system whereby they earned enough money from users of weather information to cover the costs of creating the information? Would such a weather system be of the socially optimal size?

Potential Organizational Sources of Weather Forecasts

There were many organizations during the 1860s that an observer might imagine would benefit from the creation of weather forecasts. After the consolidation of most telegraphic service in the United States into Western Union in 1866, an organization with employees throughout the country existed. The Associated Press had a weather-reporting network, but there is no evidence that it considered supplementing its data with forecasts. One Ebenezer E. Merriam began supplying New York newspapers with predictions in 1856. Many years later, astronomer turned Army Signal Service forecaster Cleveland Abbe concluded that Merriam made his predictions using newspaper weather reports. The Chicago Board of Trade declined an invitation in 1869 to support a weather forecasting service based in Cincinnati. Neither ship-owners nor marine insurers appear to have expressed any interest in creating or buying weather information. Great Lakes marine insurers had already overcome organizational problems by forming the Board of Lake Underwriters in 1855. For example, the group incurred expenses of over $11,000 in 1861 inspecting vessels and providing ratings on behalf of its members in the annual Lake Vessel Register. The Board of Lake Underwriters even had nine inspectors distributed on the Great Lakes to inspect wrecks on behalf of its members. Although there was evidence that storms generally traveled from west to east, none of these groups apparently expected the benefits to itself to exceed the costs of establishing the network necessary to provide useful weather information.

Cleveland Abbe at the Cincinnati Observatory began the most serious attempt to establish a quasi-private meteorological organization in 1868 when he sought financial support from the Associated Press, Western Union, local newspapers, and the Cincinnati Chamber of Commerce. His initial plan included a system of one hundred reporting stations with the Associated Press covering the $100 instrument costs at half of the stations and the dispatch costs. In the following year, he widened his scope to include the Chicago Board of Trade and individual subscribers and proposed a more limited network of between sixteen and twenty-two stations. The Cincinnati Chamber of Commerce, whose president published the Cincinnati Commercial, funded the experiment from September through November of 1869. Abbe likely never had more than ten observers report on any given day and could not maintain more than about thirty local subscribers for his service, which provided at most only occasional forecasts. Abbe continued to receive assistance from Western Union in the collection and telegraphing of observations after the three-month trial, but he fell short in raising funds to allow the expansion of his network to support weather forecasts. His ongoing “Weather Bulletin of the Cincinnati Observatory” was not even published in the Cincinnati Commercial.

Founding of the Army Signal Service Weather Organization

Just as the three-month trial of Abbe's weather bulletin concluded, Increase A. Lapham, a Milwaukee natural scientist, distributed his second list of Great Lakes shipping losses, entitled "Disaster on the Lakes." The list included 1,164 vessel casualties, 321 deaths, and $3.1 million in property damaged in 1868, and 1,914 vessel casualties, 209 lives lost, and $4.1 million in financial losses in 1869. The number of ships totally destroyed was 105 and 126 in each year, respectively. According to a separate account, the storm of November 16-19, 1869 alone destroyed vessels whose value exceeded $420,000. Lapham's list of losses included a petition to establish a weather forecasting service. In 1850, he had prepared a similar proposal alongside a list of shipping losses, and twice during the 1850s he had tracked barometric lows across Wisconsin to provide evidence that storms could be forecast.

Recipients of Lapham’s petitions included the Wisconsin Academy of Sciences, the Chicago Academy of Sciences, the National Board of Trade meeting in Richmond, a new Chicago monthly business periodical entitled The Bureau, and Congressman Halbert E. Paine of Milwaukee. Paine had studied meteorological theories under Professor Elias Loomis at Western Reserve College and would introduce storm-warning service bills and eventually the final joint resolution in the House that gave the Army Signal Service storm-warning responsibilities. In his book Treatise on Meteorology (1868), Loomis claimed that the approach of storms to New York could be predicted reliably given telegraphic reports from several locations in the Mississippi Valley. From December 1869 through February 1870, Lapham’s efforts received wider attention. The Bureau featured nine pieces on meteorology from December until March, including at least two by Lapham.

Following the Civil War, the future of a signaling organization in the Army was uncertain. Having had budget requests for telegraph and signal equipment for years 1870 and 1871 cut in half to $5,000, Colonel Albert J. Myer, Chief Signal Officer, led a small organization seeking a permanent existence. He visited Congressman Paine's office in December of 1869 with maps showing proposed observation stations throughout the United States. Myer's eagerness for the weather responsibilities, as well as the discipline of the Army organization and a network of military posts in the West, many linked via telegraph, would appear to have made the Army Signal Service a natural choice. The marginal costs of an Army weather organization using Signal Service personnel included only instruments and commercial telegraphy expenses. On February 4, 1870, Congress approved the joint resolution, which "authorizes and requires the Secretary of War to provide for taking of meteorological observations . . . and for giving notice on the northern lakes and on the sea-coast of the approach and force of storms." Five days later, President Grant signed the bill.

Expansion of the Army Signal Service’s Weather Bureau

Observer-sergeants in the Signal Service recorded their first synchronous observations at 7:35 a.m. Washington time on November 1, 1870, at twenty-four stations. The storm-warning system began formal operation October 23, 1871, with potential flag displays at eight ports on the Great Lakes and sixteen ports on the Atlantic Seaboard. At that time, only fifty general observation stations existed. Already by June 1872, Congress had expanded the Army Signal Service's explicit forecast responsibilities via an appropriations act to most of the United States, "for such stations, reports, and signals as may be found necessary for the benefit of agriculture and commercial interests." In 1872, the Signal Service also began publication of the Weekly Weather Chronicle during the growing seasons. It disappeared in 1877, reemerging in 1887 as the Weather Crop Bulletin. As the fall of 1872 began, confidence in the utility of weather information was so high that 89 agricultural societies and 38 boards of trade and chambers of commerce had appointed meteorological committees to communicate with the Army Signal Service. In addition to dispensing general weather forecasts for regions of the country three times a day, the Signal Service soon sent special warnings to areas in danger of cold waves and frosts.

The original method of warning ships of dangerous winds was hoisting a single red flag with a black square located in the middle. This was known as a cautionary signal, and Army personnel at Signal Service observation stations or civilians at display stations would raise the flag on a pole “whenever the winds are expected to be as strong as twenty-five miles per hour, and to continue so for several hours, within a radius of one hundred miles from the station.” In the first year of operation ending 1 September 1872, 354 cautionary signals were flown on both the Great Lakes and the Atlantic Seaboard, approximately 70% of which were verified as having met the above definition. Such a measure of accuracy is incomplete, however, as it can always be raised artificially by not forecasting storms under marginal conditions, even though such a strategy might diminish the value of the service.
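The incompleteness of that verification measure can be made concrete with a small sketch (the 354-warning figure is from the text; all other numbers are invented): a service that warns only on near-certain storms scores a higher verification share while detecting far fewer of the storms that actually occur.

```python
# Two simple forecast-verification measures. "share_verified" is the
# statistic quoted above; "prob_of_detection" penalizes missed storms.

def verification_stats(warnings_issued, warnings_verified, storms_occurred):
    share_verified = warnings_verified / warnings_issued
    prob_of_detection = warnings_verified / storms_occurred
    return share_verified, prob_of_detection

# A bold service: many warnings, ~70% verified, few storms missed.
bold = verification_stats(warnings_issued=354, warnings_verified=248,
                          storms_occurred=260)

# A timid service: warns only on sure things -- a higher verification
# share, but most storms arrive unannounced.
timid = verification_stats(warnings_issued=60, warnings_verified=57,
                           storms_occurred=260)

print(bold)    # roughly (0.70, 0.95)
print(timid)   # roughly (0.95, 0.22)
```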

The United States and Canada shared current meteorological information beginning in 1871. By 1880, seventeen Canadian stations reported meteorological data to the United States at least twice daily by telegraph. The number of Army Signal Service stations providing telegraphic reports three times a day stabilized at 138 stations in 1880, dipped to 121 stations in 1883, and grew to approximately 149 stations by 1888. (See Table 1 for a summary of the growth of the Army Signal Service meteorological network from 1870 to 1890.) Additional display stations provided only storm warnings at sea and lake ports. River stations monitored water levels in order to forecast floods. Special cotton-region stations, beginning in 1883, comprised a dense network of daily reporters of rainfall and maximum and minimum temperatures. Total Army Signal Service expenditures grew from a $15,000 supplemental appropriation for weather operations in fiscal year 1870 to about one million dollars for all Signal Service costs around 1880 and stabilized at that level. Figure 1 shows the geographical extent of the Army Signal Service telegraphic observation network in 1881.

Figure 1: Army Signal Service Observation Network in 1881
Source: Map between pages 250-51, Annual Report of the Chief Signal Officer, October 1, 1881, Congressional Serial Set Volume 2015. See the detailed map between pages 304-05 for the location of each of the different types of stations listed in Table 1.

Table 1: Growth of the United States Army Signal Service Meteorological Network

| Year | Budget (Real 1880 Dollars) | Stations of the Second Order | Stations of the Third Order | Repair Stations | Display Stations | Special River Stations | Special Cotton-Region Stations |
|------|------|------|------|------|------|------|------|
| 1870 | 32,487 | 25 | | | | | |
| 1871 | 112,456 | 54 | | | | | |
| 1872 | 220,269 | 65 | | | | | |
| 1873 | 549,634 | 80 | | | | 9 | |
| 1874 | 649,431 | 92 | | | | 20 | |
| 1875 | 749,228 | 98 | | | | 20 | |
| 1876 | 849,025 | 106 | 38 | | | 23 | |
| 1877 | 849,025 | 116 | 29 | 10 | 9 | 23 | |
| 1878 | 978,085 | 136 | 36 | 12 | 11 | 23 | |
| 1879 | 1,043,604 | 158 | 30 | 17 | 46 | 30 | |
| 1880 | 1,109,123 | 173 | 39 | 49 | 50 | 29 | |
| 1881 | 1,080,254 | 171 | 47 | 44 | 61 | 29 | 87 |
| 1882 | 937,077 | 169 | 45 | 3 | 74 | 30 | 127 |
| 1883 | 950,737 | 143 | 42 | 27 | 7 | 30 | 124 |
| 1884 | 1,014,898 | 138 | 68 | 7 | 63 | 40 | 138 |
| 1885 | 1,085,479 | 152 | 58 | 8 | 64 | 66 | 137 |
| 1886 | 1,150,673 | 146 | 33 | 11 | 66 | 69 | 135 |
| 1887 | 1,080,291 | 145 | 31 | 13 | 63 | 70 | 133 |
| 1888 | 1,063,639 | 149 | 30 | 24 | 68 | 78 | 116 |
| 1889 | 1,022,031 | 148 | 32 | 23 | 66 | 72 | 114 |
| 1890 | 994,629 | 144 | 34 | 15 | 73 | 72 | 114 |

Sources: Report of the Chief Signal Officer: 1888, p. 171; 1889, p. 136; 1890, p. 203; and Craft (1995), "Provision and Value of Weather Information Services," p. 34.

Notes: The actual total budgets for years 1870 through 1881 are estimated. Stations of the second order recorded meteorological conditions three times per day; most immediately telegraphed the data. Stations of the third order recorded observations at sunset. Repair stations maintained Army telegraph lines. Display stations displayed storm warnings on the Great Lakes and Atlantic Seaboard. Special river stations monitored water levels in order to forecast floods. Special cotton-region stations collected high temperature, low temperature, and precipitation data from a denser network of observation locations.

Early Value of Weather Information

Budget reductions in the Army Signal Service's weather activities in 1883 led to the reduction of fall storm-warning broadcast locations on the Great Lakes from 80 in 1882 to 43 in 1883. This one-year drop in the availability of storm warnings creates a special opportunity to measure the value of warnings of extremely high winds on the Great Lakes (see Figure 2). Many other factors can be expected to affect the value of shipping losses on the Great Lakes: the level of commerce in a given season, the amount of shipping tonnage available to haul a season's commerce, the relative composition of the tonnage (steam versus sail), the severity of the weather, and long-term trends in technological change or safety. Using a statistical technique known as multiple regression, in which the effects of these many factors on shipping losses are analyzed concurrently, Craft (1998) argued that each extra storm-warning location on the Great Lakes lowered losses by about one percent. This implies that the storm-warning system reduced losses on the Great Lakes by approximately one million dollars annually in the mid-1870s and between $1 million and $4.5 million per year by the early 1880s.
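The approach can be sketched schematically (synthetic data; only the regression technique, not the numbers, mirrors Craft (1998)): regress log shipping losses on the number of warning locations while controlling for other factors.

```python
# Multiple-regression sketch on synthetic data: log losses fall by an
# assumed 1 percent per additional storm-warning location, holding
# tonnage and storm severity constant. All series below are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 25                                  # hypothetical seasons
warn_locs = rng.integers(20, 90, n)     # storm-warning locations
tonnage = rng.normal(100, 10, n)        # shipping tonnage in service
storms = rng.poisson(12, n)             # severe storms per season

log_losses = (14 - 0.01 * warn_locs + 0.005 * tonnage
              + 0.03 * storms + rng.normal(0, 0.05, n))

# Ordinary least squares: the coefficient on warn_locs should come out
# close to the assumed -0.01 (one percent per location).
X = np.column_stack([np.ones(n), warn_locs, tonnage, storms])
beta, *_ = np.linalg.lstsq(X, log_losses, rcond=None)
print(round(beta[1], 3))
```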

Source: The data are found in the following: Chicago Daily Inter Ocean (December 5, 1874, p. 2; December 18, 1875; December 27, 1876, p. 6; December 17, 1878; December 29, 1879, p. 6; February 3, 1881, p. 12; December 28, 1883, p. 3; December 5, 1885, p. 4); Marine Record (December 27, 1883, p. 5; December 25, 1884, pp. 4-5; December 24, 1885, pp. 4-5; December 30, 1886, p. 6; December 15, 1887, pp. 4-5); and Annual Report of the Chief Signal Officer, 1871-1890.

Note: Series E 52 of the Historical Statistics of the United States (U.S. Bureau of the Census, 1975) was used to adjust all values to real 1880 dollars.

There are additional indirect methods with which to confirm the preceding estimate of the value of early weather information. If storm warnings actually reduced the risk of damage to cargo and ships due to bad weather, then the cost of shipping cargo would be expected to decline. In particular, such reductions in shipping prices due to savings in losses caused by storms can be differentiated from other types of technological improvements by studying how fall shipping prices changed relative to summer shipping prices. It was during the fall that ships were particularly vulnerable to accidents caused by storms. Changes in shipping prices of grain from Chicago to Buffalo during the summers and falls from the late 1860s to the late 1880s imply that storm warnings were valuable and are consistent with the more direct method of estimating reductions in shipping losses. Although marine insurance premia data for shipments on the Great Lakes are limited and difficult to interpret due to the waning and waxing of the insurance cartel's cohesion, such data are also supportive of the overall interpretation.

Given Army Signal Service budgets of about one million dollars for providing meteorological services to the entire United States, a reasonable lower bound for the rate of return to the creation of weather information from 1870 to 1888 is 64 percent. The figure includes no social benefits from any weather information other than Great Lakes storm warnings. This estimate implies that the creation and distribution of storm warnings by the United States Federal Government were a socially beneficial investment.
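The flavor of such a rate-of-return calculation can be conveyed with a back-of-the-envelope sketch (the cash flows below are invented placeholders, not Craft's actual series): find the discount rate at which discounted benefits just offset discounted costs.

```python
# Internal rate of return by bisection: the rate at which the net present
# value of a (hypothetical) cost/benefit stream equals zero.

def npv(rate, flows):
    """Net present value of annual flows discounted at `rate`."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def irr(flows, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection search; assumes npv is decreasing in the rate."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Year 0: start-up costs; later years: net benefits (millions, 1880 dollars).
flows = [-0.5, 0.2, 0.4, 0.7, 1.0, 1.0, 1.0]
print(round(irr(flows), 2))
```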

Transfer of Weather Services to the Department of Agriculture

The Allison Commission hearings in 1884 and 1885 sought to determine the appropriate organization of Federal agencies whose activities included scientific research. The Allison Commission's long report included testimony and discussion relating to the organization of the Army Signal Service, the United States Geological Survey, the Coast and Geodetic Survey, and the Navy Hydrographic Office. Weather forecasting required a reliable network of observers, some of whom were the sole Army personnel at a location. Advantages of a military organizational structure included a greater range of disciplinary responses, including courts-martial for soldiers, for deficient job performance. Problems of the military organization, however, included the limited ability to increase one's rank while working for the Signal Service and tension between the civilian and military personnel. In 1891, after an unsuccessful Congressional attempt at reform in 1887, the Weather Bureau became a civilian organization when it joined the young Department of Agriculture.

Aviation and World War I

Interest in upper-air weather conditions grew rapidly after the turn of the century on account of two related developments: aviation and World War I. Safe use of aircraft depended on more precise knowledge of weather conditions (winds, storms, and visibility) between takeoff and landing locations. Not only were military aircraft introduced during World War I, but understanding wind conditions was also crucial to the use of poison gas on the front lines. In the most important change of the Weather Bureau's organizational direction since the transfer to the Department of Agriculture, Congress passed the Air Commerce Act in 1926, which by 1932 led to 38% of the Weather Bureau's budget being directed toward aerology research and support.

Transfer of the Weather Bureau to the Department of Commerce

Even though aerological expenditures by the Weather Bureau in support of aviation rivaled funding for general weather services by the late 1930s, the Weather Bureau came under increasing criticism from aviation interests. The Weather Bureau was transferred in 1940 to the Department of Commerce, where other support for aviation already originated. This transition mirrored the declining role of agriculture in the United States and the movement toward a more urban economy. Subsequently renamed the National Weather Service, the agency has remained in the Department of Commerce ever since.

World War II

During World War II, weather forecasts assumed greater importance, as aircraft and rapid troop movements became key parts of military strategy. Accurate long-range artillery use also depended on knowledge of prevailing winds. For extensive use of weather forecasts and climatological information during wartime, consider Allied plans to strike German oil refineries in Ploesti, Romania. In the winter of 1943 military weather teams parachuted into the mountains of Yugoslavia to relay weather data. Bombers from North Africa could only reach the refineries in the absence of headwinds in either direction of the sortie. Cloud cover en route was important for protection, clear skies were helpful for identification of targets, and southerly winds permitted the bombers to drop their ordnance on the first pass on the south side of the area's infrastructure, allowing the winds to assist in spreading the fire. Historical data indicated that only March and August offered possible windows. Though many aircraft were lost, the August 1 raid was considered a success.

Tide, wind, and cloud conditions were also crucial in the planning of the invasion of Normandy (planned for June 5 and postponed until June 6 in 1944). The German High Command had been advised by its chief meteorologist that conditions were not opportune for an Allied invasion on the days following June 4. Dissension among American and British military forecasters nearly delayed the invasion further. Had it been deferred until the next date of favorable tide conditions, the invasion would have taken place during the worst June storm in twenty years in the English Channel.

Forecasting in Europe

A storm on November 14, 1854 destroyed the French warship Henri IV and damaged other British and French vessels on the Black Sea involved in the Crimean War. A report from the state-supported Paris Observatory indicated that barometric readings showed that the storm had passed across Europe in about four days. Urbain Leverrier, director of the Paris Observatory, concluded that had there been a telegraph line between Vienna and the Crimea, the British and French fleets could have received warnings. Although the United States weather network was preceded by storm-warning systems in the Netherlands in 1860, Great Britain in 1861, and France in 1863, the new United States observation network immediately dwarfed the European organizations in both financial resources and geographical magnitude.

Robert FitzRoy, captain of the Beagle during Darwin's famous voyage, was appointed director of the Meteorological Department established by the British Board of Trade (a government organization) in 1854. The wreck of the well-constructed iron vessel Royal Charter in a storm, with much loss of life, in October of 1859 provided another opportunity for a meteorological leader to argue that storms could be tracked and forecast. With support from the Prince Consort, FitzRoy and the Meteorological Department were granted approval to establish a storm-warning service. On February 6, 1861 the first warnings were issued, and by August 1861 weather forecasts were issued regularly. By 1863, the Meteorological Department had a budget of three thousand English pounds. Criticism arose from different groups. Scientists wished to establish meteorology on a sound theoretical foundation and differentiate it from astrology. At the time, many publishers of weather almanacs subscribed to various theories of the influence of the moon or other celestial bodies on weather. (This is not as outlandish as one might suppose; in 1875, the well-known economist William Stanley Jevons studied connections between sunspot activity and meteorology with business cycles.) Some members of this second group supported the practice of forecasting but were critical of FitzRoy's technique, perhaps hoping to become alternative sources of forecasts. Amidst the criticism, FitzRoy committed suicide in 1865. Forecasts and warnings were discontinued in 1866; warnings resumed two years later, but general forecasts were suspended until 1877.

In 1862, Leverrier wrote the French Ministry of Public Education that French naval and commercial interests might be compromised by their dependence on warnings from the British Board of Trade. A storm-warning service in France commenced in July of 1863. Given that storms generally moved from west to east, neither France nor Britain had the luxury of tracking storms well before they arrived, as would have been possible with the November 1854 storm in the Crimea and as the Army Signal Service soon would be able to do in America. On account of administrative difficulties that were to hinder the effective functioning of the service until 1877, French warnings ceased in October 1865 but resumed in May of the next year. The French Central Meteorological Bureau was not founded until 1878, and then with a budget of only $12,000.

After the initiation of storm-warning systems that preceded the Army Signal Service weather network, Europe would not achieve meteorological prominence again until the Bergen School of meteorology developed new storm-analysis techniques after World War I, incorporating cold and warm fronts. In the difficult days in Norway at the conclusion of the Great War, meteorological information from the rest of Europe was unavailable. Theoretical physicist turned meteorological researcher Wilhelm Bjerknes appealed to Norway's national interests in defense, in the development of commercial aviation, and in increased agricultural output to build a dense observation network, whose data helped yield a new paradigm for meteorology.

Conclusion

The first weather forecasts in the United States that were based on a large network of simultaneous observations provided information to society that was much more valuable than its cost of production. In the early winter of 1870, the scientist Increase Lapham and a Chicago businessman discussed the feasibility of establishing a private forecasting organization in Wisconsin or Illinois (see Craft 1999). But previous attempts by private organizations in the United States had been unsuccessful in supporting any private weather-forecasting service. In the contemporary United States, the Federal government both collects data and offers forecasts, while private weather organizations provide a variety of customized services.

Weather Forecasting Timeline

1743

Benjamin Franklin, using reports of numerous postmasters, determined the northeastward path of a hurricane from the West Indies.

1772-1777

Thomas Jefferson at Monticello, Virginia and James Madison at Williamsburg, Virginia collect a series of contemporaneous weather observations.

1814

Surgeon General Tilton issues an order directing Army surgeons to keep a diary of the weather in order to ascertain any influences of weather upon disease.

1817

Josiah Meigs, Commissioner of the General Land Office, requests officials at land offices to record meteorological observations.

1846-1848

Matthew F. Maury, Superintendent of the U.S. Naval Observatory, publishes his first charts, compiled from ships' logs, showing efficient sailing routes.

1847

Barometer used to issue storm warnings in Barbados.

1848

J. Jones of New York advertises meteorological reports costing between twelve and a half and twenty-five cents per city per day. There is no evidence the service was ever sold.

1848

Publication in the British Daily News of the first telegraphic daily weather report.

1849

The Smithsonian Institution begins a nearly three-decade-long project of collecting meteorological data with the goal of understanding storms.

1849

Captain Joseph Brooks, manager of the Portland Steamship Line, receives telegraphic reports three times a day from Albany, New York, and Plattsburg in order to determine if the line’s ships should remain in port in Maine.

1853-1855

Ebenezer E. Merriam of New York, using newspaper telegraphic reports, offers weather forecasts in New York’s newspapers on an apparently irregular basis.

1858

The U.S. Army Engineers begin collecting meteorological observations while surveying the Great Lakes.

1860

Christoph Buys Ballot issues first storm warnings in the Netherlands.

1861

Admiral Robert FitzRoy of the British Meteorological Office begins issuing storm-warnings.

1863

Urbain Le Verrier, director of the Paris Observatory, organizes a storm-warning service.

1868

Cleveland Abbe of the Cincinnati Observatory unsuccessfully proposes a weather service of one hundred observation stations to be supported by the Cincinnati Chamber of Commerce, Associated Press, Western Union, and local newspapers.

1869

The Cincinnati Chamber of Commerce funds a three-month trial of the Cincinnati Observatory’s weather bulletin. The Chicago Board of Trade declines to participate.

1869

Increase A. Lapham publishes a list of the shipping losses on the Great Lakes during the 1868 and 1869 seasons.

1870

Congress passes a joint resolution directing the Secretary of War to establish a meteorological network for the creation of storm warnings on the Great Lakes and Atlantic Seaboard. Storm-warnings are offered on November 8. Forecasts begin the following February 19.

1872

Congressional appropriations bill extends Army Signal Service duties to provide forecasts for agricultural and commercial interests.

1880

Frost warnings offered for Louisiana sugar producers.

1881-1884

Army Signal Service expedition to Lady Franklin Bay in support of international polar weather research. Only seven of the twenty-five-member team survive.

1881

Special cotton-region weather reporting network established.

1891

Weather Bureau transferred to the Department of Agriculture.

1902

Daily weather forecasts sent by radio to Cunard Line steamships.

1905

First wireless weather report from a ship at sea.

1918

Norway expands its meteorological network and organization, leading to the development of new forecasting theories centered on the three-dimensional interaction of cold and warm fronts.

1919

American Meteorological Society founded.

1926

Air Commerce Act gives the Weather Bureau responsibility for providing weather services to aviation.

1934

First private sector meteorologist hired by a utility company.

1940

The Weather Bureau is transferred from the Department of Agriculture to the Department of Commerce.

1946

First private weather forecast companies begin service.

1960

The first meteorological satellite, Tiros I, enters orbit successfully.

1976

The United States launches its first geostationary weather satellites.

References

Abbe, Cleveland, Jr. “A Chronological Outline of the History of Meteorology in the United States.” Monthly Weather Review 37, no. 3-6 (1909): 87-89, 146-49, 178-80, 252-53.

Alter, J. Cecil. “National Weather Service Origins.” Bulletin of the Historical and Philosophical Society of Ohio 7, no. 3 (1949): 139-85.

Anderson, Katharine. “The Weather Prophets: Science and Reputation in Victorian Meteorology.” History of Science 37 (1999): 179-216.

Burton, Jim. “Robert Fitzroy and the Early History of the Meteorological Office.” British Journal for the History of Science 19 (1986): 147-76.

Chief Signal Officer. Report of the Chief Signal Officer. Washington: GPO, 1871-1890.

Craft, Erik. “The Provision and Value of Weather Information Services in the United States during the Founding Period of the Weather Bureau with Special Reference to Transportation on the Great Lakes.” Ph.D. diss., University of Chicago, 1995.

Craft, Erik. “The Value of Weather Information Services for Nineteenth-Century Great Lakes Shipping.” American Economic Review 88, no.5 (1998): 1059-1076.

Craft, Erik. “Private Weather Organizations and the Founding of the United States Weather Bureau.” Journal of Economic History 59, no. 4 (1999): 1063-1071.

Davis, John L. “Weather Forecasting and the Development of Meteorological Theory at the Paris Observatory.” Annals of Science 41 (1984): 359-82.

Fleming, James Rodger. Meteorology in America, 1800-1870. Baltimore: Johns Hopkins University Press, 1990.

Fleming, James Rodger, and Roy E. Goodman, editors. International Bibliography of Meteorology. Upland, Pennsylvania: Diane Publishing Co., 1994.

Friedman, Robert Marc. Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Ithaca: Cornell University Press, 1989.

Hughes, Patrick. A Century of Weather Service. New York: Gordon and Breach, 1970.

Miller, Eric R. “The Evolution of Meteorological Institutions in the United States.” Monthly Weather Review 59 (1931): 1-6.

Miller, Eric R. “New Light on the Beginnings of the Weather Bureau from the Papers of Increase A. Lapham.” Monthly Weather Review 59 (1931): 65-70.

Sah, Raaj. “Priorities of Developing Countries in Weather and Climate.” World Development 7 no. 3 (1979): 337-47.

Spiegler, David B. “A History of Private Sector Meteorology.” In Historical Essays on Meteorology, 1919-1995, edited by James Rodger Fleming, 417-41. Boston: American Meteorological Society, 1996.

Weber, Gustavus A. The Weather Bureau: Its History, Activities and Organization. New York: D. Appleton and Company, 1922.

Whitnah, Donald R. A History of the United States Weather Bureau. Urbana: University of Illinois Press, 1961.

Citation: Craft, Erik. “Economic History of Weather Forecasting”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2001. URL http://eh.net/encyclopedia/an-economic-history-of-weather-forecasting/

The History of the International Tea Market, 1850-1945

Bishnupriya Gupta, University of Warwick

Demand for Tea

“Tea is better than wine for it leadeth not to intoxication, neither does it cause a man to say foolish things and repent there of in his sober moments. It is better than water for it does not carry disease; neither does it act like poison as water does when it contains foul and rotten matter.”

This ancient saying from China gained widespread acceptance in Europe during the course of the eighteenth century. Tea displaced beer in Britain and in the Netherlands. In the fight against alcoholism, the temperance movement of the nineteenth century recommended tea as an alternative. Evidence based on contemporary accounts suggests that a tradesman’s family in Britain in 1749 spent three shillings a week on bread and four shillings on tea and sugar. But tea was still too expensive to become the common man’s drink; only in the nineteenth century did it become a common beverage for British households. Per capita consumption increased from 1.1 pounds per year in 1820 to 5.9 pounds in 1900 and 9.6 pounds in 1931, by which time the British market had reached saturation. In the United States and continental Europe, advertising campaigns encouraging coffee drinkers to switch to tea had limited success (see Table 1).

Table 1: Consumption of Tea: International Market Share

Share in World Consumption (%)

Year   United Kingdom   Rest of Europe   Russia/USSR   North America (incl. West Indies)   Major Producing Countries
1910        39.2              4.2            21.0                  18.3                              4.4
1920        56.4              6.9            n.a.                  18.1                              6.6
1928        48.4              6.7             7.1                  14.3                              4.1
1936        53.5              6.3             3.1                  14.2                              9.3

Source: International Tea Committee, Bulletin of Statistics, 1946.

Only a small proportion of a household’s total budget is spent on tea. At lower levels of income, tea consumption responds to changes in income: the income elasticity of demand (i.e. the percentage change in consumption due to a one percent change in income) for tea in India in the 1950s was estimated to be 1.1. At higher levels of income, however, the income elasticity of demand for tea tends to be low. For the UK in the interwar years, Richard Stone estimated the price elasticity of demand (i.e. the percentage change in consumption due to a one percent change in price) to be -0.32, while the income elasticity was only 0.04. These figures suggest that the market in developed countries would not expand significantly with rising incomes, and that a decline in price would not greatly increase the quantity demanded.
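As a rough illustration, Stone’s interwar estimates can be combined in a simple point-elasticity approximation. This is a sketch, not part of the original analysis; the 10 percent price and income changes below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Point-elasticity approximation of the change in UK tea demand,
# using Stone's interwar estimates quoted above. The 10% price and
# income changes are hypothetical, for illustration only.
PRICE_ELASTICITY = -0.32   # % change in quantity per 1% change in price
INCOME_ELASTICITY = 0.04   # % change in quantity per 1% change in income

def pct_change_in_demand(pct_price_change, pct_income_change):
    """Approximate % change in quantity demanded (valid for small changes)."""
    return (PRICE_ELASTICITY * pct_price_change
            + INCOME_ELASTICITY * pct_income_change)

# A 10% fall in price combined with a 10% rise in income:
print(round(pct_change_in_demand(-10.0, 10.0), 2))  # 3.6
```

Even this generous combination of cheaper tea and rising incomes raises quantity demanded by under four percent, which is why producers looked to new markets rather than to price cuts.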

In the producer countries, which were less economically developed, the domestic market showed significant expansion from the 1930s onwards. The Indian market increased from 10 million pounds in 1905 to 18 million pounds in 1910, but this was only a small proportion of British consumption, which stood at 287 million pounds in 1910. India had a large population and thus the potential of a large market, yet there was little effort to expand domestic demand until the 1930s. As British demand stagnated, large sums were spent on advertising campaigns in India: the industry set up demonstrations in tea-making and sold cups of tea at railway stations and local fairs. During the 1920s the Indian market grew by 15 million pounds, reaching 50 million pounds, and consumption doubled in the 1930s.

Supply of Tea

China had long been the major supplier of tea to Britain. Tea was cultivated in small plots by peasant farmers, whose output proved inadequate to meet the surge in demand. The slow increase in production, together with China’s political turmoil after 1840, led to a search for alternative production centers. Plantations appeared to be an attractive alternative. British experiments with the tea plant in south Asia were successful and led to the development of plantations in Eastern India and in Ceylon (Sri Lanka) from the middle of the nineteenth century. The tea companies attracted investment from Britain and were managed by British agents. By 1860, more than fifty companies were producing tea in Eastern India. Tea companies in India and Ceylon were registered either in London or in Calcutta and Colombo and run by British agents on the basis of long-term agency contracts. The British agents had local counterparts who were responsible for day-to-day operations. A typical managing agent owned shares in several companies and was responsible for their management. Consequently, despite the presence of a few hundred companies in India and Ceylon by the early twentieth century, decision making in the industry was in the hands of a few British agents. In 1879, over 70 percent of the teas sold in London were from China; by 1900 China’s share had declined dramatically to 10 percent, and the black teas of India and Ceylon constituted the bulk of the market. Table 2 shows the market shares of the main exporting countries between 1928 and 1940.

Table 2: Production of Tea: International Market Share

Share in World Exports (%)

Year   India   Ceylon   Java and Sumatra
1928    39.0    26.0          16.7
1936    37.1    25.8          18.1
1940    37.5    26.0          18.4

Source: International Tea Committee, Bulletin of Statistics, 1946.

There are two types of tea, both produced from the same plant: leaves are steamed and dried to produce green tea, while black tea undergoes fermentation and further oxidation. India and Ceylon produced black tea; China produced both black and green tea. Tea prices were determined at auctions; London was the most important center, with Calcutta, Colombo and Amsterdam the other main ones. Prices depend on the quality of tea, and regional differences in soil, climate and elevation account for differences in quality: the slopes of the Himalayas in and around Darjeeling and the highland areas of Ceylon produce some of the finest teas in the world, which command high prices. Average tea prices, however, depend on the supply costs of common teas. In the tropical areas, Ceylon and Java, the tea crop is harvested all through the year; in Eastern India the onset of winter brings an end to harvesting. The output of tea consists of leaves plucked from the tea bush. Fine plucking reduces quantity but improves quality, while coarse plucking increases output at the cost of quality. In the short run, output can therefore be varied by regulating plucking; in the long run, output grows through expansion of the area under cultivation, and tea plants take six to seven years to mature. When prices are high there is an incentive to pluck coarse to increase output in the short run. This disproportionately increases the quantity of common teas, leading to a sharp decline in the average price.

Fluctuations in Prices

In the first half of the twentieth century, the tea industry saw wide fluctuations in prices. During the First World War, the British government undertook purchases of tea to avoid a shortage in supply, which guaranteed a market for the producers. The boom in prices in the early 1920s encouraged an increase in acreage under tea not just in India and Ceylon, but also in Java and Sumatra, territories in the Dutch East Indies. In India, it encouraged new planting and the establishment of plantations in the hills of southern India. The increased acreage was followed by an increase in output with a lag of a few years. As with many other agricultural commodities, the international market showed signs of excess supply towards the end of the 1920s and stocks accumulated. The collapse of tea prices in 1929 was thus not simply a result of declining demand with the onset of the Depression; excess supply had become a feature of the industry following the postwar expansion in acreage.

Figure 1: Average Tea Prices

Source: International Tea Committee, Bulletin of Statistics, 1946.

The Tea Cartel

During this period price support schemes were put in place for several agricultural commodities by forming collusive agreements, or cartels. Because primary products have low price elasticities of demand, output restriction increases the profits of the producers and is in their collective interest. Early attempts at collusion in tea had not been successful, but as prices tumbled, the tea producers’ associations in the three major producing countries set up the International Tea Agreement in 1930. The Tea Associations in India, Ceylon and the Dutch East Indies agreed to reduce output to prevent a further fall in prices. This was a voluntary agreement, under which each tea company belonging to the Tea Associations in the producer countries signed up to cut back output. There were many firms in the industry, but because the firms were managed by a few agents who made the decisions about how much to produce, effective firm size was larger, which increased the viability of a collusive agreement. Each producer in a cartel has an incentive to cheat and free ride on the compliance of other firms. But when firms face a threat that the agreement will be abandoned and prices will decline if participants do not comply, the agreement can be sustained. Economic theory predicts that collusion can be sustained by the threat of price wars: any sign of noncompliance, such as falling prices, leads every firm to abandon the agreement and increase output, bringing about a further fall in prices. Collusion can be sustained more easily in markets where output is produced by few firms.

The International Tea Agreement broke down in 1931 and 1932. When the figures were added up, it emerged that the promised reduction by Java and Sumatra in the Dutch East Indies had not been made: any reduction made by the European estates had been counterbalanced by increased production on the part of the native producers. The agreement fell apart, and the Tea Associations in India and Ceylon blamed Java and Sumatra for the failure to restrict output in accordance with the scheme of 1930. The conflict of interest between large producing firms and smaller producers, in terms of what each could gain from the cartel, prevented a continuation of the collusive arrangement; producers in the country with the smallest market share were not keen to be part of it. But India and Ceylon continued to negotiate for an agreement rather than have a price war. Negotiation and bargaining were much more important in sustaining collusion in the tea market: contrary to what theory suggests, there is no evidence of a price war. As prices declined further, producers in Java and Sumatra became more willing to take part. A second International Tea Agreement was signed in 1933. All the participating countries accepted a reduction in exports to 15 percent below the maximum attained in any of the years 1929-32. Export quotas were assigned to individual firms, but the quotas could be traded. The agreement covered a period of five years. Legislation was adopted in the participating countries, which made the export quotas legally binding and limited expansion in acreage to a maximum of 0.5 percent per year. The International Tea Agreement of 1933 was a successful case of cartelization. It lasted right up to the Second World War, when conditions in the market changed, and it led to an immediate upward movement in prices (see Figure 1). As Table 3 shows, most firms in India and Ceylon reduced output in response to the agreement.

There is no doubt that the success of the agreement depended on the legislation passed in the producing countries in 1933; there had been no legal backing for the previous agreement. The agreement froze the relative market shares of the producers and prevented new firms from entering the tea market. The International Tea Committee appeared to have a clearly thought-out strategy and seemed to act with considerable foresight. Export of tea seeds from the three participating countries was prohibited, and the restriction was eased only when Kenya, Uganda, Tanganyika and Nyasaland agreed to limit new planting.
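The 1933 quota rule reduces to a one-line calculation. In this sketch only the 15-percent-below-peak rule comes from the agreement as described above; the export figures are invented for illustration.

```python
# Export allowance under the 1933 International Tea Agreement:
# 15% below the best export year achieved in 1929-32.
# The figures below (million lb) are hypothetical, for illustration only.
def export_quota(exports_1929_to_1932):
    """Permitted exports: 85% of the 1929-32 peak year."""
    return 0.85 * max(exports_1929_to_1932)

print(round(export_quota([380.0, 350.0, 300.0, 320.0]), 2))  # 323.0
```

Basing the quota on each participant’s own peak year froze relative market shares, which is the feature singled out below.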

The Economist commented in August 1933:

“Producers of commodities like wheat and sugar may envy the facility with which the tea growing industry obtained a 30 percent rise in average tea prices and a 90 percent enhancement of tea share values — all within the space of a little more than six months.”

Table 3: Compliance with the Tea Agreement
(Percent of firms in region reducing output)

                              India   Ceylon
1930  Reduced output            86%     76%
      Reduced output by 10%     56%     17%
1933  Reduced output            89%     90%
      Reduced output by 15%     52%     51%

Note: The figure of 10 percent is used as the expected reduction. In 1930 the negotiated level varied between 15 percent and 3 percent depending on the quality of tea. In 1933 exports were to be reduced by 15 percent. Output reduction may be expected to be less as firms sell a share of the output in the domestic market.

Sources: Mincing Lane Tea & Rubber Brokers’ Association, A Guide to Investors and Investors’ India Year Books.

Further Readings:

Griffiths, Percival. The History of the Indian Tea Industry. London: Weidenfeld and Nicolson, 1967.

Gupta, Bishnupriya. “Collusion in the Indian Tea Industry in the Great Depression: An Analysis of Panel Data.” Explorations in Economic History 34, no. 2 (1997): 155-173.

Gupta, Bishnupriya. “The International Tea Cartel during the Great Depression, 1929-33.” Journal of Economic History 61, no.1 (2001): 144-159.

Macfarlane, Alan, and Iris Macfarlane. Green Gold: The Empire of Tea. London: Ebury Press, 2003.

Sarkar, Goutam. The World Tea Economy. Delhi: Oxford University Press, 1972.

Wickizer, Vernon D. Coffee, Tea and Cocoa: An Economic and Political Analysis. Stanford: Stanford University Press, 1951.

Citation: Gupta, Bishnupriya. “The History of the International Tea Market, 1850-1945.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-history-of-the-international-tea-market-1850-1945/

The Economic History of Taiwan

Kelly Olds, National Taiwan University

Geography

Taiwan is a sub-tropical island, roughly 180 miles long, located less than 100 miles offshore of China’s Fujian province. Most of the island is covered with rugged mountains that rise to over 13,000 feet. These mountains climb directly out of the ocean along the eastern shore facing the Pacific, so that this shore and the central parts of the island are sparsely populated. Throughout its history, most of Taiwan’s people have lived on the Western Coastal Plain that faces China. This plain is crossed by east-west rivers, which occasionally bring floods of water down from the mountains, creating broad boulder-strewn flood plains. Until modern times, these rivers made north-south travel costly and limited the island’s economic integration. The most important river is the Chuo Shuei-Hsi (between present-day Changhua and Yunlin counties), which has been an important economic and cultural divide.

Aboriginal Economy

Little is known about Taiwan prior to the seventeenth century. When the Dutch came to the island in 1622, they found a population of roughly 70,000 Austronesian aborigines, at least 1,000 Chinese and a smaller number of Japanese. The aborigine women practiced subsistence agriculture while aborigine men harvested deer for export. The Chinese and Japanese population was primarily male and transient. Some of the Chinese were fishermen who congregated at the mouths of Taiwanese rivers, but most Chinese and Japanese were merchants. Chinese merchants usually lived in aborigine villages and acted as middlemen, exporting deerskins, primarily to Japan, and importing salt and various manufactures. The harbor alongside which the Dutch built their first fort (in present-day Tainan City) was already an established place of rendezvous for Chinese and Japanese trade when the Dutch arrived.

Taiwan under the Dutch and Koxinga

The Dutch took control of most of Taiwan in a series of campaigns that lasted from the mid-1630s to the mid-1640s. The Dutch taxed the deerskin trade, hired aborigine men as soldiers and tried to introduce new forms of agriculture, but otherwise interfered little with the aborigine economy. The Tainan harbor grew in importance as an international entrepot. The most important change in the economy was an influx of about 35,000 Chinese to the island. These Chinese developed land, mainly in southern Taiwan, and specialized in growing rice and sugar. Sugar became Taiwan’s primary export. One of the most important Chinese investors in the Taiwanese economy was the leader of the Chinese community in Dutch Batavia (on Java) and during this period the Chinese economy on Taiwan bore a marked resemblance to the Batavian economy.

Koxinga, a Chinese-Japanese sea lord, drove the Dutch off the island in 1661. Under the rule of Koxinga and his heirs (1661-1683), Chinese settlement continued to spread in southern Taiwan. On the one hand, Chinese civilians made the crossing to flee the chaos that accompanied the Ming-Qing transition. On the other hand, Koxinga and his heirs brought over soldiers who were required to clear land and farm when they were not being used in wars. The Chinese population probably rose to about 120,000. Taiwan’s exports changed little, but the Tainan harbor lost importance as a center of international trade, as much of this trade now passed through Xiamen (Amoy), a port across the strait in Fujian that was also under the control of Koxinga and his heirs.

Taiwan under Qing Rule

The Qing dynasty defeated Koxinga’s grandson and took control of Taiwan in 1683. Taiwan remained part of the Chinese empire until China ceded the island to Japan in 1895. The Qing government originally saw control of Taiwan as an economic burden that had to be borne in order to keep the island out of the hands of pirates. In the first year of occupation, the Qing government shipped as many Chinese residents as possible back to the mainland. The island lost perhaps one-third of its Chinese population. Travel to Taiwan by all but male migrant workers was illegal until 1732, and this prohibition was reinstated off and on until it was finally permanently rescinded in 1788. Nevertheless, the island’s Chinese population grew about two percent per year in the century following the Qing takeover. Both illegal immigration and natural increase were important components of this growth. The Qing government feared the expense of Chinese-aborigine confrontations and tried futilely to restrain Chinese settlement and keep the populations apart. Chinese pioneers, however, were constantly pushing the bounds of Chinese settlement northward and eastward, and the aborigines were forced to adapt. Some groups permanently leased their land to Chinese settlers. Others learned Chinese farming skills and eventually assimilated, or else moved toward the mountains where they continued hunting, learned to raise cattle or served as Qing soldiers. Due to the lack of Chinese women, intermarriage was also common.

Individual entrepreneurs or land companies usually organized Chinese pioneering enterprises. These people obtained land from aborigines or the government, recruited settlers, supplied loans to the settlers and sometimes invested in irrigation projects. Large land developers often lived in the village during the early years but moved to a city after the village was established. They remained responsible for paying the land tax and they received “large rents” from the settlers amounting to 10-15 percent of the expected harvest. However, they did not retain control of land usage or have any say in land sales or rental. The “large rents” were, in effect, a tax paid to a tax farmer who shared this revenue with the government. The payers of the large rents were the true owners who controlled the land. These people often chose to rent out their property to tenants who did the actual farming and paid a “small rent” of about 50 percent of the expected harvest.
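The division of the harvest described above can be made concrete with a toy calculation. The harvest value is hypothetical; the large rent is taken at the midpoint of the 10-15 percent range quoted in the text, and the small rent at the 50 percent figure.

```python
# Toy split of one year's expected harvest under the large-rent/small-rent
# system described above. The harvest value is hypothetical; the large rent
# uses the midpoint (12.5%) of the 10-15% range, the small rent 50%.
expected_harvest = 100.0  # hypothetical units of expected harvest

large_rent = 0.125 * expected_harvest  # owner pays the developer / tax farmer
small_rent = 0.50 * expected_harvest   # tenant pays the owner

owner_net = small_rent - large_rent    # what the owner keeps
tenant_net = expected_harvest - small_rent

print(owner_net, tenant_net)  # 37.5 50.0
```

On these assumptions the owner nets roughly three times what the large-rent holder receives, which is why the large rent functioned as a tax rather than as true ownership income.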

Chinese pioneers made extensive use of written contracts but government enforcement of contracts was minimal. In the pioneers’ homeland across the strait, protecting property and enforcing agreements was usually a function of the lineage. Being part of a strong lineage was crucial to economic success and violent struggles among lineages were a problem endemic to south China. Taiwanese settlers had crossed the strait as individuals or in small groups and lacked strong lineages. Like other Chinese immigrants throughout the world, they created numerous voluntary associations based on one’s place of residence, occupation, place of origin, surname, etc. These organizations substituted for lineages in protecting property and enforcing contracts, and violent conflict among these associations over land and water rights was frequent. Due to property rights problems, land sales contracts often included the signature of not only the owner, but also his family and neighbors agreeing to the transfer. The difficulty of seizing collateral led to the common use of “conditional sales” as a means of borrowing money. Under the terms of a conditional sale, the lender immediately took control of the borrower’s property and retained the right to the property’s production in lieu of rent until the borrower paid back the loan. Since the borrower could wait an indefinite period of time before repaying the loan, this led to an awkward situation in which the person who controlled the land did not have permanent ownership and had no incentive to invest in land improvements.

Taiwan prospered during a sugar boom in the early eighteenth century, but afterwards its sugar industry had a difficult time keeping up with advances in foreign production. Until the Japanese occupation in 1895, Taiwan’s sugar farms and sugar mills remained small-scale operations. The sugar industry was centered in the south of the island and throughout the nineteenth century, the southern population showed little growth and may have declined. By the end of the nineteenth century, the south of the island was poorer than the north of the island and its population was shorter in stature and had a lower life expectancy. The north of the island was better suited to rice production and the northern economy seems to have grown robustly. As the Chinese population moved into the foothills of the northern mountains in the mid-nineteenth century, they began growing tea, which added to the north’s economic vitality and became the island’s leading export during the last quarter of the nineteenth century. The tea industry’s most successful product was oolong tea produced primarily for the U.S. market.

During the last years of the Qing dynasty’s rule in Taiwan, Taiwan was made a full province of China and some attempts were made to modernize the island by carrying out a land survey and building infrastructure. Taiwan’s first railroad was constructed linking several cities in the north.

Taiwan under Japanese Rule

The Japanese gained control of Taiwan in 1895 after the Sino-Japanese War. After several years of suppressing both Chinese resistance and banditry, the Japanese began to modernize the island’s economy. A railroad was constructed running the length of the island and modern roads and bridges were built. A modern land survey was carried out. Large rents were eliminated and those receiving these rents were compensated with bonds. Ownership of approximately twenty percent of the land could not be established to Japanese satisfaction and was confiscated. Much of this land was given to Japanese conglomerates that wanted land for sugarcane. Several banks were established and reorganized irrigation districts began borrowing money to make improvements. Since many Japanese soldiers had died of disease, improving the island’s sanitation and disease environment was also a top priority.

Under the Japanese, Taiwan remained an agricultural economy. Although sugarcane continued to be grown mainly on family farms, sugar processing was modernized and sugar once again became Taiwan’s leading export. During the early years of modernization, native Taiwanese sugar refiners remained important but, largely due to government policy, Japanese refiners holding regional monopsony power came to control the industry. Taiwanese sugar remained uncompetitive on the international market, but was sold duty free within the protected Japanese market. Rice, also bound for the protected Japanese market, displaced tea to become the second major export crop. Altogether, almost half of Taiwan’s agricultural production was being exported in the 1930s. After 1935, the government began encouraging investment in non-agricultural industry on the island. The war that followed was a time of destruction and economic collapse.

Growth in Taiwan’s per-capita economic product during this colonial period roughly kept up with that of Japan. Population also grew quickly as health improved and death rates fell. The native Taiwanese population’s per-capita consumption grew about one percent per year, slower than the growth in consumption in Japan, but greater than the growth in China. Better property rights enforcement, population growth, transportation improvements and protected agricultural markets caused the value of land to increase quickly, but real wage rates increased little. Most Taiwanese farmers did own some land but since the poor were more dependent on wages, income inequality increased.

Taiwan Under Nationalist Rule

Taiwan’s economy recovered from the war more slowly than Japan’s. The Chinese Nationalist government took control of Taiwan in 1945 and lost control of its original territory on the mainland in 1949. The Japanese population, which had grown to over five percent of Taiwan’s population (and a much greater proportion of its urban population), was shipped to Japan, and the new government confiscated Japanese property, creating large public corporations. The late 1940s was a period of civil war in China, and Taiwan also experienced violence and hyperinflation. In 1949, soldiers and refugees from the mainland flooded onto the island, increasing Taiwan’s population by about twenty percent. Mainlanders tended to settle in cities and were predominant in the public sector.

In the 1950s, Taiwan was dependent on American aid, which allowed its government to maintain a large military without overburdening the economy. Taiwan’s agricultural economy had been left in shambles by the events of the 1940s. It had lost its protected Japanese markets, and the low-interest-rate formal-sector loans to which even tenant farmers had access in the 1930s were no longer available. With American help, the government implemented a land reform program. This program (1) sold public land to tenant farmers, (2) limited rent to 37.5% of the expected harvest, and (3) severely restricted the size of individual landholdings, forcing landlords to sell most of their land to the government in exchange for stocks and bonds valued at 2.5 times the land’s expected annual harvest. This land was then redistributed. The land reform increased equality among the farm population and strengthened government control of the countryside. Its justice and its effect on agricultural investment and productivity are still hotly debated.

High-speed growth accompanied by rapid industrialization began in the late 1950s. Taiwan became known for its cheap manufactured exports produced by small enterprises bound together by flexible sub-contracting networks. Taiwan’s postwar industrialization is usually attributed to (1) the decline in land per capita, (2) the change in export markets, and (3) government policy. Between 1940 and 1962, Taiwan’s population increased at an annual rate of slightly over three percent. This cut the amount of land per capita in half. Taiwan’s agricultural exports had been sold tariff-free at higher-than-world-market prices in pre-war Japan, while Taiwan’s only important pre-war manufactured export, imitation panama hats, faced a 25% tariff in the U.S., its primary market. After the war, agricultural products generally faced the greatest trade barriers. As for government policy, Taiwan went through a period of import substitution policy in the 1950s, followed by promotion of manufactured exports in the 1960s and 1970s. Subsidies were available for certain manufactures under both regimes. During the import substitution regime, domestic manufactures were protected both by tariffs and by multiple overvalued exchange rates. Under the later export promotion regime, export processing zones were set up in which privileges were extended to businesses whose products would not be sold domestically.
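As a back-of-the-envelope check (not a calculation from the source), compounding three percent annual growth over the twenty-two years from 1940 to 1962 gives

$$(1.03)^{22} \approx 1.92,$$

so the population nearly doubled; with the stock of arable land roughly fixed, land per capita was therefore cut approximately in half, as stated above.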

Historical research into the “Taiwanese miracle” has focused on government policy and its effects, but statistical data for the first few post-war decades is poor and the overall effect of the various government policies is unclear. During the 1960s and 1970s, real GDP grew about 10% (7% per capita) each year. Most of this growth can be explained by increases in factors of production. Savings rates began rising after the currency was stabilized and reached almost 30% by 1970. Meanwhile, primary education, in which 70% of Taiwanese children had participated under the Japanese, became universal, and students in higher education increased many-fold. Although recent research has emphasized the importance of factor growth in the Asian “miracle economies,” studies show that productivity also grew substantially in Taiwan.

Further Reading

Chang, Han-Yu and Ramon Myers. “Japanese Colonial Development Policy in Taiwan, 1895-1906.” Journal of Asian Studies 22, no. 4 (August 1963): 433-450.

Davidson, James. The Island of Formosa: Past and Present. London: MacMillan & Company, 1903.

Fei, John, et al. Growth with Equity: The Taiwan Case. New York: Oxford University Press, 1979.

Gardella, Robert. Harvesting Mountains: Fujian and the China Tea Trade, 1757-1937. Berkeley: University of California Press, 1994.

Ho, Samuel. Economic Development of Taiwan 1860-1970. New Haven: Yale University Press, 1978.

Ho, Yhi-Min. Agricultural Development of Taiwan, 1903-1960. Nashville: Vanderbilt University Press, 1966.

Ka, Chih-Ming. Japanese Colonialism in Taiwan: Land Tenure, Development, and Dependency, 1895-1945. Boulder: Westview Press, 1995.

Knapp, Ronald, editor. China’s Island Frontier: Studies in the Historical Geography of Taiwan. Honolulu: University Press of Hawaii, 1980.

Koo, Hui-Wen and Chun-Chieh Wang. “Indexed Pricing: Sugarcane Price Guarantees in Colonial Taiwan, 1930-1940.” Journal of Economic History 59, no. 4 (December 1999): 912-926.

Li, Kuo-Ting. The Evolution of Policy Behind Taiwan’s Development Success. New Haven: Yale University Press, 1988.

Mazumdar, Sucheta. Sugar and Society in China: Peasants, Technology, and the World Market. Cambridge, MA: Harvard University Asia Center, 1998.

Meskill, Johanna. A Chinese Pioneer Family: The Lins of Wu-feng, Taiwan, 1729-1895. Princeton, NJ: Princeton University Press, 1979.

Ng, Chin-Keong. Trade and Society: The Amoy Network on the China Coast 1683-1735. Singapore: Singapore University Press, 1983.

Olds, Kelly. “The Risk Premium Differential in Japanese-Era Taiwan and Its Effect.” Journal of Institutional and Theoretical Economics 158, no. 3 (September 2002): 441-463.

Olds, Kelly. “The Biological Standard of Living in Taiwan under Japanese Occupation.” Economics and Human Biology, 1 (2003): 1-20.

Olds, Kelly and Ruey-Hua Liu. “Economic Cooperation in Nineteenth-Century Taiwan.” Journal of Institutional and Theoretical Economics 156, no. 2 (June 2000): 404-430.

Rubinstein, Murray, editor. Taiwan: A New History. Armonk, NY: M.E. Sharpe, 1999.

Shepherd, John. Statecraft and Political Economy on the Taiwan Frontier, 1600-1800. Stanford: Stanford University Press, 1993.

Citation: Olds, Kelly. “The Economic History of Taiwan”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-taiwan/