
Economic History of Malaysia

John H. Drabble, University of Sydney, Australia

General Background

The Federation of Malaysia (see map), formed in 1963, originally consisted of Malaya, Singapore, Sarawak and Sabah. Due to internal political tensions Singapore was obliged to leave in 1965. Malaya is now known as Peninsular Malaysia, and the two other territories on the island of Borneo as East Malaysia. Prior to 1963 these territories were under British rule for varying periods from the late eighteenth century. Malaya gained independence in 1957, Sarawak and Sabah (the latter known previously as British North Borneo) in 1963, and Singapore full independence in 1965. These territories lie between 2 and 6 degrees north of the equator. The terrain consists of extensive coastal plains backed by mountainous interiors. The soils are not naturally fertile but the humid tropical climate subject to monsoonal weather patterns creates good conditions for plant growth. Historically much of the region was covered in dense rainforest (jungle), though much of this has been removed for commercial purposes over the last century leading to extensive soil erosion and silting of the rivers which run from the interiors to the coast.


Malaysia has a parliamentary system of government at both the federal level (located in Kuala Lumpur, Peninsular Malaysia) and the state level, based on periodic general elections. Each Peninsular state (except Penang and Melaka) has a traditional Malay ruler, the Sultan, one of whom is elected as paramount ruler of Malaysia (Yang di-Pertuan Agong) for a five-year term.

The population at the end of the twentieth century was approximately 22 million and ethnically diverse, consisting of 57 percent Malays and other indigenous peoples (collectively known as bumiputera), 24 percent Chinese, 7 percent Indians and the balance “others” (including a high proportion of non-citizen Asians, e.g., Indonesians, Bangladeshis and Filipinos) (Andaya and Andaya, 2001, 3-4).

Significance as a Case Study in Economic Development

Malaysia is generally regarded as one of the most successful non-western countries to have achieved a relatively smooth transition to modern economic growth over the last century or so. Since the late nineteenth century it has been a major supplier of primary products to the industrialized countries: tin, rubber, palm oil, timber, oil, liquefied natural gas, etc.

However, since about 1970 the leading sector in development has been a range of export-oriented manufacturing industries such as textiles, electrical and electronic goods, rubber products etc. Government policy has generally accorded a central role to foreign capital, while at the same time working towards more substantial participation for domestic, especially bumiputera, capital and enterprise. By 1990 the country had largely met the criteria for Newly-Industrialized Country (NIC) status (30 percent of exports consisting of manufactured goods). While the Asian economic crisis of 1997-98 slowed growth temporarily, the current plan, titled Vision 2020, aims to achieve “a fully developed industrialized economy by that date. This will require an annual growth rate in real GDP of 7 percent” (Far Eastern Economic Review, Nov. 6, 2003). Malaysia is perhaps the best example of a country in which the economic roles and interests of various racial groups have been pragmatically managed in the long term without significant loss of growth momentum, despite ongoing inter-ethnic tensions which have occasionally erupted in violence, notably in 1969 (see below).

The Premodern Economy

Malaysia has a long history of internationally valued exports, being known from the early centuries A.D. as a source of gold, tin and exotics such as birds’ feathers, edible birds’ nests, aromatic woods, tree resins etc. The commercial importance of the area was enhanced by its strategic position athwart the seaborne trade routes from the Indian Ocean to East Asia. Merchants from both these regions, Arabs, Indians and Chinese regularly visited. Some became domiciled in ports such as Melaka [formerly Malacca], the location of one of the earliest local sultanates (c.1402 A.D.) and a focal point for both local and international trade.

From the early sixteenth century the area was increasingly penetrated by European trading interests, first the Portuguese (from 1511), then the Dutch East India Company [VOC] (1602) in competition with the English East India Company [EIC] (1600) for the trade in pepper and various spices. By the late eighteenth century the VOC was dominant in the Indonesian region while the EIC acquired bases in Malaysia, beginning with Penang (1786), Singapore (1819) and Melaka (1824). These were major staging posts in the growing trade with China and also served as footholds from which to expand British control into the Malay Peninsula (from 1870), and northern Borneo (Sarawak from 1841 and North Borneo from 1882). Over these centuries there was an increasing inflow of migrants from China attracted by the opportunities in trade and as a wage labor force for the burgeoning production of export commodities such as gold and tin. The indigenous people also engaged in commercial production (rice, tin), but remained basically within a subsistence economy and were reluctant to offer themselves as permanent wage labor. Overall, production in the premodern economy was relatively small in volume and technologically undeveloped. The capitalist sector, already foreign dominated, was still in its infancy (Drabble, 2000).

The Transition to Capitalist Production

The nineteenth century witnessed an enormous expansion in world trade which, between 1815 and 1914, grew on average at 4-5 percent a year compared to 1 percent in the preceding hundred years. The driving force came from the Industrial Revolution in the West, which saw the innovation of large scale factory production of manufactured goods made possible by technological advances, accompanied by more efficient communications (e.g., railways, cars, trucks, steamships, international canals [Suez 1869, Panama 1914], telegraphs) which speeded up and greatly lowered the cost of long distance trade. Industrializing countries required ever-larger supplies of raw materials as well as foodstuffs for their growing populations. Regions such as Malaysia with ample supplies of virgin land and relative proximity to trade routes were well placed to respond to this demand. What was lacking was an adequate supply of capital and wage labor; in both respects the deficiency was made up largely from foreign sources.

As expanding British power brought stability to the region, Chinese migrants started to arrive in large numbers with Singapore quickly becoming the major point of entry. Most arrived with few funds but those able to amass profits from trade (including opium) used these to finance ventures in agriculture and mining, especially in the neighboring Malay Peninsula. Crops such as pepper, gambier, tapioca, sugar and coffee were produced for export to markets in Asia (e.g. China), and later to the West after 1850 when Britain moved toward a policy of free trade. These crops were labor, not capital, intensive and in some cases quickly exhausted soil fertility and required periodic movement to virgin land (Jackson, 1968).

Tin

Besides ample land, the Malay Peninsula also contained substantial deposits of tin. International demand for tin rose progressively in the nineteenth century due to the discovery of a more efficient method for producing tinplate (for canned food). At the same time deposits in major suppliers such as Cornwall (England) had been largely worked out, thus opening an opportunity for new producers. Traditionally tin had been mined by Malays from ore deposits close to the surface. Difficulties with flooding limited the depth of mining; furthermore, their activity was seasonal. From the 1840s the discovery of large deposits in the Peninsular states of Perak and Selangor attracted large numbers of Chinese migrants who dominated the industry in the nineteenth century, bringing new technology which improved ore recovery and water control, facilitating mining to greater depths. By the end of the century Malayan tin exports (at approximately 52,000 metric tons) supplied just over half the world output. Singapore was a major center for smelting (refining) the ore into ingots. Tin mining also attracted attention from European, mainly British, investors who introduced further new technology, such as high-pressure hoses to wash out the ore, the steam pump and, from 1912, the bucket dredge floating in its own pond, which could operate to even deeper levels. These innovations required substantial capital, for which the chosen vehicle was the public joint stock company, usually registered in Britain. Since no major new ore deposits were found, the emphasis was on increased efficiency in production. European operators, again employing mostly Chinese wage labor, enjoyed a technical advantage here and by 1929 accounted for 61 percent of Malayan output (Wong Lin Ken, 1965; Yip Yat Hoong, 1969).

Rubber

While tin mining brought considerable prosperity, it was a non-renewable resource. In the early twentieth century it was the agricultural sector which came to the forefront. The crops mentioned previously had boomed briefly but were hard pressed to survive severe price swings and the pests and diseases that were endemic in tropical agriculture. The cultivation of rubber-yielding trees became commercially attractive because rubber was a raw material for new industries in the West, notably tires for the booming automobile industry, especially in the U.S. Previously rubber had come from scattered trees growing wild in the jungles of South America, with production only expandable at rising marginal costs. Cultivation on estates generated economies of scale. In the 1870s the British government organized the transport of specimens of the tree Hevea brasiliensis from Brazil to colonies in the East, notably Ceylon and Singapore. There the trees flourished and, after initial hesitancy over the five years needed for the trees to reach productive age, planters, Chinese and European alike, rushed to invest. The boom reached vast proportions as the rubber price reached record heights in 1910 (see Fig.1). Average values fell thereafter but investors were heavily committed and planting continued (also in the neighboring Netherlands Indies [Indonesia]). By 1921 the rubber acreage in Malaysia (mostly in the Peninsula) had reached 935,000 hectares (about 1.34 million acres), or some 55 percent of the total in South and Southeast Asia, while output stood at 50 percent of world production.

Fig.1. Average London Rubber Prices, 1905-41 (current values)

As a result of this boom, rubber quickly surpassed tin as Malaysia’s main export product, a position that it was to hold until 1980. A distinctive feature of the industry was that the technology of extracting the rubber latex from the trees (called tapping) by an incision with a special knife, and its manufacture into various grades of sheet known as raw or plantation rubber, was easily adopted by a wide range of producers. The larger estates, mainly British-owned, were financed (as in the case of tin mining) through British-registered public joint stock companies. For example, between 1903 and 1912 some 260 companies were registered to operate in Malaya. Chinese planters for the most part preferred to form private partnerships to operate estates which were on average smaller. Finally, there were the smallholdings (under 40 hectares or 100 acres) of which those at the lower end of the range (2 hectares/5 acres or less) were predominantly owned by indigenous Malays who found growing and selling rubber more profitable than subsistence (rice) farming. These smallholders did not need much capital since their equipment was rudimentary and labor came either from within their family or in the form of share-tappers who received a proportion (say 50 percent) of the output. In Malaya in 1921 roughly 60 percent of the planted area was estates (75 percent European-owned) and 40 percent smallholdings (Drabble, 1991, 1).

The workforce for the estates consisted of migrants. British estates depended mainly on migrants from India, brought in under government auspices with fares paid and accommodation provided. Chinese business looked to the “coolie trade” from South China, with passage expenses advanced which migrants subsequently had to pay off. The flow of immigration was directly related to economic conditions in Malaysia. For example, arrivals of Indians averaged 61,000 a year between 1900 and 1920. Substantial numbers also came from the Netherlands Indies.

Thus far, most capitalist enterprise was located in Malaya. Sarawak and British North Borneo had a similar range of mining and agricultural industries in the nineteenth century. However, their location slightly off the main trade route (see map) and their rugged interior terrain, which made transport costly, rendered them less attractive to foreign investment. The discovery of oil by a subsidiary of Royal Dutch-Shell, with production starting in 1907, gave Sarawak a more prominent place in the export economy. As in Malaya, the labor force came largely from immigrants from China and, to a lesser extent, Java.

The growth in production for export in Malaysia was facilitated by development of an infrastructure of roads, railways, ports (e.g. Penang, Singapore) and telecommunications under the auspices of the colonial governments, though again this was considerably more advanced in Malaya (Amarjit Kaur, 1985, 1998).

The Creation of a Plural Society

By the 1920s the large inflows of migrants had created a multi-ethnic population of the type which the British scholar, J.S. Furnivall (1948) described as a plural society in which the different racial groups live side by side under a single political administration but, apart from economic transactions, do not interact with each other either socially or culturally. Though the original intention of many migrants was to come for only a limited period (say 3-5 years), save money and then return home, a growing number were staying longer, having children and becoming permanently domiciled in Malaysia. The economic developments described in the previous section were unevenly located, for example, in Malaya the bulk of the tin mines and rubber estates were located along the west coast of the Peninsula. In the boom-times, such was the size of the immigrant inflows that in certain areas they far outnumbered the indigenous Malays. In social and cultural terms Indians and Chinese recreated the institutions, hierarchies and linguistic usage of their countries of origin. This was particularly so in the case of the Chinese. Not only did they predominate in major commercial centers such as Penang, Singapore, and Kuching, but they controlled local trade in the smaller towns and villages through a network of small shops (kedai) and dealerships that served as a pipeline along which export goods like rubber went out and in return imported manufactured goods were brought in for sale. In addition Chinese owned considerable mining and agricultural land. This created a distribution of wealth and division of labor in which economic power and function were directly related to race. In this situation lay the seeds of growing discontent among bumiputera that they were losing their ancestral inheritance (land) and becoming economically marginalized. As long as British colonial rule continued the various ethnic groups looked primarily to government to protect their interests and maintain peaceable relations. An example of colonial paternalism was the designation from 1913 of certain lands in Malaya as Malay Reservations in which only indigenous people could own and deal in property (Lim Teck Ghee, 1977).

Benefits and Drawbacks of an Export Economy

Prior to World War II the international economy was divided very broadly into the northern and southern hemispheres. The former contained most of the industrialized manufacturing countries and the latter the principal sources of foodstuffs and raw materials. The commodity exchange between the spheres was known as the Old International Division of Labor (OIDL). Malaysia’s place in this system was as a leading exporter of raw materials (tin, rubber, timber, oil, etc.) and an importer of manufactures. Since relatively little processing was done on the former prior to export, most of the value-added component in the final product accrued to foreign manufacturers, e.g. rubber tire manufacturers in the U.S.

It is clear from this situation that Malaysia depended heavily on earnings from exports of primary commodities to maintain the standard of living. Rice had to be imported (mainly from Burma and Thailand) because domestic production supplied on average only 40 percent of total needs. As long as export prices were high (for example during the rubber boom previously mentioned), the volume of imports remained ample. Profits to capital and good smallholder incomes supported an expanding economy. There are no official data for Malaysian national income prior to World War II, but some comparative estimates are given in Table 1 which indicate that Malayan Gross Domestic Product (GDP) per person was easily the leader in the Southeast and East Asian region by the late 1920s.

Table 1
GDP per Capita: Selected Asian Countries, 1900-1990
(in 1985 international dollars)

Country 1900 1929 1950 1973 1990
Malaya/Malaysia [1] 600 [2] 1910 1828 3088 5775
Singapore - - 2276 [3] 5372 14441
Burma 523 651 304 446 562
Thailand 594 623 652 1559 3694
Indonesia 617 1009 727 1253 2118
Philippines 735 1106 943 1629 1934
South Korea 568 945 565 1782 6012
Japan 724 1192 1208 7133 13197

Notes: [1] figures refer to Malaya up to 1973; [2] guesstimate; [3] figure is for 1960.

Source: van der Eng (1994).

However, the international economy was subject to strong fluctuations. The levels of activity in the industrialized countries, especially the U.S., were the determining factors here. Almost immediately following World War I there was a depression from 1919-22. Strong growth in the mid and late-1920s was followed by the Great Depression (1929-32). As industrial output slumped, primary product prices fell even more heavily. For example, in 1932 rubber sold on the London market for about one one-hundredth of the peak price in 1910 (Fig.1). The effects on export earnings were very severe; in Malaysia’s case between 1929 and 1932 these dropped by 73 percent (Malaya), 60 percent (Sarawak) and 50 percent (North Borneo). The aggregate value of imports fell on average by 60 percent. Estates dismissed labor and since there was no social security, many workers had to return to their country of origin. Smallholder incomes dropped heavily and many who had taken out high-interest secured loans in more prosperous times were unable to service these and faced the loss of their land.

The colonial government attempted to counteract this vulnerability to economic swings by instituting schemes to restore commodity prices to profitable levels. For the rubber industry this involved two periods of mandatory restriction of exports to reduce world stocks and thus exert upward pressure on market prices. The first of these (named the Stevenson scheme after its originator) lasted from 1 October 1922 to 1 November 1928, and the second (the International Rubber Regulation Agreement) from 1 June 1934 to 1941. Tin exports were similarly restricted from 1931 to 1941. While these measures did succeed in raising world prices, the inequitable treatment of Asian as against European producers in both industries has been debated. The protective policy has also been blamed for “freezing” the structure of the Malaysian economy and hindering further development, for instance into manufacturing industry (Lim Teck Ghee, 1977; Drabble, 1991).

Why No Industrialization?

Malaysia had very few secondary industries before World War II. The little that did appear was connected mainly with the processing of the primary exports, rubber and tin, together with limited production of manufactured goods for the domestic market (e.g. bread, biscuits, beverages, cigarettes and various building materials). Much of this activity was Chinese-owned and located in Singapore (Huff, 1994). Among the reasons advanced are: the small size of the domestic market, the relatively high wage levels in Singapore which made products uncompetitive as exports, and a culture dominated by British trading firms which favored commerce over industry. Overshadowing all these was the dominance of primary production. When commodity prices were high, there was little incentive for investors, European or Asian, to move into other sectors. Conversely, when these prices fell capital and credit dried up, while incomes contracted, thus lessening effective demand for manufactures. W.G. Huff (2002) has argued that, prior to World War II, “there was, in fact, never a good time to embark on industrialization in Malaya.”

War Time 1942-45: The Japanese Occupation

During the Japanese occupation years of World War II, the export of primary products was limited to the relatively small amounts required for the Japanese economy. This led to the abandonment of large areas of rubber and the closure of many mines, the latter progressively affected by a shortage of spare parts for machinery. Businesses, especially those Chinese-owned, were taken over and reassigned to Japanese interests. Rice imports fell heavily and thus the population devoted a large part of their efforts to producing enough food to stay alive. Large numbers of laborers (many of whom died) were conscripted to work on military projects such as construction of the Thai-Burma railroad. Overall the war period saw the dislocation of the export economy, widespread destruction of the infrastructure (roads, bridges etc.) and a decline in standards of public health. It also saw a rise in inter-ethnic tensions due to the harsh treatment meted out by the Japanese to some groups, notably the Chinese, compared to a more favorable attitude towards the indigenous peoples among whom (Malays particularly) there was a growing sense of ethnic nationalism (Drabble, 2000).

Postwar Reconstruction and Independence

The returning British colonial rulers had two priorities after 1945: to rebuild the export economy as it had been under the OIDL (see above), and to rationalize the fragmented administrative structure (see General Background). The first was accomplished by the late 1940s, with estates and mines refurbished, production restarted once the labor force had been brought back, and adequate rice imports regained. The second was a complex and delicate political process which resulted in the formation of the Federation of Malaya (1948), from which Singapore, with its predominantly Chinese population (about 75 percent), was kept separate. In Borneo in 1946 the state of Sarawak, which had been a private kingdom of the English Brooke family (the so-called “White Rajahs”) since 1841, and North Borneo, administered by the British North Borneo Company from 1881, were both transferred to direct rule from Britain. However, independence was clearly on the horizon and in Malaya tensions continued with the guerrilla campaign (called the “Emergency”) waged by the Malayan Communist Party (membership largely Chinese) from 1948 to 1960 to force out the British and set up a Malayan Peoples’ Republic. This failed, and in 1957 the Malayan Federation gained independence (Merdeka) under a “bargain” by which the Malays would hold political paramountcy while others, notably Chinese and Indians, were given citizenship and the freedom to pursue their economic interests. The bargain was institutionalized as the Alliance, later renamed the National Front (Barisan Nasional), which remains the dominant political grouping. In 1963 the Federation of Malaysia was formed, in which the bumiputera population was sufficient in total to offset the high proportion of Chinese arising from the short-lived inclusion of Singapore (Andaya and Andaya, 2001).

Towards the Formation of a National Economy

Postwar, two long-term problems came to the forefront. These were (a) the political fragmentation (see above) which had long prevented a centralized approach to economic development, coupled with control from Britain which gave primacy to imperial as opposed to local interests, and (b) excessive dependence on a small range of primary products (notably rubber and tin) which prewar experience had shown to be an unstable basis for the economy.

The first of these was addressed partly through the political rearrangements outlined in the previous section, with the economic aspects buttressed by a report from a mission to Malaya from the International Bank for Reconstruction and Development (IBRD) in 1954. The report argued that Malaya “is now a distinct national economy.” A further mission in 1963 urged “closer economic cooperation between the prospective Malaysia[n] territories” (cited in Drabble, 2000, 161, 176). The rationale for the Federation was that Singapore would serve as the initial center of industrialization, with Malaya, Sabah and Sarawak following at a pace determined by local conditions.

The second problem centered on economic diversification. The IBRD reports just noted advocated building up a range of secondary industries to meet a larger portion of the domestic demand for manufactures, i.e. import-substitution industrialization (ISI). In the interim dependence on primary products would perforce continue.

The Adoption of Planning

In the postwar world the development plan (usually a Five-Year Plan) was widely adopted by Less-Developed Countries (LDCs) to set directions, targets and estimated costs. Each of the Malaysian territories had plans during the 1950s. Malaya was the first to get industrialization of the ISI type under way. The Pioneer Industries Ordinance (1958) offered inducements such as five-year tax holidays, guarantees (to foreign investors) of freedom to repatriate profits and capital etc. A modest degree of tariff protection was granted. The main types of goods produced were consumer items such as batteries, paints, tires, and pharmaceuticals. Just over half the capital invested came from abroad, with neighboring Singapore in the lead. When Singapore exited the federation in 1965, Malaysia’s fledgling industrialization plans assumed greater significance although foreign investors complained of stifling bureaucracy retarding their projects.

Primary production, however, was still the major economic activity and here the problem was rejuvenation of the leading industries, rubber in particular. New capital investment in rubber had slowed since the 1920s, and the bulk of the existing trees were nearing the end of their economic life. The best prospect for rejuvenation lay in cutting down the old trees and replanting the land with new varieties capable of raising output per acre/hectare by a factor of three or four. However, the new trees required seven years to mature. Corporately owned estates could replant progressively, but smallholders could not face such a prolonged loss of income without support. To encourage replanting, the government offered grants to owners, financed by a special duty on rubber exports. The process was a lengthy one and it was the 1980s before replanting was substantially complete. Moreover, many estates elected to switch over to a new crop, oil palms (a product used primarily in foodstuffs), which offered quicker returns. Progress was swift and by the 1960s Malaysia was supplying 20 percent of world demand for this commodity.

Another priority at this time consisted of programs to improve the standard of living of the indigenous peoples, most of whom lived in the rural areas. The main instrument was land development, with schemes to open up large areas (say 100,000 acres or 40,000 hectares) which were then subdivided into 10 acre/4 hectare blocks for distribution to small farmers from overcrowded regions who were either short of land or had none at all. Financial assistance (repayable) was provided to cover housing and living costs until the holdings became productive. Rubber and oil palms were the main commercial crops planted. Steps were also taken to increase the domestic production of rice to lessen the historical dependence on imports.

In the primary sector Malaysia’s range of products was increased from the 1960s by a rapid increase in the export of hardwood timber, mostly in the form of (unprocessed) saw-logs. The markets were mainly in East Asia and Australasia. Here the largely untapped resources of Sabah and Sarawak came to the fore, but the rapid rate of exploitation led by the late twentieth century to damaging effects on both the environment (extensive deforestation, soil-loss, silting, changed weather patterns), and the traditional hunter-gatherer way of life of forest-dwellers (decrease in wild-life, fish, etc.). Other development projects such as the building of dams for hydroelectric power also had adverse consequences in all these respects (Amarjit Kaur, 1998; Drabble, 2000; Hong, 1987).

A further major addition to primary exports came from the discovery of large deposits of oil and natural gas in East Malaysia, and off the east coast of the Peninsula, from the 1970s. Gas was exported in liquefied form (LNG), and was also used domestically as a substitute for oil. At peak values in 1982, petroleum and LNG provided around 29 percent of Malaysian export earnings but had declined to 18 percent by 1988.

Industrialization and the New Economic Policy 1970-90

The program of industrialization aimed primarily at the domestic market (ISI) lost impetus in the late 1960s as foreign investors, particularly from Britain, switched attention elsewhere. An important factor here was the outbreak of civil disturbances in May 1969, following a federal election in which political parties in the Peninsula (largely non-bumiputera in membership) opposed to the Alliance did unexpectedly well. This brought to a head tensions which had been rising during the 1960s over issues such as the use of the national language, Malay (Bahasa Malaysia), as the main instructional medium in education. There was also discontent among Peninsular Malays that the economic fruits since independence had gone mostly to non-Malays, notably the Chinese. The outcome was severe inter-ethnic rioting centered in the federal capital, Kuala Lumpur, which led to the suspension of parliamentary government for two years and the implementation of the New Economic Policy (NEP).

The main aim of the NEP was a restructuring of the Malaysian economy over two decades, 1970-90, with the following objectives:

  1. To redistribute corporate equity so that the bumiputera share would rise from around 2 percent to 30 percent. The share of other Malaysians would increase marginally from 35 to 40 percent, while that of foreigners would fall from 63 percent to 30 percent.
  2. To eliminate the close link between race and economic function (a legacy of the colonial era) and restructure employment so that the bumiputera share in each sector would reflect more accurately their proportion of the total population (roughly 55 percent). In 1970 this group held about two-thirds of jobs in the primary sector, where incomes were generally lowest, but only 30 percent in the secondary sector. In high-income middle-class occupations (e.g. professions, management) the share was only 13 percent.
  3. To eradicate poverty irrespective of race. In 1970 just under half of all households in Peninsular Malaysia had incomes below the official poverty line. Malays accounted for about 75 percent of these.

The principle underlying these aims was that the redistribution would not result in any one group losing in absolute terms. Rather it would be achieved through the process of economic growth, i.e. the economy would get bigger (more investment, more jobs, etc.). While the primary sector would continue to receive developmental aid under the successive Five Year Plans, the main emphasis was a switch to export-oriented industrialization (EOI), with Malaysia seeking a share in global markets for manufactured goods. Free Trade Zones (FTZs) were set up in places such as Penang, where production was carried on with the undertaking that the output would be exported. Firms locating there received concessions such as duty-free imports of raw materials and capital goods, as well as tax concessions, aimed primarily at foreign investors who were also attracted by Malaysia’s good facilities, relatively low wages and docile trade unions. A range of industries grew up: textiles, rubber and food products, chemicals, telecommunications equipment, electrical and electronic machinery/appliances, car assembly and some heavy industries such as iron and steel. As with ISI, much of the capital and technology was foreign; for example the Japanese firm Mitsubishi was a partner in a venture to set up a plant to assemble a Malaysian national car, the Proton, from mostly imported components (Drabble, 2000).

Results of the NEP

Table 2 below shows the outcome of the NEP in the categories outlined above.

Table 2
Restructuring under the NEP, 1970-90

                                              1970            1990
(a) Wealth ownership (%)
    Bumiputera                                 2.0            20.3
    Other Malaysians                          34.6            54.6
    Foreigners                                63.4            25.1
(b) Employment (% of total workers in each sector)
    Primary sector (agriculture, mineral extraction, forest products and fishing)
        Bumiputera                            67.6 [61.0]*    71.2 [36.7]*
        Others                                32.4            28.8
    Secondary sector (manufacturing and construction)
        Bumiputera                            30.8 [14.6]*    48.0 [26.3]*
        Others                                69.2            52.0
    Tertiary sector (services)
        Bumiputera                            37.9 [24.4]*    51.0 [36.9]*
        Others                                62.1            49.0

Note: [ ]* is the proportion of the ethnic group thus employed. The “others” category has not been disaggregated by race to avoid undue complexity.
Source: Drabble, 2000, Table 10.9.

Section (a) shows that, overall, foreign ownership fell substantially more than planned, while that of “Other Malaysians” rose well above the target. Bumiputera ownership appears to have stopped well short of the 30 percent mark. However, other evidence suggests that in certain sectors such as agriculture/mining (35.7%) and banking/insurance (49.7%) bumiputera ownership of shares in publicly listed companies had already attained a level well beyond the target. Section (b) indicates that while bumiputera employment share in primary production increased slightly (due mainly to the land schemes), as a proportion of that ethnic group it declined sharply, while rising markedly in both the secondary and tertiary sectors. In middle class employment the share rose to 27 percent.

As regards the proportion of households below the poverty line, in broad terms the incidence in Malaysia fell from approximately 49 percent in 1970 to 17 percent in 1990, but with large regional variations between the Peninsula (15%), Sarawak (21%) and Sabah (34%) (Drabble, 2000, Table 13.5). All ethnic groups registered big falls, but on average the non-bumiputera still enjoyed the lowest incidence of poverty. By 2002 the overall level had fallen to only 4 percent.

The restructuring of the Malaysian economy under the NEP is very clear when we look at the changes in composition of the Gross Domestic Product (GDP) in Table 3 below.

Table 3
Structural Change in GDP 1970-90 (% shares)

Year Primary Secondary Tertiary
1970 44.3 18.3 37.4
1990 28.1 30.2 41.7

Source: Malaysian Government, 1991, Table 3-2.

Over these two decades Malaysia accomplished a transition from a primary product-dependent economy to one in which manufacturing industry had emerged as the leading growth sector. Rubber and tin, which accounted for 54.3 percent of Malaysian export value in 1970, declined sharply in relative terms to a mere 4.9 percent in 1990 (Crouch, 1996, 222).

Factors in the structural shift

The post-independence state played a leading role in the transformation. The transition from British rule was smooth. Apart from the disturbances in 1969, the government maintained firm control over the administrative machinery. Malaysia’s Five Year Development Plans were a model for the developing world. Foreign capital was accorded a central role, though subject to the requirements of the NEP. At the same time these requirements discouraged domestic investors, the Chinese especially, to some extent (Jesudason, 1989).

Development was helped by major improvements in education and health. Enrolments at the primary school level reached approximately 90 percent by the 1970s, and at the secondary level 59 percent of potential by 1987. Increased female enrolments, up from 39 percent to 58 percent of potential from 1975 to 1991, were a notable feature, as was the participation of women in the workforce which rose to just over 45 percent of total employment by 1986/7. In the tertiary sector the number of universities increased from one to seven between 1969 and 1990 and numerous technical and vocational colleges opened. Bumiputera enrolments soared as a result of the NEP policy of redistribution (which included ethnic quotas and government scholarships). However, tertiary enrolments totaled only 7 percent of the age group by 1987. There was an “educational-occupation mismatch,” with graduates (bumiputera especially) preferring jobs in government, and consequent shortfalls against strong demand for engineers, research scientists, technicians and the like. Better living conditions (more homes with piped water and more rural clinics, for example) led to substantial falls in infant mortality, improved public health and longer life-expectancy, especially in Peninsular Malaysia (Drabble, 2000, 248, 284-6).

The quality of national leadership was a crucial factor. This was particularly so during the NEP. The leading figure here was Dr Mahathir Mohamad, Malaysian Prime Minister from 1981 to 2003. While supporting the NEP aim through positive discrimination to give bumiputera an economic stake in the country commensurate with their indigenous status and share in the population, he nevertheless emphasized that this should ultimately lead them to a more modern outlook and ability to compete with the other races in the country, the Chinese especially (see Khoo Boo Teik, 1995). There were, however, some paradoxes here. Mahathir was a meritocrat in principle, but in practice this period saw the spread of “money politics” (another expression for patronage) in Malaysia. In common with many other countries Malaysia embarked on a policy of privatization of public assets, notably in transportation (e.g. Malaysian Airlines), utilities (e.g. electricity supply) and communications (e.g. television). This was done not through an open process of competitive tendering but rather by a “nebulous ‘first come, first served’ principle” (Jomo, 1995, 8) which saw ownership pass directly to politically well-connected businessmen, mainly bumiputera, at relatively low valuations.

The New Development Policy

Positive action to promote bumiputera interests did not end with the NEP in 1990. It was followed in 1991 by the New Development Policy (NDP), which emphasized assistance only to “Bumiputera with potential, commitment and good track records” (Malaysian Government, 1991, 17) rather than the previous blanket measures to redistribute wealth and employment. In turn the NDP was part of a longer-term program known as Vision 2020. The aim here is to turn Malaysia into a fully industrialized country and to quadruple per capita income by the year 2020. This will require the country to continue ascending the technological “ladder” from low- to high-tech types of industrial production, with a corresponding increase in the intensity of capital investment and greater retention of value-added (i.e. the value added to raw materials in the production process) by Malaysian producers.
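The arithmetic linking the quadrupling target to the growth rate cited earlier can be checked roughly. The short sketch below (in Python, not drawn from the source) assumes a 30-year horizon from 1990 to 2020 and population growth of about 2.3 percent a year, a hypothetical round figure: quadrupling per capita income then requires per-capita growth of roughly 4.7 percent a year, or total GDP growth of roughly 7 percent a year.

    # Rough consistency check, not from the source.
    # Quadrupling real GDP per capita over 30 years (1990-2020) implies a
    # compound per-capita growth rate of 4**(1/30) - 1, about 4.7 percent a year.
    per_capita_rate = 4 ** (1 / 30) - 1        # ~0.047
    population_growth = 0.023                  # assumed round figure, not from the source
    gdp_growth = (1 + per_capita_rate) * (1 + population_growth) - 1
    print(f"per capita: {per_capita_rate:.3f}, total GDP: {gdp_growth:.3f}")
    # prints "per capita: 0.047, total GDP: 0.071", i.e. about 7 percent a year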

The Malaysian economy continued to boom at historically unprecedented rates of 8-9 percent a year for much of the 1990s (see next section). There was heavy expenditure on infrastructure, for example extensive building in Kuala Lumpur such as the Petronas Twin Towers (then the tallest buildings in the world). The volume of manufactured exports, notably electronic goods and electronic components, increased rapidly.

Asian Financial Crisis, 1997-98

The Asian financial crisis originated in heavy international currency speculation leading to major slumps in exchange rates beginning with the Thai baht in May 1997, spreading rapidly throughout East and Southeast Asia and severely affecting the banking and finance sectors. The Malaysian ringgit exchange rate fell from RM 2.42 to RM 4.88 to the U.S. dollar by January 1998. There was a heavy outflow of foreign capital. To counter the crisis the International Monetary Fund (IMF) recommended austerity changes to fiscal and monetary policies. Some countries (Thailand, South Korea, and Indonesia) reluctantly adopted these. The Malaysian government refused and implemented independent measures: the ringgit became non-convertible externally and was pegged at RM 3.80 to the US dollar, while foreign capital repatriated within twelve months of entry was subject to substantial levies. Despite international criticism these actions stabilized the domestic situation quite effectively, restoring net growth (see next section), especially compared to neighboring Indonesia.

Rates of Economic Growth

Malaysia’s economic growth in comparative perspective from 1960-90 is set out in Table 4 below.

Table 4
Asia-Pacific Region: Growth of Real GDP (annual average percent)

1960-69 1971-80 1981-89
Japan 10.9 5.0 4.0
Asian “Tigers”
Hong Kong 10.0 9.5 7.2
South Korea 8.5 8.7 9.3
Singapore 8.9 9.0 6.9
Taiwan 11.6 9.7 8.1
ASEAN-4
Indonesia 3.5 7.9 5.2
Malaysia 6.5 8.0 5.4
Philippines 4.9 6.2 1.7
Thailand 8.3 9.9 7.1

Source: Drabble, 2000, Table 10.2; figures for Japan are for 1960-70, 1971-80, and 1981-90.

The data show that Japan, the dominant Asian economy for much of this period, had progressively slowed by the 1990s (see below). The four leading Newly Industrialized Countries (the Asian “Tigers,” as they were called) followed EOI strategies and achieved very high rates of growth. Among the four ASEAN (Association of Southeast Asian Nations, formed 1967) members, again all adopting EOI policies, Thailand stood out, followed closely by Malaysia. Reference to Table 1 above shows that by 1990 Malaysia, while still among the leaders in GDP per head, had slipped relative to the “Tigers.”

These economies, joined by China, continued growth into the 1990s at such high rates (Malaysia averaged around 8 percent a year) that the term “Asian miracle” became a common method of description. The exception was Japan which encountered major problems with structural change and an over-extended banking system. Post-crisis the countries of the region have started recovery but at differing rates. The Malaysian economy contracted by nearly 7 percent in 1998, recovered to 8 percent growth in 2000, slipped again to under 1 percent in 2001 and has since stabilized at between 4 and 5 percent growth in 2002-04.

The new Malaysian Prime Minister (since October 2003), Abdullah Ahmad Badawi, plans to shift the emphasis in development to smaller, less-costly infrastructure projects and to break the previous dominance of “money politics.” Foreign direct investment will still be sought but priority will be given to nurturing the domestic manufacturing sector.

Further improvements in education will remain a key factor (Far Eastern Economic Review, Nov.6, 2003).

Overview

Malaysia owes its successful historical economic record to a number of factors. Geographically it lies close to major world trade routes bringing early exposure to the international economy. The sparse indigenous population and labor force has been supplemented by immigrants, mainly from neighboring Asian countries with many becoming permanently domiciled. The economy has always been exceptionally open to external influences such as globalization. Foreign capital has played a major role throughout. Governments, colonial and national, have aimed at managing the structure of the economy while maintaining inter-ethnic stability. Since about 1960 the economy has benefited from extensive restructuring with sustained growth of exports from both the primary and secondary sectors, thus gaining a double impetus.

However, on a less positive assessment, the country has so far exchanged dependence on a limited range of primary products (e.g. tin and rubber) for dependence on an equally limited range of manufactured goods, notably electronics and electronic components (59 percent of exports in 2002). These industries are facing increasing competition from lower-wage countries, especially India and China. Within Malaysia the distribution of secondary industry is unbalanced, currently heavily favoring the Peninsula. Sabah and Sarawak are still heavily dependent on primary products (timber, oil, LNG). There is an urgent need to continue the search for new industries in which Malaysia can enjoy a comparative advantage in world markets, not least because inter-ethnic harmony depends heavily on the continuance of economic prosperity.

Select Bibliography

General Studies

Amarjit Kaur. Economic Change in East Malaysia: Sabah and Sarawak since 1850. London: Macmillan, 1998.

Andaya, L.Y. and Andaya, B.W. A History of Malaysia, second edition. Basingstoke: Palgrave, 2001.

Crouch, Harold. Government and Society in Malaysia. Sydney: Allen and Unwin, 1996.

Drabble, J.H. An Economic History of Malaysia, c.1800-1990: The Transition to Modern Economic Growth. Basingstoke: Macmillan and New York: St. Martin’s Press, 2000.

Furnivall, J.S. Colonial Policy and Practice: A Comparative Study of Burma and Netherlands India. Cambridge: Cambridge University Press, 1948.

Huff, W.G. The Economic Growth of Singapore: Trade and Development in the Twentieth Century. Cambridge: Cambridge University Press, 1994.

Jomo, K.S. Growth and Structural Change in the Malaysian Economy. London: Macmillan, 1990.

Industries/Transport

Alavi, Rokiah. Industrialization in Malaysia: Import Substitution and Infant Industry Performance. London: Routledge, 1996.

Amarjit Kaur. Bridge and Barrier: Transport and Communications in Colonial Malaya 1870-1957. Kuala Lumpur: Oxford University Press, 1985.

Drabble, J.H. Rubber in Malaya 1876-1922: The Genesis of the Industry. Kuala Lumpur: Oxford University Press, 1973.

Drabble, J.H. Malayan Rubber: The Interwar Years. London: Macmillan, 1991.

Huff, W.G. “Boom or Bust Commodities and Industrialization in Pre-World War II Malaya.” Journal of Economic History 62, no. 4 (2002): 1074-1115.

Jackson, J.C. Planters and Speculators: European and Chinese Agricultural Enterprise in Malaya 1786-1921. Kuala Lumpur: University of Malaya Press, 1968.

Lim Teck Ghee. Peasants and Their Agricultural Economy in Colonial Malaya, 1874-1941. Kuala Lumpur: Oxford University Press, 1977.

Wong Lin Ken. The Malayan Tin Industry to 1914. Tucson: University of Arizona Press, 1965.

Yip Yat Hoong. The Development of the Tin Mining Industry of Malaya. Kuala Lumpur: University of Malaya Press, 1969.

New Economic Policy

Jesudason, J.V. Ethnicity and the Economy: The State, Chinese Business and Multinationals in Malaysia. Kuala Lumpur: Oxford University Press, 1989.

Jomo, K.S., editor. Privatizing Malaysia: Rents, Rhetoric, Realities. Boulder, CO: Westview Press, 1995.

Khoo Boo Teik. Paradoxes of Mahathirism: An Intellectual Biography of Mahathir Mohamad. Kuala Lumpur: Oxford University Press, 1995.

Vincent, J.R., R.M. Ali and Associates. Environment and Development in a Resource-Rich Economy: Malaysia under the New Economic Policy. Cambridge, MA: Harvard University Press, 1997.

Ethnic Communities

Chew, Daniel. Chinese Pioneers on the Sarawak Frontier, 1841-1941. Kuala Lumpur: Oxford University Press, 1990.

Gullick, J.M. Malay Society in the Late Nineteenth Century. Kuala Lumpur: Oxford University Press, 1989.

Hong, Evelyne. Natives of Sarawak: Survival in Borneo’s Vanishing Forests. Penang: Institut Masyarakat Malaysia, 1987.

Shamsul, A.B. From British to Bumiputera Rule. Singapore: Institute of Southeast Asian Studies, 1986.

Economic Growth

Far Eastern Economic Review. Hong Kong. An excellent weekly overview of current regional affairs.

Malaysian Government. The Second Outline Perspective Plan, 1991-2000. Kuala Lumpur: Government Printer, 1991.

Van der Eng, Pierre. “Assessing Economic Growth and the Standard of Living in Asia 1870-1990.” Milan, Eleventh International Economic History Congress, 1994.

Citation: Drabble, John. “The Economic History of Malaysia”. EH.Net Encyclopedia, edited by Robert Whaples. July 31, 2004. URL http://eh.net/encyclopedia/economic-history-of-malaysia/

Labor Unions in the United States

Gerald Friedman, University of Massachusetts at Amherst

Unions and Collective Action

In capitalist labor markets, which developed in the nineteenth century in the United States and Western Europe, workers exchange their time and effort for wages. But even while laboring under the supervision of others, wage earners have never been slaves, because they have recourse against abuse. They can quit to seek better employment. Or they are free to join with others to take collective action, forming political movements or labor unions. By the end of the nineteenth century, labor unions and labor-oriented political parties had become major forces influencing wages and working conditions. This article explores the nature and development of labor unions in the United States. It reviews the growth and recent decline of the American labor movement and makes comparisons with the experience of foreign labor unions to clarify particular aspects of the history of labor unions in the United States.

Unions and the Free-Rider Problem

Quitting, exit, is straightforward, a simple act for individuals unhappy with their employment. By contrast, collective action, such as forming a labor union, is always difficult because it requires that individuals commit themselves to produce “public goods” enjoyed by all, including those who “free ride” rather than contribute to the group effort. If the union succeeds, free riders receive the same benefits as do activists; but if it fails, the activists suffer while those who remained outside lose nothing. Because individualist logic leads workers to “free ride,” unions cannot grow by appealing to individual self-interest (Hirschman, 1970; 1982; Olson, 1966; Gamson, 1975).
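A stylized numerical example (not from the source, with hypothetical payoffs) makes the logic concrete. Suppose a successful union raises every worker's pay by some fixed benefit, contributing costs the individual dues and strike risk, and no single worker's choice changes whether the union succeeds. Then, whichever way the collective effort turns out, the individual is better off free riding, as the Python sketch below shows.

    # Stylized free-rider payoffs with hypothetical numbers (not from the source).
    # A successful union raises everyone's pay by `benefit`; contributing costs
    # `cost`; one worker's choice is assumed not to affect whether the union succeeds.
    benefit, cost = 100, 30
    for union_succeeds in (True, False):
        gain = benefit if union_succeeds else 0
        print(f"union succeeds={union_succeeds}: contribute={gain - cost}, free ride={gain}")
    # In both cases the free rider does better than the contributor,
    # so purely self-interested workers never contribute.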

Union Growth Comes in Spurts

Free riding is a problem for all collective movements, including Rotary Clubs, the Red Cross, and the Audubon Society. But unionization is especially difficult because unions must attract members against the opposition of often-hostile employers. Workers who support unions sacrifice money and risk their jobs, even their lives. Success comes only when large numbers simultaneously follow a different rationality. Unions must persuade whole groups to abandon individualism to throw themselves into the collective project. Rarely have unions grown incrementally, gradually adding members. Instead, workers have joined unions en masse in periods of great excitement, attracted by what the French sociologist Emile Durkheim labeled “collective effervescence” or the joy of participating in a common project without regard for individual interest. Growth has come in spurts, short periods of social upheaval punctuated by major demonstrations and strikes when large numbers see their fellow workers publicly demonstrating a shared commitment to the collective project. Union growth, therefore, is concentrated in short periods of dramatic social upheaval; in the thirteen countries listed in Tables 1 and 2, 67 percent of growth comes in only five years, and over 90 percent in only ten years. As Table 3 shows, in these thirteen countries, unions grew by over 10 percent a year in years with the greatest strike activity but by less than 1 percent a year in the years with the fewest strikers (Friedman, 1999; Shorter and Tilly, 1974; Zolberg, 1972).

Table 1
Union Members per 100 Nonagricultural Workers, 1880-1985: Selected Countries

Year Canada US Austria Denmark France Italy Germany Netherlands Norway Sweden UK Australia Japan
1880 n.a. 1.8 n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a.
1900 4.6 7.5 n.a. 20.8 5.0 n.a. 7.0 n.a. 3.4 4.8 12.7 n.a. n.a.
1914 8.6 10.5 n.a. 25.1 8.1 n.a. 16.9 17.0 13.6 9.9 23.0 32.8 n.a.
1928 11.6 9.9 41.7 39.7 8.0 n.a. 32.5 26.0 17.4 32.0 25.6 46.2 n.a.
1939 10.9 20.7 n.a. 51.8 22.4 n.a. n.a. 32.5 57.0 53.6 31.6 39.2 n.a.
1947 24.6 31.4 64.6 55.9 40.0 n.a. 29.1 40.4 55.1 64.6 44.5 52.9 45.3
1950 26.3 28.4 62.3 58.1 30.2 49.0 33.1 43.0 58.4 67.7 44.1 56.0 46.2
1960 28.3 30.4 63.4 64.4 20.0 29.6 37.1 41.8 61.5 73.0 44.2 54.5 32.2
1975 35.6 26.4 58.5 66.6 21.4 50.1 38.2 39.1 60.5 87.2 51.0 54.7 34.4
1985 33.7 18.9 57.8 82.2 14.5 51.0 39.3 28.6 65.3 103.0 44.2 51.5 28.9

Note: This table shows the unionization rate, the share of nonagricultural workers belonging to unions, in different countries in different years, 1880-1985. Because union membership often includes unemployed and retired union members it may exceed the number of employed workers, giving a unionization rate of greater than 100 percent.

Table 2
Union Growth in Peak and Other Years

Columns: country; years covered; membership growth in the top 5 years, the top 10 years, and all years; share of growth (%) in the top 5 and top 10 years; excess growth (%) in the top 5 and top 10 years.

Australia 83 720,000 1,230,000 3,125,000 23.0 39.4 17.0 27.3
Austria 52 5,411,000 6,545,000 3,074,000 176.0 212.9 166.8 194.4
Canada 108 855,000 1,532,000 4,028,000 21.2 38.0 16.6 28.8
Denmark 85 521,000 795,000 1,883,000 27.7 42.2 21.8 30.5
France 92 6,605,000 7,557,000 2,872,000 230.0 263.1 224.5 252.3
Germany 82 10,849,000 13,543,000 9,120,000 119.0 148.5 112.9 136.3
Italy 38 3,028,000 4,671,000 3,713,000 81.6 125.8 68.4 99.5
Japan 43 4,757,000 6,692,000 8,983,000 53.0 74.5 41.3 51.2
Netherlands 71 671,000 1,009,000 1,158,000 57.9 87.1 50.9 73.0
Norway 85 304,000 525,000 1,177,000 25.8 44.6 19.9 32.8
Sweden 99 633,000 1,036,000 3,859,000 16.4 26.8 11.4 16.7
UK 96 4,929,000 8,011,000 8,662,000 56.9 92.5 51.7 82.1
US 109 10,247,000 14,796,000 22,293,000 46.0 66.4 41.4 57.2
Total 1043 49,530,000 67,942,000 73,947,000 67.0 91.9 60.7 79.4

Note: This table shows that most union growth comes in a few years. Union membership growth (net of membership losses) has been calculated for each country for each year. Years were then sorted for each country according to membership growth. This table reports growth for each country for the five and the ten years with the fastest growth and compares this with total growth over all years for which data are available. Excess growth has been calculated as the difference between the share of growth in the top five or ten years and the share that would have come in these periods if growth had been distributed evenly across all years.

Note that years of rapid growth are not necessarily contiguous. Growth in the peak years can exceed total growth over the entire period because some of the gains are temporary: years of rapid growth are often followed by years of decline.

Sources: Bain and Price (1980), 39; Visser (1989).
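The calculation described in the note above can be written out compactly. The sketch below (in Python, not the authors' own code) takes a hypothetical series of annual net membership changes for one country and returns the share of total growth that occurred in the fastest-growing years, together with its excess over an even spread across all years.

    # Sketch of the Table 2 calculation using hypothetical data; not the authors' code.
    def growth_shares(annual_changes, top_n):
        """Share (%) of total growth occurring in the top_n fastest-growth years,
        and its excess over an even spread across all years."""
        total = sum(annual_changes)
        top = sum(sorted(annual_changes, reverse=True)[:top_n])
        share = 100 * top / total
        excess = share - 100 * top_n / len(annual_changes)
        return share, excess

    # Hypothetical annual net membership changes (thousands) over ten years.
    changes = [5, -2, 40, 3, 1, 60, -4, 2, 8, 7]
    print(growth_shares(changes, 2))   # roughly (83.3, 63.3): most growth in two years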

Table 3
Impact of Strike Activity on Union Growth
Average Union Membership Growth in Years Sorted by Proportion of Workers Striking

Striker-rate quartile (lowest to highest):
Country Lowest Second Third Highest Change
(Change = Highest quartile minus Lowest quartile)
Australia 5.1 2.5 4.5 2.7 -2.4
Austria 0.5 -1.9 0.4 2.4 1.9
Canada 1.3 1.9 2.3 15.8 14.5
Denmark 0.3 1.1 3.0 11.3 11.0
France 0.0 2.1 5.6 17.0 17.0
Germany -0.2 0.4 1.3 20.3 20.5
Italy -2.2 -0.3 2.3 5.8 8.0
Japan -0.2 5.1 3.0 4.3 4.5
Netherlands -0.9 1.2 3.5 6.3 7.2
Norway 1.9 4.3 8.6 10.3 8.4
Sweden 2.5 3.2 5.9 16.9 14.4
UK 1.7 1.7 1.9 3.4 1.7
US -0.5 0.6 2.1 19.9 20.4
Total: Average 0.72 1.68 3.42 10.49 9.78

Note: This table shows that, except in Australia, unions grew fastest in years with large numbers of strikers. The proportion of workers striking was calculated for each country for each year as the number of strikers divided by the nonagricultural labor force. Years were then sorted into quartiles, each including one-fourth of the years, according to this striker rate statistic. The average annual union membership growth rate was then calculated for each quartile as the mean of the growth rate in each year in the quartile.
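The procedure in this note can be sketched in the same way. The Python fragment below (not the authors' code, using hypothetical data) sorts a country's years by striker rate, splits them into quartiles, and averages union membership growth within each quartile.

    # Sketch of the Table 3 procedure using hypothetical data; not the authors' code.
    def growth_by_striker_quartile(striker_rates, growth_rates):
        """Mean union growth rate in each quartile of years sorted by striker rate."""
        paired = sorted(zip(striker_rates, growth_rates))   # sort years by striker rate
        n = len(paired)
        quartiles = [paired[i * n // 4:(i + 1) * n // 4] for i in range(4)]
        return [sum(g for _, g in q) / len(q) for q in quartiles]

    # Hypothetical data for eight years: striker rates (%) and growth rates (%).
    print(growth_by_striker_quartile(
        [0.1, 2.0, 0.5, 3.1, 1.2, 0.3, 2.5, 0.8],
        [1, 12, 2, 20, 5, 0, 15, 3]))    # [0.5, 2.5, 8.5, 17.5]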

Rapid Union Growth Provokes a Hostile Reaction

These periods of rapid union growth end because social upheaval provokes a hostile reaction. Union growth leads employers to organize, to discover their own collective interests. Emulating their workers, they join together to discharge union activists, to support each other in strikes, and to demand government action against unions. This rising opposition ends periods of rapid union growth, beginning a new phase of decline followed by longer periods of stagnant membership. The weakest unions formed during the union surge succumb to the post-boom reaction; but if enough unions survive they leave a movement larger and broader than before.

Early Labor Unions, Democrats and Socialists

Guilds

Before modern labor unions, guilds united artisans and their employees. Craftsmen did the work of early industry, “masters” working beside “journeymen” and apprentices in small workplaces. Throughout the cities and towns of medieval Europe, guilds regulated production by setting minimum prices and quality, and capping wages, employment, and output. Controlled by independent craftsmen, “masters” who employed journeymen and trained apprentices, guilds regulated industry to protect the comfort and status of the masters. Apprentices and journeymen benefited from guild restrictions only when they advanced to master status.

Guild power was gradually undermined in the early-modern period. By employing workers outside the guild system, including rural workers and semiskilled workers in large urban workplaces, merchants transformed medieval industry. By the early 1800s, few workers could expect to advance to master artisan or to own their own establishment. Instead, facing the prospect of a lifetime of wage labor punctuated by periods of unemployment, some wage earners began to seek a collective regulation of their individual employment (Thompson, 1966; Scott, 1974; Dawley, 1976; Sewell, 1980; Wilentz, 1984; Blewett, 1988).

The labor movement within the broader movement for democracy

This new wage-labor regime led to the modern labor movement. Organizing propertyless workers who were laboring for capitalists, organized labor formed one wing of a broader democratic movement struggling for equality and for the rights of commoners (Friedman, 1998). Within the broader democratic movement for legal and political equality, labor fought the rise of a new aristocracy that controlled the machinery of modern industry just as the old aristocracy had monopolized land. Seen in this light, the fundamental idea of the labor movement, that employees should have a voice in the management of industry, is comparable to the demand that citizens should have a voice in the management of public affairs. Democratic values do not, by any means, guarantee that unions will be fair and evenhanded to all workers. In the United States, by reserving good jobs for their members, unions of white men sometimes contributed to the exploitation of women and nonwhites. Democracy only means that exploitation will be carried out at the behest of a political majority rather than at the discretion of an individual capitalist (Roediger, 1991; Arnesen, 2001; Foner, 1974; 1979; Milkman, 1985).

Craft unions’ strategy

Workers formed unions to voice their interests against their employers, and also against other workers. Rejecting broad alliances along class lines, alliances uniting workers on the basis of their lack of property and their common relationship with capitalists, craft unions followed a narrow strategy, uniting workers with the same skill against both the capitalists and workers in different trades. By using their monopoly of knowledge of the work process to restrict access to the trade, craft unions gained a strong bargaining position, one they enhanced through alliances with other craftsmen to finance long strikes. A narrow craft strategy was followed by the first successful unions throughout Europe and America, especially in small urban shops using technologies that still depended on traditional specialized skills, including printers, furniture makers, carpenters, gold beaters and jewelry makers, iron molders, engineers, machinists, and plumbers. Craft unions’ characteristic action was the small, local strike, the concerted withdrawal of labor by a few workers critical to production. Typically, craft unions would present a set of demands to local employers on a “take-it-or-leave-it” basis; either the employer accepted the demands or fought a contest of strength to determine whether the employer could do without the skilled workers for longer than the workers could manage without their jobs.

The craft strategy offered little to the great masses of workers. Because it depended on restricting access to trades, it could not be applied by common laborers, who were untrained, nor by semi-skilled employees in modern mass-production establishments whose employers trained them on the job. Shunned by craft unions, most women and African-Americans in the United States were crowded into nonunion occupations. Some sought employment as strikebreakers in occupations otherwise monopolized by craft unions controlled by white, native-born males (Washington, 1913; Whatley, 1993).

Unions among unskilled workers

To form unions, the unskilled needed a strategy of the weak, one that would utilize their numbers rather than specialized knowledge and accumulated savings. Inclusive unions succeeded, but only when they attracted allies among politicians, state officials, and the affluent public. By sponsoring unions and protecting them from employer repression, such allies made organization possible among workers without specialized skills. When successful, inclusive unions could grow quickly through the mass mobilization of common laborers. This happened, for example, in Germany at the beginning of the Weimar Republic, during the French Popular Front of 1936-37, and in the United States during the New Deal of the 1930s. These were times when state support rewarded inclusive unions for organizing the unskilled. The bill for mass mobilization usually came later. Each boom was followed by a reaction against the extensive promises of the inclusive labor movement, as employers and conservative politicians worked to put labor’s genie back in the bottle.

Solidarity and the Trade Unions

Unionized occupations of the late 1800s

By the late-nineteenth century, trade unions had gained a powerful position in several skilled occupations in the United States and elsewhere. Outside of mining, craft unions were formed among well-paid skilled craft workers — workers whom historian Eric Hobsbawm labeled the “labor aristocracy” (Hobsbawm, 1964; Geary, 1981). In 1892, for example, nearly two-thirds of British coal miners were union members, as were a third of machinists, millwrights and metal workers, cobblers and shoe makers, glass workers, printers, mule spinners, and construction workers (Bain and Price, 1980). French miners had formed relatively strong unions, as had skilled workers in the railroad operating crafts, printers, jewelry makers, cigar makers, and furniture workers (Friedman, 1998). Cigar makers, printers, furniture workers, and some construction and metal craftsmen took the lead in early German unions (Kocka, 1986). In the United States, there were about 160,000 union members in 1880, of whom 120,000 belonged to craft unions of carpenters, engineers, furniture makers, stone-cutters, iron puddlers and rollers, printers, and several railroad crafts. Another 40,000 belonged to “industrial” unions organized without regard for trade. About half of these were coal miners; most of the rest belonged to the Knights of Labor (KOL) (Friedman, 1999).

The Knights of Labor

In Europe, these craft organizations were to be the basis of larger, mass unions uniting workers without regard for trade or, in some cases, industry (Ansell, 2001). This process began in the United States in the 1880s when craft workers in the Knights of Labor reached out to organize more broadly. Formed by skilled male, native-born garment cutters in 1869, the Knights of Labor would seem an odd candidate to mobilize the mass of unskilled workers. But from a few Philadelphia craft workers, the Knights grew to become a national and even international movement. Membership reached 20,000 in 1881 and grew to 100,000 in 1885. Then, in 1886, when successful strikes on some western railroads attracted a mass of previously unorganized unskilled workers, the KOL grew to a peak membership of a million workers. For a brief time, the Knights of Labor was a general movement of the American working class (Ware, 1929; Voss, 1993).

The KOL became a mass movement with an ideology and program that united workers without regard for occupation, industry, race or gender (Hattam, 1993). Never espousing Marxist or socialist doctrines, the Knights advanced an indigenous form of popular American radicalism, a “republicanism” that would overcome social problems by extending democracy to the workplace. Valuing citizens according to their work, their productive labor, the Knights were true heirs of earlier bourgeois radicals. Open to all producers, including farmers and other employers, they excluded only those seen to be parasitic on the labor of producers — liquor dealers, gamblers, bankers, stock manipulators and lawyers. Welcoming all others without regard for race, gender, or skill, the KOL was the first American labor union to attract significant numbers of women, African-Americans, and the unskilled (Foner, 1974; 1979; Rachleff, 1984).

The KOL’s strategy

In practice, most KOL local assemblies acted like craft unions. They bargained with employers, conducted boycotts, and called members out on strike to demand higher wages and better working conditions. But unlike craft unions that depended on the bargaining leverage of a few strategically positioned workers, the KOL’s tactics reflected its inclusive and democratic vision. Without a craft union’s resources or control over labor supply, the Knights sought to win labor disputes by widening them to involve political authorities and the outside public able to pressure employers to make concessions. Activists hoped that politicizing strikes would favor the KOL because its large membership would tempt ambitious politicians while its members’ poverty drew public sympathy.

In Europe, a strategy like that of the KOL succeeded in promoting the organization of inclusive unions. But it failed in the United States. Comparing the strike strategies of trade unions and the Knights provides insight into the survival and eventual success of the trade unions and their confederation, the American Federation of Labor (AFL), in late-nineteenth century America. Seeking to transform industrial relations, local assemblies of the KOL struck frequently with large but short strikes involving skilled and unskilled workers. The Knights’ industrial leverage depended on political and social influence. It could succeed where trade unions would not go because the KOL strategy utilized numbers, the one advantage held by common laborers. But this strategy could succeed only where political authorities and the outside public might sympathize with labor. Later industrial and regional unions tried the same strategy, conducting short but large strikes. By demonstrating sufficient numbers and commitment, French and Italian unions, for example, would win from state officials concessions they could not force from recalcitrant employers (Shorter and Tilly, 1974; Friedman, 1998). But unlike the small strikes conducted by craft unions, “solidarity” strikes had to walk a fine line, aggressive enough to draw attention but not so threatening as to provoke a hostile reaction from the authorities. Such a reaction doomed the KOL.

The Knights’ collapse in 1886

In 1886, the Knights became embroiled in a national general strike demanding an eight-hour workday, the world’s first May Day. This led directly to the collapse of the KOL. The May Day strike wave of 1886 and the bombing at Haymarket Square in Chicago provoked a “red scare” of historic proportions, driving KOL membership down to half a million by September 1887. Police in Chicago, for example, broke up union meetings, seized union records, and even banned the color red from advertisements. The KOL responded politically, sponsoring a wave of independent labor parties in the elections of 1886 and supporting the Populist Party in 1890 (Fink, 1983). But even relatively strong showings by these independent political movements could not halt the KOL’s decline. By 1890, membership had fallen by half again, and it dropped to under 50,000 by 1897.

Unions and radical political movements in Europe in the late 1800s

The KOL spread outside the United States, attracting an energetic following in Canada, the United Kingdom, France, and other countries. Industrial and regional unionism fared better in these countries than in the United States. Most German unionists belonged to industrial unions allied with the Social Democratic Party. Under Marxist leadership, the unions and the party formed a centralized labor movement to maximize labor’s political leverage. English union membership was divided between a stable core of craft unions and a growing membership in industrial and regional unions based in mining, cotton textiles, and transportation. Allied with political radicals, these industrial and regional unions formed the backbone of the Labor Party, which held the balance of power in British politics after 1906.

The most radical unions were found in France. By the late 1890s, revolutionary syndicalists controlled the national union center, the Confédération générale du travail (or CGT), which they tried to use as a base for a revolutionary general strike in which the workers would seize economic and political power. Consolidating craft unions into industrial and regional organizations, the Bourses du travail, syndicalists conducted large strikes designed to demonstrate labor’s solidarity. Paradoxically, the syndicalists’ large strikes were effective because they provoked friendly government mediation. In the United States, state intervention was fatal for labor because government and employers usually united to crush labor radicalism. But in France, officials were more concerned with maintaining a center-left coalition with organized labor against reactionary employers opposed to the Third Republic. State intervention helped French unionists to win concessions beyond any they could win with economic leverage. A radical strategy of inclusive industrial and regional unionism could succeed in France because the political leadership of the early Third Republic needed labor’s support against powerful economic and social groups who would have replaced the Republic with an authoritarian regime. Reminded daily of the importance of republican values and of the coalition that sustained the Republic, French state officials promoted collective bargaining and labor unions. Ironically, it was the support of liberal state officials that allowed French union radicalism to succeed, allowing French unions to grow faster than American unions and to organize the semi-skilled workers in the large establishments of France’s modern industries (Friedman, 1997; 1998).

The AFL and American Exceptionalism

By 1914, unions outside the United States had found that broad organization reduced the availability of strikebreakers, advanced labor’s political goals, and could lead to state intervention on behalf of the unions. The United States was becoming exceptional, the only advanced capitalist country without a strong, united labor movement. The collapse of the Knights of Labor cleared the way for the AFL. Formed in 1881 as the Federation of Organized Trades and Labor Unions, the AFL was organized to uphold the narrow interests of craft workers against the general interests of common laborers in the KOL. In practice, AFL craft unions were little labor monopolies, able to win concessions because of their control over uncommon skills and because their narrow strategy did not frighten state officials. Many early AFL leaders, notably the AFL’s founding president Samuel Gompers and P. J. McGuire of the Carpenters, had been active in radical political movements. But after 1886, they learned to reject political involvements for fear that radicalism might antagonize state officials or employers and provoke repression.

AFL successes in the early twentieth-century

Entering the twentieth century, the AFL appeared to have a winning strategy. Union membership rose sharply in the late 1890s, doubling between 1896 and 1900 and again between 1900 and 1904. Fewer than 5 percent of wage earners belonged to labor unions in 1895, but this share rose to 7 percent in 1900 and 13 percent in 1904, including over 21 percent of industrial wage earners (workers outside of commerce, government, and the professions). Half of coal miners in 1904 belonged to an industrial union (the United Mine Workers of America), but otherwise most union members belonged to craft organizations, including nearly half the printers, and a third of cigar makers, construction workers and transportation workers. As shown in Table 4, other pockets of union strength included skilled workers in the metal trades, leather, and apparel. These craft unions had demonstrated their economic power, raising wages by around 15 percent and reducing hours worked (Friedman, 1991; Mullin, 1993).

Table 4
Unionization rates by industry in the United States, 1880-2000

Industry 1880 1910 1930 1953 1974 1983 2000
Agriculture, Forestry, Fishing 0.0 0.1 0.4 0.6 4.0 4.8 2.1
Mining 11.2 37.7 19.8 64.7 34.7 21.1 10.9
Construction 2.8 25.2 29.8 83.8 38.0 28.0 18.3
Manufacturing 3.4 10.3 7.3 42.4 37.2 27.9 14.8
Transportation, Communication, Utilities 3.7 20.0 18.3 82.5 49.8 46.4 24.0
Private Services 0.1 3.3 1.8 9.5 8.6 8.7 4.8
Public Employment 0.3 4.0 9.6 11.3 38.0 31.1 37.5
All Private 1.7 8.7 7.0 31.9 22.4 18.4 10.9
All 1.7 8.5 7.1 29.6 24.8 20.4 14.1

Note: This table shows the unionization rate, the share of workers belonging to unions, in different industries in the United States, 1880-2000.

Sources: 1880 and 1910: Friedman (1999): 83; 1930: union membership from Wolman (1936) and employment from United States, Bureau of the Census (1932); 1953: Troy (1957); 1974, 1983, 2000: United States, Current Population Survey.

Limits to the craft strategy

Even at this peak, the craft strategy had clear limits. Craft unions succeeded only in a declining part of American industry, among workers still performing traditional tasks where training came through apprenticeship programs controlled by the workers themselves. By contrast, there were few unions in the rapidly growing industries employing semi-skilled workers. Nor was the AFL able to overcome racial divisions and state opposition to organize in the South (Friedman, 2000; Letwin, 1998). Compared with the KOL in the early 1880s, or with France’s revolutionary syndicalist unions, American unions were weak in steel, textiles, chemicals, paper and metal fabrication, industries using technologies without traditional craft skills. AFL strongholds such as construction, printing, cigar rolling, apparel cutting and pressing, and custom metal engineering employed craft workers in relatively small establishments little changed from 25 years earlier (see Table 4).

Dependent on skilled craftsmen’s economic leverage, the AFL was poorly organized to battle large, technologically dynamic corporations. For a brief time, the revolutionary Industrial Workers of the World (IWW), formed in 1905, organized semi-skilled workers in some mass production industries. But by 1914, it too had failed. It was state support that forced powerful French employers to accept unions. Without such assistance, no union strategy could compel large American employers to do the same.

Unions in the World War I Era

The AFL and World War I

For all its limits, it must be acknowledged that the AFL and its craft affiliates survived while their rivals flared and died. The AFL formed a solid union movement among skilled craftsmen that, under favorable circumstances, could have formed the core of a broader union movement like those that developed in Europe after 1900. During World War I, the Wilson administration endorsed unionization and collective bargaining in exchange for union support for the war effort. AFL affiliates used state support to organize mass-production workers in shipbuilding, metal fabrication, meatpacking and steel, doubling union membership between 1915 and 1919. But when federal support was withdrawn after the war, employers mobilized to crush the nascent unions. The post-war union collapse has been attributed to the AFL’s failings. The larger truth is that American unions needed state support to overcome the entrenched power of capital. The AFL did not fail because of a deficient economic strategy; it failed because it had an ineffective political strategy (Friedman, 1998; Frank, 1994; Montgomery, 1987).

International effects of World War I

War gave labor extraordinary opportunities. Combatant governments rewarded pro-war labor leaders with positions in the expanded state bureaucracy and support for collective bargaining and unions. Union growth also reflected economic conditions when wartime labor shortages strengthened the bargaining position of workers and unions. Unions grew rapidly during and immediately after the war. British unions, for example, doubled their membership between 1914 and 1920, to enroll eight million workers, almost half the nonagricultural labor force (Bain and Price, 1980; Visser, 1989). Union membership tripled in Germany and Sweden, doubled in Canada, Denmark, the Netherlands, and Norway, and almost doubled in the United States (see Table 5 and Table 1). For twelve countries, membership grew by 121 percent between 1913 and 1920, including 119 percent growth in seven combatant countries and 160 percent growth in five neutral states.

Table 5
Impact of World War I on Union Membership Growth
Membership Growth in Wartime and After

12 Countries 7 Combatants 5 Neutrals
War-Time 1913 12 498 000 11 742 000 756 000
1920 27 649 000 25 687 000 1 962 000
Growth 1913-20: 121% 119% 160%
Post-war 1920 27 649 000
1929 18 149 000
Growth 1920-29: -34%

Shift toward the revolutionary left

Even before the war, frustration with the slow pace of social reform had led to a shift towards the revolutionary socialist and syndicalist left in Germany, the United Kingdom, and the United States (Nolan, 1981; Montgomery, 1987). In Europe, frustration with rising prices, declining real wages and working conditions, and anger at catastrophic war losses fanned the flames of discontent into a raging conflagration. The number of strikers rose to ten or even twenty times pre-war levels: 2.5 million strikers in France in 1919 and 1920, up from 200,000 in 1913; 13 million in Germany, up from 300,000 in 1913; and 5 million in the United States, up from under 1 million in 1913. British Prime Minister Lloyd George warned in March 1919 that “The whole of Europe is filled with the spirit of revolution. There is a deep sense not only of discontent, but of anger and revolt among the workmen . . . The whole existing order in its political, social and economic aspects is questioned by the masses of the population from one end of Europe to the other” (quoted in Cronin, 1983: 22).

Impact of Communists

Inspired by the success of the Bolshevik revolution in Russia, revolutionary Communist Parties were organized throughout the world to promote revolution by organizing labor unions, strikes, and political protest. Communism was a mixed blessing for labor. The Communists included some of labor’s most dedicated activists and organizers who contributed greatly to union organization. But Communist help came at a high price. Secretive, domineering, intolerant of opposition, the Communists divided unions between their dwindling allies and a growing collection of outraged opponents. Moreover, they galvanized opposition, depriving labor of needed allies among state officials and the liberal bourgeoisie.

The “Lean Years”: Welfare Capitalism and the Open Shop

Aftermath of World War I

As with most great surges in union membership, the postwar boom was self-limiting. Helped by a sharp post-war economic contraction, employers and state officials ruthlessly drove back the radical threat, purging their workforces of known union activists and easily absorbing futile strikes during a period of rising unemployment. Such campaigns drove membership in eleven countries down by a third, from a peak of 26 million members in 1920 to fewer than 18 million in 1924. In Austria, France, Germany, and the United States, labor unrest contributed to the election of conservative governments; in Hungary, Italy, and Poland it led to the installation of anti-democratic dictatorships that ruthlessly crushed labor unions. Economic stagnation, state repression, and anti-union campaigns by employers prevented any union resurgence through the rest of the 1920s. By 1929, unions in these eleven countries had added only 30,000 members, one-fifth of one percent.

Injunctions and welfare capitalism

The 1920s was an especially dark period for organized labor in the United States where weaknesses visible before World War I became critical failures. Labor’s opponents used fear of Communism to foment a post-war red scare that targeted union activists for police and vigilante violence. Hundreds of foreign-born activists were deported, and mobs led by the American Legion and the Ku Klux Klan broke up union meetings and destroyed union offices (see, for example, Frank, 1994: 104-5). Judges added law to the campaign against unions. Ignoring the intent of the Clayton Anti-Trust Act (1914) they used anti-trust law and injunctions against unions, forbidding activists from picketing or publicizing disputes, holding signs, or even enrolling new union members. Employers competed for their workers’ allegiance, offering paternalist welfare programs and systems of employee representation as substitutes for independent unions. They sought to build a nonunion industrial relations system around welfare capitalism (Cohen, 1990).

Stagnation and decline

After the promises of the war years, the defeat of postwar union drives in mass production industries like steel and meatpacking inaugurated a decade of union stagnation and decline. Membership fell by a third between 1920 and 1924. Unions survived only in the older trades where employment was usually declining. By 1924, they had been almost completely eliminated from the dynamic industries of the second industrial revolution, including steel, automobiles, consumer electronics, chemicals and rubber manufacture.

New Deals for Labor

Great Depression

The nonunion industrial relations system of the 1920s might have endured and produced a docile working class organized in company unions (Brody, 1985). But the welfare capitalism of the 1920s collapsed when the Great Depression of the 1930s exposed its weaknesses and undermined political support for the nonunion, open shop. Between 1929 and 1933, real national income in the United States fell by one third, nonagricultural employment fell by a quarter, and unemployment rose from under 2 million in 1929 to 13 million in 1933, a quarter of the civilian labor force. Economic decline was nearly as great elsewhere, raising unemployment to over 15 percent in Austria, Canada, Germany, and the United Kingdom (Maddison, 1991: 260-61). Only the Soviet Union, with its authoritarian political economy, was largely spared the scourge of unemployment and economic collapse — a point emphasized by Communists throughout the 1930s and later. Depression discredited the nonunion industrial relations system by forcing welfare capitalists to renege on promises to stabilize employment and to maintain wages. Then, by ignoring protests from members of employee representation plans, welfare capitalists further exposed the fundamental weakness of their system. Lacking any independent support, paternalist promises had no standing but depended entirely on the variable good will of employers. And sometimes that was not enough (Cohen, 1990).

Depression-era political shifts

Voters, too, lost confidence in employers. The Great Depression discredited the old political economy. Even before Franklin Roosevelt’s election as President of the United States in 1932, American states enacted legislation restricting the rights of creditors and landlords, restraining the use of the injunction in labor disputes, and providing expanded relief for the unemployed (Ely, 1998; Friedman, 2001). European voters abandoned centrist parties, embracing extremists of both left and right, Communists and Fascists. In Germany, the Nazis won, but Popular Front governments uniting Communists and socialists with bourgeois liberals assumed power in other countries, including Sweden, France and Spain. (The Spanish Popular Front was overthrown by a Fascist rebellion that installed a dictatorship led by Francisco Franco.) Throughout there was an impulse to take public control over the economy because free market capitalism and orthodox finance had led to disaster (Temin, 1990).

Economic depression lowers union membership when unemployed workers drop their membership and employers use their stronger bargaining position to defeat union drives (Bain and Elsheikh, 1976). Indeed, union membership fell with the onset of the Great Depression but, contradicting the usual pattern, membership rebounded sharply after 1932 despite high unemployment, rising by over 76 percent in ten countries by 1938 (see Table 6 and Table 1). The fastest growth came in countries with openly pro-union governments. In France, where the Socialist Léon Blum led a Popular Front government, and in the United States, during Franklin Roosevelt’s New Deal, membership rose by 160 percent between 1933 and 1938. But membership grew by 33 percent in eight other countries even without openly pro-labor governments.

Table 6
Impact of the Great Depression and World War II on Union Membership Growth

11 Countries (no Germany) 10 Countries (no Austria)
Depression 1929 12 401 000 11 508 000
1933 11 455 000 10 802 000
Growth 1929-33 -7.6% -6.1%
Popular Front Period 1933 10 802 000
1938 19 007 000
Growth 1933-38 76.0%
Second World War 1938 19 007 000
1947 35 485 000
Growth 1938-47 86.7%

French unions and the Matignon agreements

French union membership rose from under 900,000 in 1935 to over 4,500,000 in 1937. The Popular Front’s victory in the elections of May 1936 precipitated a massive strike wave and the occupation of factories and workplaces throughout France. Remembered in movie, song and legend, the factory occupations were a nearly spontaneous uprising of French workers that brought France’s economy to a halt. Contemporaries were struck by the extraordinarily cheerful feelings that prevailed, the “holiday feeling” and the sense that the strikes were a new sort of non-violent revolution that would overturn hierarchy and replace capitalist authoritarianism with true social democracy (Bernard and Dubief, 1993: 307-8). After Blum assumed office, he brokered the Matignon agreements, named after the premier’s official residence in Paris. Union leaders and the heads of France’s leading employer associations agreed to end the strikes and occupations in exchange for wage increases of around 15 percent, a 40-hour workweek, annual vacations, and union recognition. Codified in statute by the Popular Front government, these terms gave French unions new rights and protections from employer repression. Only then did workers flock into unions. In a few weeks, French unions gained four million members, with the fastest growth in the new industries of the second industrial revolution. Unions in metal fabrication and chemicals grew by 1,450 percent and 4,000 percent respectively (Magraw, 1992: 2, 287-88).

French union leader Léon Jouhaux hailed the Matignon agreements as “the greatest victory of the workers’ movement.” They brought lasting gains, including annual vacations and shorter workweeks. But Simone Weil described the strikers of May 1936 as “soldiers on leave,” and they soon returned to work. Regrouping, employers discharged union activists and attacked the precarious unity of the Popular Front government. Fighting an uphill battle against renewed employer resistance, the Popular Front government fell before it could build a new system of cooperative industrial relations. Contained, French unions were unable to maintain their momentum towards industrial democracy. Membership fell by a third in 1937-39.

The National Industrial Recovery Act

A different union paradigm developed in the United States. Rather than treating unions as vehicles for a democratic revolution, the New Deal sought to integrate organized labor into a reformed capitalism that recognized capitalist hierarchy in the workplace, using unions only to promote macroeconomic stabilization by raising wages and consumer spending (Brinkley, 1995). Included as part of a program for economic recovery was section 7(a) of the National Industrial Recovery Act (NIRA), giving “employees . . . the right to organize and bargain collectively through representatives of their own choosing . . . free from the interference, restraint, or coercion of employers.” AFL leader William Green pronounced this a “charter of industrial freedom,” and workers rushed into unions in a wave unmatched since the Knights of Labor in 1886. As with the KOL, the greatest increase came among the unskilled. Coal miners, southern textile workers, northern apparel workers, Ohio tire makers, Detroit automobile workers, aluminum, lumber and sawmill workers all rushed into unions. For the first time in fifty years, American unions gained a foothold in mass production industries.

AFL’s lack of enthusiasm

Promises of state support brought common laborers into unions. But once there, the new unionists received little help from aging AFL leaders. Fearing that the new unionists’ impetuous zeal and militant radicalism would provoke repression, AFL leaders tried to scatter the new members among contending craft unions with archaic craft jurisdictions. The new unionists were swept up in the excitement of unity and collective action but a half-century of experience had taught the AFL’s leadership to fear such enthusiasms.

The AFL dampened the union boom of 1933-34, but, again, the larger problem was not the AFL’s flawed tactics but its lack of political leverage. Doing little to enforce the promises of Section 7(a), the Federal government left employers free to ignore the law. Some flatly prohibited union organization; others formally honored the law but established anemic employee representation plans while refusing to deal with independent unions (Irons, 2000). By 1935 almost as many industrial establishments had employer-dominated employee-representation plans (27 percent) as had unions (30 percent). The greatest number had no labor organization at all (43 percent).

Birth of the CIO

Implacable management resistance and divided leadership killed the early New Deal union surge. It died even before the NIRA was ruled unconstitutional in 1935. Failure provoked rebellion within the AFL. Led by John L. Lewis of the United Mine Workers, eight national unions launched a campaign for industrial organization as the Committee for Industrial Organization. After Lewis punched Carpenters’ Union leader William L. Hutcheson on the floor of the AFL convention in 1935, the Committee became an independent Congress of Industrial Organizations (CIO). Including many Communist activists, CIO committees fanned out to organize workers in steel, automobiles, retail trade, journalism and other industries. Building effectively on local rank and file militancy, including sitdown strikes in automobiles, rubber, and other industries, the CIO quickly won contracts from some of the strongest bastions of the open shop, including United States Steel and General Motors (Zieger, 1995).

The Wagner Act

Creative strategy and energetic organizing helped. But the CIO owed its lasting success to state support. After the failure of the NIRA, New Dealers sought another way to strengthen labor as a force for economic stimulus. This led to the enactment in 1935 of the National Labor Relations Act, also known as the “Wagner Act.” The Wagner Act established a National Labor Relations Board charged to enforce employees’ “right to self-organization, to form, join, or assist labor organizations to bargain collectively through representatives of their own choosing and to engage in concerted activities for the purpose of collective bargaining or other mutual aid or protection.” It provided for elections to choose union representation and required employers to negotiate “in good faith” with their workers’ chosen representatives. Shifting labor conflict from strikes to elections and protecting activists from dismissal for their union work, the Act lowered the cost to individual workers of supporting collective action. It also put the Federal government’s imprimatur on union organization.

Crucial role of rank-and-file militants and state government support

Appointed by President Roosevelt, the first NLRB was openly pro-union, viewing the Act’s preamble as a mandate to promote organization. By 1945 the Board had supervised 24,000 union elections involving some 6,000,000 workers, leading to the unionization of nearly 5,000,000 workers. Still, the NLRB was not responsible for the period’s union boom. The Wagner Act had no direct role in the early CIO years because it was ignored for two years until its constitutionality was established by the Supreme Court in National Labor Relations Board v. Jones and Laughlin Steel Corporation (1937). Furthermore, the election procedure’s gross contribution of 5,000,000 members was less than half of the period’s net union growth of 11,000,000 members. More important than the Wagner Act were crucial union victories over prominent open shop employers in cities like Akron, Ohio, Flint, Michigan, and among Philadelphia-area metal workers. Dedicated rank-and-file militants and effective union leadership were crucial in these victories. As important was the support of pro-New Deal local and state governments. The Roosevelt landslides of 1934 and 1936 brought to office liberal Democratic governors and mayors who gave crucial support to the early CIO. Placing a right to collective bargaining above private property rights, liberal governors and other elected officials in Michigan, Ohio, Pennsylvania and elsewhere refused to send police to evict sit-down strikers who had seized control of factories. This state support allowed the minority of workers who actively supported unionization to use force to overcome the passivity of the majority of workers and the opposition of the employers. The Open Shop of the 1920s was not abandoned; it was overwhelmed by an aggressive, government-backed labor movement (Gall, 1999; Harris, 2000).

World War II

Federal support for union organization was also crucial during World War II. Again, war helped unions both by eliminating unemployment and because state officials supported unions to gain support for the war effort. Established to minimize labor disputes that might disrupt war production, the National War Labor Board instituted a labor truce where unions exchanged a no-strike pledge for employer recognition. During World War II, employers conceded union security and “maintenance of membership” rules requiring workers to pay their union dues. Acquiescing to government demands, employers accepted the institutionalization of the American labor movement, guaranteeing unions a steady flow of dues to fund an expanded bureaucracy, new benefit programs, and even to raise funds for political action. After growing from 3.5 to 10.2 million members between 1935 and 1941, unions added another 4 million members during the war. “Maintenance of membership” rules prevented free riders even more effectively than had the factory takeovers and violence of the late-1930s. With millions of members and money in the bank, labor leaders like Sidney Hillman and Phillip Murray had the ear of business leaders and official Washington. Large, established, and respected: American labor had made it, part of a reformed capitalism committed to both property and prosperity.

Even more than the First World War, World War Two promoted unions and social change. A European civil war, the war divided the continent not only between warring countries but within countries between those, usually on the political right, who favored fascism over liberal parliamentary government and those who defended democracy. Before the war, left and right contended over the appeasement of Nazi Germany and fascist Italy; during the war, many businesses and conservative politicians collaborated with the German occupation against a resistance movement dominated by the left. Throughout Europe, victory over Germany was a triumph for labor that led directly to the entry into government of socialists and Communists.

Successes and Failures after World War II

Union membership exploded during and after the war, nearly doubling between 1938 and 1946. By 1947, unions had enrolled a majority of nonagricultural workers in Scandinavia, Australia, and Italy, and over 40 percent in most other European countries (see Table 1). Accumulated depression and wartime grievances sparked a post-war strike wave that included over 6 million strikers in France in 1948, 4 million in Italy in 1949 and 1950, and 5 million in the United States in 1946. In Europe, popular unrest led to a dramatic political shift to the left. The Labor Party government elected in the United Kingdom in 1945 established a new National Health Service, and nationalized mining, the railroads, and the Bank of England. A center-left post-war coalition government in France expanded the national pension system and nationalized the Bank of France, Renault, and other companies associated with the wartime Vichy regime. Throughout Europe, the share of national income devoted to social services jumped dramatically, as did the share of income going to the working classes.

European unions and the state after World War II

Unions and the political left were stronger everywhere throughout post-war Europe, but in some countries labor’s position deteriorated quickly. With the onset of the Cold War, the popular fronts uniting Communists, socialists, and bourgeois liberals in France, Italy, and Japan dissolved, and labor’s management opponents recovered state support. In these countries, union membership dropped after 1947 and unions remained on the defensive for over a decade in a largely adversarial industrial relations system. Elsewhere, notably in countries with weak Communist movements, such as Scandinavia but also Austria, Germany, and the Netherlands, labor was able to compel management and state officials to accept strong and centralized labor movements as social partners. In these countries, stable industrial relations allowed cooperation between management and labor to raise productivity and to open new markets for national companies. High union density and centralization allowed Scandinavian and German labor leaders to negotiate incomes policies with governments and employers, restraining wage inflation in exchange for stable employment, investment, and wages linked to productivity growth. Such policies could not be instituted in countries with weaker and less centralized labor movements, including France, Italy, Japan, the United Kingdom and the United States, because their unions had not been accepted as bargaining partners by management and they lacked the centralized authority to enforce incomes policies and productivity bargains (Alvarez, Garrett, and Lange, 1992).

Europe since the 1960s

Even where European labor was weakest, in France or Italy in the 1950s, unions were stronger than before World War II. Working with entrenched socialist and labor political parties, European unions were able to maintain high wages, restrictions on managerial autonomy, and social security. The wave of popular unrest in the late 1960s and early 1970s carried most European unions to new heights, briefly bringing membership to over 50 percent of the labor force in the United Kingdom and in Italy, and bringing socialists into government in France, Germany, Italy, and the United Kingdom. Since 1980, union membership has declined somewhat and there has been some retrenchment in the welfare state. But the essentials of European welfare states and labor relations have remained (Western, 1997; Golden and Pontusson, 1992).

Unions begin to decline in the US

It was after World War II that American Exceptionalism became most apparent, when the United States emerged as the advanced capitalist democracy with the weakest labor movement. The United States was the only advanced capitalist democracy where unions went into prolonged decline right after World War II. At 35 percent, the unionization rate in 1945 was the highest in American history, but even then it was lower than in most other advanced capitalist economies. It has been falling since. The post-war strike wave, including three million strikers in 1945 and five million in 1946, was the largest in American history, but it did little to enhance labor’s political position or bargaining leverage. Instead, it provoked a powerful reaction among employers and others suspicious of growing union power. A concerted drive by the CIO to organize the South, “Operation Dixie,” failed dismally in 1946. Unable to overcome private repression, racial divisions, and the pro-employer stance of southern local and state governments, the CIO was defeated, and its defeat left the South as a nonunion, low-wage domestic enclave and a bastion of anti-union politics (Griffith, 1988). Then, in 1946, a conservative Republican majority was elected to Congress, dashing hopes for a renewed, post-war New Deal.

The Taft-Hartley Act and the CIO’s Expulsion of Communists

Quickly, labor’s wartime dreams turned to post-war nightmares. The Republican Congress amended the Wagner Act, enacting the Taft-Hartley Act in 1947 to give employers and state officials new powers against strikers and unions. The law also required union leaders to sign a non-Communist affidavit as a condition for union participation in NLRB-sponsored elections. This loyalty oath divided labor during a time of weakness. With its roots in radical politics and an alliance of convenience between Lewis and the Communists, the CIO was torn by the new Red Scare. Hoping to appease the political right, the CIO majority in 1949 expelled ten Communist-led unions with nearly a third of the organization’s members. This marked the end of the CIO’s expansive period. Shorn of its left, the CIO lost its most dynamic and energetic organizers and leaders. Worse, the expulsions plunged the CIO into a civil war; non-Communist affiliates raided locals belonging to the “communist-led” unions, fatally distracting both sides from the CIO’s original mission to organize the unorganized and empower the dispossessed. By breaking with the Communists, the CIO’s leadership signaled that it had accepted its place within a system of capitalist hierarchy. Little reason remained for the CIO to stay independent. In 1955 it merged with the AFL to form the AFL-CIO.

The Golden Age of American Unions

Without the revolutionary aspirations now associated with the discredited Communists, America’s unions settled down to bargain over wages and working conditions without challenging such managerial prerogatives as decisions about prices, production, and investment. Some labor leaders, notably James Hoffa of the Teamsters but also local leaders in construction and service trades, abandoned all higher aspirations to use their unions for purely personal financial gain. Allying themselves with organized crime, they used violence to maintain their power over employers and their own rank-and-file membership. Others, including former-CIO leaders, like Walter Reuther of the United Auto Workers, continued to push the envelope of legitimate bargaining topics, building challenges to capitalist authority at the workplace. But even the UAW was unable to force major managerial prerogatives onto the bargaining table.

The quarter century after 1950 formed a ‘golden age’ for American unions. Established unions found a secure place at the bargaining table with America’s leading firms in such industries as autos, steel, trucking, and chemicals. Contracts were periodically negotiated providing for the exchange of good wages for cooperative workplace relations. Rules were negotiated providing a system of civil authority at work, with regulations for promotion and layoffs and procedures giving workers opportunities to voice grievances before neutral arbitrators. Wages rose steadily, by over 2 percent per year, and union workers earned a comfortable 20 percent more than nonunion workers of similar age, experience and education. Wages grew faster in Europe, but American wages were higher and growth was rapid enough to narrow the gap between rich and poor, and between management salaries and worker wages. Unions also won a growing list of benefit programs: medical and dental insurance, paid holidays and vacations, supplemental unemployment insurance, and pensions. Competition for workers forced many nonunion employers to match the benefit packages won by unions, but unionized employers provided benefits worth over 60 percent more than those given to nonunion workers (Freeman and Medoff, 1984; Hirsch and Addison, 1986).

Impact of decentralized bargaining in the US

In most of Europe, strong labor movements limited the wage and benefit advantages of union membership by forcing governments to extend union gains to all workers in an industry regardless of union status. By compelling nonunion employers to match union gains, this limited the competitive penalty borne by unionized firms. By contrast, decentralized bargaining and weak unions in the United States created large union wage differentials that put unionized firms at a competitive disadvantage, encouraging them to seek out nonunion labor and localities. A stable and vocal workforce with more experience and training did raise unionized firms’ labor productivity by 15 percent or more above the level of nonunion firms, and some scholars have argued that unionized workers thereby earn much of their wage gain. Others, however, find little productivity gain for unionized workers once account is taken of the greater use of machinery and other nonlabor inputs by unionized firms (compare Freeman and Medoff, 1984 and Hirsch and Addison, 1986). But even unionized firms with higher labor productivity were usually more conscious of the wages and benefits paid to union workers than of unionization’s productivity benefits.

Unions and the Civil Rights Movement

Post-war unions remained politically active. European unions were closely associated with political parties, Communist parties in France and Italy, socialist or labor parties elsewhere. In practice, notwithstanding revolutionary pronouncements, even the Communists’ political agenda came to resemble that of unions in the United States: liberal reform, including a commitment to full employment and the redistribution of income towards workers and the poor (Boyle, 1998). Golden-age unions were also at the forefront of campaigns to extend individual rights. The major domestic political issue of the post-war United States, civil rights, was troubling for many unions because of racist provisions in their own practices. Nonetheless, in the 1950s and 1960s, the AFL-CIO strongly supported the civil rights movement, funded civil rights organizations and lobbied in support of civil rights legislation. The AFL-CIO pushed unions to open their ranks to African-American workers, even at the expense of losing affiliates in states like Mississippi. Seizing the opportunity created by the civil rights movement, some unions gained members among nonwhites. The feminist movement of the 1970s created new challenges for the masculine and sometimes misogynist labor movement. But, here too, the search for members and a desire to remove sources of division eventually brought organized labor to the forefront. The AFL-CIO supported the Equal Rights Amendment and began to promote women to leadership positions.

Shift of unions to the public sector

In no other country have women and members of racial minorities assumed such prominent positions in the labor movement as they have in the United States. The movement of African-Americans and women into leadership positions in the late-twentieth-century labor movement was accelerated by a shift in the membership structure of American unions. Maintaining their strength in traditionally masculine occupations in manufacturing, construction, mining, and transportation, European unions remained predominantly male. In the United States, union decline in these industries, combined with growth in heavily female public-sector occupations, led to the feminization of the labor movement. Union membership began to decline in the private sector in the United States immediately after World War II. Between 1953 and 1983, for example, the unionization rate fell from 42 percent to 28 percent in manufacturing, by nearly half in transportation, and by over half in construction and mining (see Table 4). By contrast, after 1960, public sector workers won new opportunities to form unions. Because women and racial minorities form a disproportionate share of these public sector workers, increasing union membership there has changed the American labor movement’s racial and gender composition. Women comprised only 19 percent of American union members in the mid-1950s, but their share rose to 40 percent by the late 1990s. By then, the most unionized workers were no longer the white male skilled craftsmen of old. Instead, they were nurses, parole officers, government clerks, and most of all, school teachers.

Union Collapse and Union Avoidance in the US

Outside the United States, unions grew through the 1970s and, despite some decline since the 1980s, European and Canadian unions remain large and powerful. The United States is different. Union decline since World War II has brought the United States private-sector labor movement down to early twentieth century levels. As a share of the nonagricultural labor force, union membership fell from its 1945 peak of 35 percent to under 30 percent in the early 1970s. From there, decline became a general rout. In the 1970s, rising unemployment, increasing international competition, and the movement of industry to the nonunion South and to rural areas undermined the bargaining position of many American unions, leaving them vulnerable to a renewed management offensive. Returning to pre-New Deal practices, some employers established new welfare and employee representation programs, hoping to lure workers away from unions (Heckscher, 1987; Jacoby, 1997). Others returned to pre-New Deal repression. By the early 1980s, union avoidance had become an industry. Anti-union consultants and lawyers openly counseled employers on how to use labor law to evade unions. Findings of employers’ unfair labor practices in violation of the Wagner Act tripled in the 1970s; by the 1980s, the NLRB was reinstating over 10,000 workers a year who had been illegally discharged for union activity, nearly one for every twenty who voted for a union in an NLRB election (Weiler, 1983). By the 1990s, the unionization rate in the United States had fallen to under 14 percent, including only 9 percent of private-sector workers and 37 percent of those in the public sector. Unions now have minimal impact on wages or working conditions for most American workers.

Nowhere else have unions collapsed as they have in the United States. With a unionization rate dramatically below that of other countries, including Canada, the United States has achieved exceptional status (see Table 7). There remains great interest in unions among American workers; where employers do not resist, unions thrive. In the public sector, and among those private employers where workers have a free choice, workers are as likely to join a union as they ever were, and as likely as workers anywhere. In the past, as after 1886 and in the 1920s, when American employers broke unions, unions revived once a government committed to workplace democracy sheltered them from employer repression. If we see another such government, we may yet see another union revival.

Table 7
Union Membership Rates for the United States and Six Other Leading Industrial Economies, 1970 to 1990

1970 1980 1990
U.S.: Unionization Rate: All industries 30.0 24.7 17.6
U.S.: Unionization Rate: Manufacturing 41.0 35.0 22.0
U.S.: Unionization Rate: Financial services 5.0 4.0 2.0
Six Countries: Unionization Rate: All industries 37.1 39.7 35.3
Six Countries: Unionization Rate: Manufacturing 38.8 44.0 35.2
Five Countries: Unionization Rate: Financial services 23.9 23.8 24.0
Ratio: U.S./Six Countries: All industries 0.808 0.622 0.499
Ratio: U.S./Six Countries: Manufacturing 1.058 0.795 0.626
Ratio: U.S./Five Countries: Financial services 0.209 0.168 0.083

Note: The unionization rate reported is the number of union members out of 100 workers in the specified industry. The ratio shown is the unionization rate for the United States divided by the unionization rate for the other countries. The six countries are Canada, France, Germany, Italy, Japan, and the United Kingdom. Data on union membership in financial services in France are not available.

Source: Visser (1991): 110.
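
The ratios in the bottom three rows of Table 7 follow directly from the unionization rates above them. The short sketch below is an illustrative recalculation (not part of Visser's published figures); small differences in the third decimal place reflect rounding in the printed rates.

```python
# Recompute the U.S.-to-other-countries ratios shown in Table 7.
# Rates are union members per 100 workers, copied from the table.
us = {
    "All industries":     {1970: 30.0, 1980: 24.7, 1990: 17.6},
    "Manufacturing":      {1970: 41.0, 1980: 35.0, 1990: 22.0},
    "Financial services": {1970: 5.0,  1980: 4.0,  1990: 2.0},
}
others = {
    "All industries":     {1970: 37.1, 1980: 39.7, 1990: 35.3},  # six countries
    "Manufacturing":      {1970: 38.8, 1980: 44.0, 1990: 35.2},  # six countries
    "Financial services": {1970: 23.9, 1980: 23.8, 1990: 24.0},  # five countries (no French data)
}

for industry in us:
    ratios = {yr: round(us[industry][yr] / others[industry][yr], 3) for yr in (1970, 1980, 1990)}
    print(industry, ratios)
# Output is close to the published ratios: roughly 0.81, 0.62, 0.50 for all
# industries; 1.06, 0.80, 0.63 for manufacturing; 0.21, 0.17, 0.08 for finance.
```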

References

Alvarez, R. Michael, Geoffrey Garrett and Peter Lange. “Government Partisanship, Labor Organization, and Macroeconomic Performance,” American Political Science Review 85 (1992): 539-556.

Ansell, Christopher K. Schism and Solidarity in Social Movements: The Politics of Labor in the French Third Republic. Cambridge: Cambridge University Press, 2001.

Arnesen, Eric, Brotherhoods of Color: Black Railroad Workers and the Struggle for Equality. Cambridge, MA: Harvard University Press, 2001.

Bain, George S., and Farouk Elsheikh. Union Growth and the Business Cycle: An Econometric Analysis. Oxford: Basil Blackwell, 1976.

Bain, George S. and Robert Price. Profiles of Union Growth: A Comparative Statistical Portrait of Eight Countries. Oxford: Basil Blackwell, 1980.

Bernard, Phillippe and Henri Dubief. The Decline of the Third Republic, 1914-1938. Cambridge: Cambridge University Press, 1993.

Blewett, Mary H. Men, Women, and Work: Class, Gender and Protest in the New England Shoe Industry, 1780-1910. Urbana, IL: University of Illinois Press, 1988.

Boyle, Kevin, editor. Organized Labor and American Politics, 1894-1994: The Labor-Liberal Alliance. Albany, NY: State University of New York Press, 1998.

Brinkley, Alan. The End of Reform: New Deal Liberalism in Recession and War. New York: Alfred A. Knopf, 1995.

Brody, David. Workers in Industrial America: Essays on the Twentieth-Century Struggle. New York: Oxford University Press, 1985.

Cazals, Rémy. Avec les ouvriers de Mazamet dans la grève et l’action quotidienne, 1909-1914. Paris: Maspero, 1978.

Cohen, Lizabeth. Making A New Deal: Industrial Workers in Chicago, 1919-1939. Cambridge: Cambridge University Press, 1990.

Cronin, James E. Industrial Conflict in Modern Britain. London: Croom Helm, 1979.

Cronin, James E. “Labor Insurgency and Class Formation.” In Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925, edited by James E. Cronin and Carmen Sirianni. Philadelphia: Temple University Press, 1983.

Cronin, James E. and Carmen Sirianni, editors. Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925. Philadelphia: Temple University Press, 1983.

Dawley, Alan. Class and Community: The Industrial Revolution in Lynn. Cambridge, MA: Harvard University Press, 1976.

Ely, James W., Jr. The Guardian of Every Other Right: A Constitutional History of Property Rights. New York: Oxford, 1998.

Fink, Leon. Workingmen’s Democracy: The Knights of Labor and American Politics. Urbana, IL: University of Illinois Press, 1983.

Fink, Leon. “The New Labor History and the Powers of Historical Pessimism: Consensus, Hegemony, and the Case of the Knights of Labor.” Journal of American History 75 (1988): 115-136.

Foner, Philip S. Organized Labor and the Black Worker, 1619-1973. New York: International Publishers, 1974.

Foner, Philip S. Women and the American Labor Movement: From Colonial Times to the Eve of World War I. New York: Free Press, 1979.

Frank, Dana. Purchasing Power: Consumer Organizing, Gender, and the Seattle Labor Movement, 1919- 1929. Cambridge: Cambridge University Press, 1994.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Dividing Labor: Urban Politics and Big-City Construction in Late-Nineteenth Century America.” In Strategic Factors in Nineteenth-Century American Economic History, edited by Claudia Goldin and Hugh Rockoff, 447-64. Chicago: University of Chicago Press, 1991.

Friedman, Gerald. “Revolutionary Syndicalism and French Labor: The Rebels Behind the Cause.” French Historical Studies 20 (Spring 1997).

Friedman, Gerald. State-Making and Labor Movements: France and the United States 1876-1914. Ithaca, NY: Cornell University Press, 1998.

Friedman, Gerald. “New Estimates of United States Union Membership, 1880-1914.” Historical Methods 32 (Spring 1999): 75-86.

Friedman, Gerald. “The Political Economy of Early Southern Unionism: Race, Politics, and Labor in the South, 1880-1914.” Journal of Economic History 60, no. 2 (2000): 384-413.

Friedman, Gerald. “The Sanctity of Property in American Economic History” (manuscript, University of Massachusetts, July 2001).

Gall, Gilbert. Pursuing Justice: Lee Pressman, the New Deal, and the CIO. Albany, NY: State University of New York Press, 1999.

Gamson, William A. The Strategy of Social Protest. Homewood, IL: Dorsey Press, 1975.

Geary, Richard. European Labour Protest, 1848-1939. New York: St. Martin’s Press, 1981.

Golden, Miriam and Jonas Pontusson, editors. Bargaining for Change: Union Politics in North America and Europe. Ithaca, NY: Cornell University Press, 1992.

Griffith, Barbara S. The Crisis of American Labor: Operation Dixie and the Defeat of the CIO. Philadelphia: Temple University Press, 1988.

Harris, Howell John. Bloodless Victories: The Rise and Fall of the Open Shop in the Philadelphia Metal Trades, 1890-1940. Cambridge: Cambridge University Press, 2000.

Hattam, Victoria C. Labor Visions and State Power: The Origins of Business Unionism in the United States. Princeton: Princeton University Press, 1993.

Heckscher, Charles C. The New Unionism: Employee Involvement in the Changing Corporation. New York: Basic Books, 1987.

Hirsch, Barry T. and John T. Addison. The Economic Analysis of Unions: New Approaches and Evidence. Boston: Allen and Unwin, 1986.

Hirschman, Albert O. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA, Harvard University Press, 1970.

Hirschman, Albert O. Shifting Involvements: Private Interest and Public Action. Princeton: Princeton University Press, 1982.

Hobsbawm, Eric J. Labouring Men: Studies in the History of Labour. London: Weidenfeld and Nicolson, 1964.

Irons, Janet. Testing the New Deal: The General Textile Strike of 1934 in the American South. Urbana, IL: University of Illinois Press, 2000.

Jacoby, Sanford. Modern Manors: Welfare Capitalism Since the New Deal. Princeton: Princeton University Press, 1997.

Katznelson, Ira and Aristide R. Zolberg, editors. Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States. Princeton: Princeton University Press, 1986.

Kocka, Jurgen. “Problems of Working-Class Formation in Germany: The Early Years, 1800-1875.” In Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States, edited by Ira Katznelson and Aristide R. Zolberg, 279-351. Princeton: Princeton University Press, 1986.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Maddison, Angus. Dynamic Forces in Capitalist Development: A Long-Run Comparative View. Oxford: Oxford University Press, 1991.

Magraw, Roger. A History of the French Working Class, two volumes. London: Blackwell, 1992.

Milkman, Ruth. Women, Work, and Protest: A Century of United States Women’s Labor. Boston: Routledge and Kegan Paul, 1985.

Montgomery, David. The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1920. Cambridge: Cambridge University Press, 1987.

Mullin, Debbie Dudley. “The Porous Umbrella of the AFL: Evidence From Late Nineteenth-Century State Labor Bureau Reports on the Establishment of American Unions.” Ph.D. diss., University of Virginia, 1993.

Nolan, Mary. Social Democracy and Society: Working-Class Radicalism in Dusseldorf, 1890-1920. Cambridge: Cambridge University Press, 1981.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press, 1971.

Perlman, Selig. A Theory of the Labor Movement. New York: MacMillan, 1928.

Rachleff, Peter J. Black Labor in the South, 1865-1890. Philadelphia: Temple University Press, 1984.

Roediger, David. The Wages of Whiteness: Race and the Making of the American Working Class. London: Verso, 1991.

Scott, Joan. The Glassworkers of Carmaux: French Craftsmen in Political Action in a Nineteenth-Century City. Cambridge, MA: Harvard University Press, 1974.

Sewell, William H. Jr. Work and Revolution in France: The Language of Labor from the Old Regime to 1848. Cambridge: Cambridge University Press, 1980.

Shorter, Edward and Charles Tilly. Strikes in France, 1830-1968. Cambridge: Cambridge University Press, 1974.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1990.

Thompson, Edward P. The Making of the English Working Class. New York: Vintage, 1966.

Troy, Leo. Distribution of Union Membership among the States, 1939 and 1953. New York: National Bureau of Economic Research, 1957.

United States, Bureau of the Census. Census of Occupations, 1930. Washington, DC: Government Printing Office, 1932.

Visser, Jelle. European Trade Unions in Figures. Boston: Kluwer, 1989.

Voss, Kim. The Making of American Exceptionalism: The Knights of Labor and Class Formation in the Nineteenth Century. Ithaca, NY: Cornell University Press, 1993.

Ware, Norman. The Labor Movement in the United States, 1860-1895: A Study in Democracy. New York: Vintage, 1929.

Washington, Booker T. “The Negro and the Labor Unions.” Atlantic Monthly (June 1913).

Weiler, Paul. “Promises to Keep: Securing Workers Rights to Self-Organization Under the NLRA.” Harvard Law Review 96 (1983).

Western, Bruce. Between Class and Market: Postwar Unionization in the Capitalist Democracies. Princeton: Princeton University Press, 1997.

Whatley, Warren. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17 (1993), 525-58.

Wilentz, Robert Sean. Chants Democratic: New York City and the Rise of the American Working Class, 1788-1850. Oxford: Oxford University Press, 1984.

Wolman, Leo. Ebb and Flow in Trade Unionism. New York: National Bureau of Economic Research, 1936.

Zieger, Robert. The CIO, 1935-1955. Chapel Hill: University of North Carolina Press, 1995.

Zolberg, Aristide. “Moments of Madness.” Politics and Society 2 (Winter 1972): 183-207.

Citation: Friedman, Gerald. “Labor Unions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/labor-unions-in-the-united-states/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year Weeks Report Aldrich Report
1830 69.1 n.a.
1840 67.1 68.4
1850 65.5 69.0
1860 62.0 66.0
1870 61.1 63.0
1880 60.7 61.8
1890 n.a. 60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year   Census of Manufacturing   Jones Manufacturing   Owen Nonstudent Males   Greis Manufacturing   Greis All Workers   Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which there is available information. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1900 anthracite coal miners’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year Manufacturing Construction Railroads Bituminous Coal Anthracite Coal
1850s about 66 about 66
1870s about 62 about 60
1890 60.0 51.3
1900 59.6 50.3 52.3 42.8 35.8
1910 57.3 45.2 51.5 38.9 43.3
1920 51.2 43.8 46.8 39.3 43.2
1930 50.6 42.9 33.3 37.0
1940 37.6 42.5 27.8 27.2
1955 38.5 37.1 32.4 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.
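
As a rough check on the comparison above between coal mining and manufacturing (an illustrative calculation from the Table 3 figures, not part of the original sources), the sketch below computes how much shorter each industry's workweek was than manufacturing's in 1900 and 1910.

```python
# How much shorter was each industry's workweek than manufacturing's?
# Average weekly hours are taken from Table 3.
hours = {
    1900: {"Manufacturing": 59.6, "Construction": 50.3, "Railroads": 52.3,
           "Bituminous Coal": 42.8, "Anthracite Coal": 35.8},
    1910: {"Manufacturing": 57.3, "Construction": 45.2, "Railroads": 51.5,
           "Bituminous Coal": 38.9, "Anthracite Coal": 43.3},
}

for year, row in hours.items():
    mfg = row["Manufacturing"]
    for industry, h in row.items():
        if industry == "Manufacturing":
            continue
        shorter = 100 * (1 - h / mfg)
        print(f"{year} {industry}: about {shorter:.0f}% shorter than manufacturing")
# Anthracite mining comes out roughly 40% shorter than manufacturing in 1900,
# and about 24% shorter in 1910.
```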

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. In addition, Coleman and Pencavel find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
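
Fogel's claim that paid work will absorb less than one-fourth of discretionary time follows directly from the Table 6 figures. The snippet below is an illustrative check only, dividing lifetime work hours by lifetime discretionary hours for each benchmark year.

```python
# Share of lifetime discretionary time devoted to work (figures from Table 6).
discretionary = {1880: 225_900, 1995: 298_500, 2040: 321_900}
work = {1880: 182_100, 1995: 122_400, 2040: 75_900}

for year in discretionary:
    share = work[year] / discretionary[year]
    print(f"{year}: {share:.1%} of discretionary time spent working")
# Roughly 81% in 1880, 41% in 1995, and 24% (under one-fourth) projected for 2040.
```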

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee in the U.S. fell from 1,908 to 1,704 between 1950 and 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, greater than those in Denmark, and less than those in the USSR.

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity   US Men (1965, 1981)   US Women (1965, 1981)   USSR (Pskov) Men (1965, 1981)   USSR (Pskov) Women (1965, 1981)
Total Work 63.1 57.8 60.9 54.4 64.4 65.7 75.3 66.3
Market Work 51.6 44.0 18.9 23.9 54.6 53.8 43.8 39.3
Commuting 4.8 3.5 1.6 2.0 4.9 5.2 3.7 3.4
Housework 11.5 13.8 41.8 30.5 9.8 11.9 31.5 27.0
Activity   Japan Men (1965, 1985)   Japan Women (1965, 1985)   Denmark Men (1964, 1987)   Denmark Women (1964, 1987)
Total Work 60.5 55.5 64.7 55.6 45.4 46.2 43.4 43.9
Market Work 57.7 52.0 33.2 24.6 41.7 33.4 13.3 20.8
Commuting 3.6 4.5 1.0 1.2 n.a n.a n.a n.a
Housework 2.8 3.5 31.5 31.0 3.7 12.8 30.1 23.1

Source: Juster and Stafford (1991)
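
The percentage declines attributed to Greis (1984) at the start of this section follow directly from the annual-hours figures cited there. The snippet below is a back-of-the-envelope check, not a calculation taken from Greis's study.

```python
# Percentage decline in annual hours worked per employee, 1950-1979.
us_1950, us_1979 = 1908, 1704
eur_1950, eur_1979 = 2170, 1698   # average of twelve Western European countries

us_decline = 100 * (us_1950 - us_1979) / us_1950
eur_decline = 100 * (eur_1950 - eur_1979) / eur_1950
print(f"U.S.: {us_decline:.1f}% decrease")             # about 10.7%
print(f"Western Europe: {eur_decline:.1f}% decrease")  # about 21.8%
```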

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1866. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours and by the late 1860s, efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago, a bomb thrown at an eight-hour rally and the gunfire that followed left seven policemen dead, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law in 1874 set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912), was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, which provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period, the Adamson Act of 1916, which was passed to counter a threatened nationwide railroad strike and granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power, and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or Saturday off — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, compared with only 32 in 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford’s workers accounted for more than half of the nation’s approximately 400,000 workers with five-day weeks. However, Ford’s motives were questioned by many employers, who argued that cutting hours below about forty-eight per week brought no further productivity gains. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours’ Reduction during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing the remaining work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such a seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933, the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours’ Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. In 1946, with the end of the war, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to 8-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an 8-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told of the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation for being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If they set the workweek too high, workers may quit and few workers will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.
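
The idea that workers "buy" leisure as wages rise can be illustrated with a deliberately simple model. The sketch below is not drawn from any study cited here: it assumes Stone-Geary preferences over consumption and leisure with a subsistence consumption requirement, and every parameter value is purely illustrative. With these preferences, desired weekly hours are h = a*T + (1 - a)*c_min/w, so hours fall toward a floor of a*T as the real wage w rises.

```python
# A deliberately simple income-leisure sketch (all parameters are illustrative).
# Preferences: U = a*ln(c - c_min) + (1 - a)*ln(T - h), with consumption c = w*h.
# Maximizing U over hours h gives  h = a*T + (1 - a)*c_min / w.

T = 110        # weekly hours left after sleep (assumed)
a = 0.35       # weight on consumption in utility (assumed)
c_min = 4.3    # subsistence consumption per week, in the same units as w*h (assumed)

def desired_hours(w):
    """Weekly hours chosen by a worker with these preferences at real wage w."""
    return a * T + (1 - a) * c_min / w

for w in (0.10, 0.25, 0.50, 1.00):
    print(f"real wage {w:.2f}: about {desired_hours(w):.0f} hours per week")
# Desired hours fall from roughly 66 per week at the lowest wage toward about 41
# as wages rise: part of the higher potential income is "spent" on leisure.
```

The point is qualitative only: when earnings barely cover subsistence, long hours are unavoidable, while the same preferences imply a shorter chosen workweek once productivity and wages have risen.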

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.
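
Taking the fractions reported above at face value, the arithmetic below tallies how much of the World War I-era hours decline is attributed to each named factor. This is only an illustrative back-of-the-envelope summary, not Whaples' own accounting, and the residual simply lumps together electrification, legislation, and everything else.

```python
# Rough tally of attributed shares of the World War I-era decline in the workweek,
# using the fractions quoted in the text (illustrative only).
shares = {
    "real wage growth":       1 / 2,
    "reduced immigration":    1 / 5,
    "increased unionization": 1 / 7,
}
explained = sum(shares.values())
print(f"attributed to the three quantified factors: {explained:.0%}")  # about 84%
print(f"remaining for electrification, legislation, and other factors: {1 - explained:.0%}")  # about 16%
```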

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
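
To put this wage-hours relationship in concrete terms, the sketch below applies the estimated range of roughly -0.05 to -0.13 to a 10 percent wage difference and a workweek of about 50 hours, typical of 1919. It is an illustrative calculation, not a result reported in Whaples (1990a).

```python
# What does an hours-wage elasticity of -0.05 to -0.13 imply in practice?
# Illustrative inputs: a 10 percent wage difference and a 50-hour baseline workweek.
baseline_hours = 50.0
wage_difference = 0.10   # one city's wages are 10 percent higher than another's

for elasticity in (-0.05, -0.13):
    pct_change = elasticity * wage_difference          # e.g., -0.005 means half a percent
    minutes_shorter = abs(baseline_hours * pct_change) * 60
    print(f"elasticity {elasticity}: workweek about {minutes_shorter:.0f} minutes shorter")
# Roughly 15 to 40 minutes shorter per week: a modest but measurable effect.
```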

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996) is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work': Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s marked the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate, because the 1920s were a period of vigorous, vital economic growth. They mark the first truly modern decade, and dramatic economic developments are found in those years. There was a rapid adoption of the automobile to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access, and this was limited to the largest cities. The flexibility of car access changed this, and the growth of suburbs began to accelerate. The demands of trucks and cars led to a rapid growth in the construction of all-weather surfaced roads to facilitate their movement. The rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers, and the United States moved to a dominant position in international trade and global business. These things make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of the overall production in the economy, GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929 according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth and twentieth century standards these were relatively rapid rates of real economic growth and they would be considered rapid even today.
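
Compounding these annual rates over the nine years from 1920 to 1929 (an illustrative calculation, not an estimate from HSUS) gives

(1.042)^{9} \approx 1.45 \qquad \text{and} \qquad (1.027)^{9} \approx 1.27,

that is, real GNP roughly 45 percent higher, and real GNP per capita roughly 27 percent higher, at the end of the decade than at its beginning.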

There were several interruptions to this growth. In mid-1920 the American economy began to contract and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shutdown of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild, but the contraction accelerated after the crash of the stock market at the end of October. Real total GNP fell 10.2 percent from 1929 to 1930 while real GNP per capita fell 11.5 percent from 1929 to 1930.

Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employ. In the scramble to beat price increases during 1919 firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce—the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable though they were more likely to fall than to rise.

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing was spread widely through the population. New products and processes of producing those products drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined. (Whaples 2001) New products and services created new markets, such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s. (Figure 4) There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration. Urban families have tended to have fewer children than rural families because urban children do not augment family incomes through their work as unpaid workers as rural children do. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares, New England and the Plains. For New England this was a continuation of a long-term trend. The population share for the Plains region had been rising through the nineteenth century. In the interwar period its agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the Far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rates, or fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. For these industries male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties. Unskilled males received on average 35 percent more than females during the twenties. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent between 1923 and 1929. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties. The real daily wage rate for farmworkers in the twenties, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.
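
To put the reported differentials in dollar terms, consider a purely hypothetical example (the $20 base figure is illustrative and is not taken from Table 1): if unskilled male workers in an industry earned $20 per week, then

\text{skilled/semi-skilled males} \approx 1.35 \times \$20 = \$27.00, \qquad \text{females} \approx \$20 / 1.35 \approx \$14.80.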

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-21 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to the workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government-sponsored unemployment insurance, minimum wage proposals, maximum hours proposals, and social security programs. As Lloyd Ulman (1961) points out, the AFL, under Gompers’s direction, differentiated among proposed statutes on the basis of whether they would or would not aid collective bargaining. After Gompers’s death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. There were only a few unions that were closer to today’s industrial unions, where the required skills were much less (or nonexistent), making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell over 72.6 percent between 1920 and 1921 and, though rising in the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages rather than on first mortgages as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-21 and 1929-33 depressions from those that arose because agriculture was declining relative to the other sectors. Slowly growing demand for basic agricultural products and significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid extensive economic growth in the nonagricultural sectors of the economy, required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to voluntarily move from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.
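
A simple numerical sketch shows the mechanism (the elasticity value is hypothetical, chosen only for illustration): if the income elasticity of demand for a staple such as cereal grains were 0.2, then

\%\Delta Q_{d} \approx 0.2 \times \%\Delta Y,

so a 10 percent rise in consumer incomes would raise the quantity demanded by only about 2 percent; with supply expanding faster than that because of the productivity gains just described, the real prices of such staples would tend to fall.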

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements In Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-14 period agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, this was not tried until Roosevelt took office. Rather, there was a reliance upon tariffs, the traditional method of aiding injured groups, and upon the “sanctioning and promotion of cooperative marketing associations.” In 1921 Congress attempted to control the grain exchanges and compel merchants and stockyards to charge “reasonable rates” through the Packers and Stockyards Act and the Grain Futures Act. In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-14 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration secured passage of the Agricultural Marketing Act in 1929 and the Hawley-Smoot tariff in 1930. The Agricultural Marketing Act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, in manufacturing there was a rapid rate of growth of productivity during the twenties. Rising real wages, resulting from immigration restrictions and the slower growth of the resident population, spurred this productivity growth. Transportation improvements and communications advances were also responsible. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the Northeast was the first area to really develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to create manufacturing bases in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions—excluding the West North Central region—gained. There was considerable variation in the growth of the industries and shifts in their ranking during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in the use of nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first developed by Frederick W. Taylor, were applied on a wider scale.

One of the important forces contributing to mass production and increased productivity was the transfer to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and the output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade prior to the 1920s, but it increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

Table: Average Annual Rates of Labor Productivity and Capital Productivity Growth.

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was that it served as an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction. These replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, the continuing advances were the “cumulative process resulting from a vast number of successive small changes.” Because of these continuing advances in the quality of the tires and in the manufacturing of tires, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizmann fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms even when becoming vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Because of these changes in the size and structure of the firm during the First World War, E. I. du Pont de Nemours and Company was led to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop between 1919 and 1921 a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a matter of public policy, concern with monopolies diminished in the 1920s even though firms were growing larger. But the growing size of businesses later became one of the convenient scapegoats on which the Great Depression was blamed.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed, and where firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network proved inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)
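
Taken together, the two ratios just cited imply a substantial rise in output per worker at the average plant (an inference from those figures, not a separately reported statistic):

\frac{6.87}{4.41} \approx 1.56,

that is, the average factory produced roughly 56 percent more tires per employee in 1937 than in 1921.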

The steel industry was already highly concentrated by 1920 as U.S. Steel had around 50 percent of the market. But U. S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U. S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series included many of the smaller mergers. The series constructed by Carl Eis (1969) only includes the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but after the onset of the Great Depression the New Dealers moved in the opposite direction, largely exempting business from the antitrust laws and encouraging industries to cartelize under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable and higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms that are fixing prices. Vertical price-fixing involves agreements on prices between firms at different stages of production, such as a manufacturer setting the prices at which its distributors resell its products. It, too, tends to eliminate substitutes and makes demand less elastic.

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of sanitary pottery (bathroom fixtures). The evidence in the case suggested that the firms were not very successful at doing so, but the court found that they were guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent-setting in that it held explicit price-fixing conspiracies to be illegal per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.
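
Compounding these average annual rates over the periods in question (again an illustrative calculation rather than a figure from the underlying sources) gives

(1.012)^{20} \approx 1.27 \qquad \text{versus} \qquad (1.035)^{18} \approx 1.86,

so output per labor-hour rose roughly 27 percent over the two decades before 1919 but roughly 86 percent over the eighteen years after 1919.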

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and declined, while natural gas and LP (or liquefied petroleum) gas were relatively unimportant. These changes, especially the declining coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply due to productivity advances in coal production and in the decreasing demand for coal. The demand fell as industries began turning from coal to electricity and because of productivity advances in the use of coal to create energy in steel, railroads, and electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by the steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally just reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many to their home region. The local alternatives were few, and ignorance of alternatives outside the Appalachian rural areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California field in 1921. New discoveries in Powell, Texas and Smackover, Arkansas further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down the real prices of crude oil and petroleum derivatives, and the growing production capacity led to a general declining trend in petroleum prices. McMillin and Parker (1994) argue that supply shocks generated by these new discoveries were a factor in the business cycles during the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.
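
The yield figures above translate into a striking difference per barrel. The short calculation below is a rough illustration only; it assumes the conventional 42-gallon barrel of crude and uses the 15 and 45 percent yields cited in the text.

```python
# A rough illustration of the yield figures above. Assumes the conventional
# 42-gallon barrel of crude; the 15 and 45 percent gasoline yields are the
# figures cited in the text for simple distillation versus cracking.

BARREL_GALLONS = 42

def gasoline_per_barrel(yield_fraction: float) -> float:
    return BARREL_GALLONS * yield_fraction

print(f"Distillation (about 15% yield): {gasoline_per_barrel(0.15):.1f} gallons of gasoline")
print(f"Cracking (about 45% yield): {gasoline_per_barrel(0.45):.1f} gallons of gasoline")
# 6.3 versus 18.9 gallons: roughly three times as much gasoline from the same barrel.
```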

The petroleum companies also developed new ways to distribute gasoline to motorists that made it more convenient to purchase gasoline. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans and the purchaser used a funnel to pour the gasoline from the can into the car. Then "filling stations" appeared, which specialized in filling cars' tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or contract with independent stations to exclusively distribute their gasoline. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to "service stations" to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws and were quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although generally such laws were passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist this.

Electricity

By the mid 1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and greater distance over which electricity could be transmitted more than offset the necessity for transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and the development of an efficient, lower cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing electrical appliances for homes, and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a "fair return" on their investments. Disagreements over what constituted a "fair return" and the calculation of the rate base led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost of making these calculations, rates tended to be in the hands of the electric utilities, which, it has been suggested, did not lower rates adequately to reflect the rising productivity and lowered costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not this increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.
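
The passage above does not spell out the utilities' actual rate formulas; the sketch below simply illustrates the textbook logic of segmenting customers by demand elasticity, using the standard inverse-elasticity markup rule and purely hypothetical numbers, to show why less elastic users would be charged more per kilowatt-hour.

```python
# Not the utilities' actual rate formula: just the textbook markup rule for a
# seller segmenting customers by demand elasticity, P = MC * e / (e - 1),
# shown with hypothetical numbers to illustrate why less elastic users end up
# paying more per kilowatt-hour.

def segment_price(marginal_cost: float, elasticity: float) -> float:
    """Profit-maximizing price for a segment with |demand elasticity| > 1."""
    return marginal_cost * elasticity / (elasticity - 1)

mc = 1.0  # cents per kilowatt-hour, purely hypothetical
print(f"Less elastic segment (|e| = 1.5): {segment_price(mc, 1.5):.2f} cents per kWh")
print(f"More elastic segment (|e| = 4.0): {segment_price(mc, 4.0):.2f} cents per kWh")
```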

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford's introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs, and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government's policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but doing so fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 presented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates that were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for the railroad when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
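
The recapture provision amounts to a simple rule. The sketch below restates that arithmetic with hypothetical figures; the 6 percent threshold and the half-and-half split of the excess are from the act as described above, while the dollar amounts are invented for illustration.

```python
# A sketch of the recapture arithmetic described above, using hypothetical
# figures. The 6 percent threshold and the half-and-half split come from the
# act as summarized in the text; the railroad's numbers are invented.

def recapture(fair_value: float, earnings: float, threshold: float = 0.06) -> dict:
    """Split of earnings above the threshold return on fair property value."""
    excess = max(0.0, earnings - threshold * fair_value)
    return {
        "excess_earnings": excess,
        "railroad_contingency_fund": excess / 2,  # held for the road's own hard times
        "icc_loan_fund": excess / 2,              # loans to railroads in need of assistance
    }

# Hypothetical road: $100 million fair value earning an 8 percent return ($8 million).
print(recapture(fair_value=100_000_000, earnings=8_000_000))
# Excess of $2 million: $1 million reserved for the road, $1 million to the ICC loan fund.
```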

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act was directed to encourage railroad consolidation, but little came of this in the 1920s. In order to facilitate its control of the railroads, the ICC was given two additional powers. The first was the control over the issuance or purchase of securities by railroads, and the second was the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic exited from the railroads much more quickly. As the network of all weather surfaced roads increased, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were incorporated into Greyhound Buslines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal Road-Aid Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these primarily came from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) However, the use of gasoline taxes moved closer to the goal of users paying the costs of the highways. Nor did trucks have to pay for all of the highway construction, since automobiles jointly used the highways. Highways had to be constructed in more costly ways in order to accommodate the larger and heavier trucks. Ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of the truck traffic. Gasoline taxes tended to do this.

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. But, by 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroad’s market, and, though the regulators failed to understand this at the time, the railroad’s monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone, as new electronic amplifiers made long distance telephone calls between the east and west coasts possible in 1915. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor in this was the dramatic decline in farm incomes in the early twenties. The second reason was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation and the telephone was no longer as crucial.

Ottmar Mergenthaler's development of the linotype machine in the late nineteenth century had irrevocably altered printing and publishing. This machine, which quickly created a line of soft, lead-based metal type that could be printed, melted down and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual cast letter matrices picked out from compartments in drawers to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual letter matrix placed back into its compartment in its drawer for use in the next printing job. Because hand composition was so laborious, newspapers often were not published every day and did not contain many pages, resulting in many newspapers in most cities.

In contrast to this laborious process, the linotype used a keyboard upon which the operator typed the words in one of the lines in a news column. Matrices for each letter dropped down from a magazine of matrices as the operator typed each letter and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed the machine mechanically cast the line of matrices into a line of lead type. The line of lead type was ejected into a tray and the letter matrices mechanically returned to the magazine while the operator continued typing the next line in the news story. The first Mergenthaler linotype machine was installed in the New York Tribune in 1886. The linotype machine dramatically lowered the costs of printing newspapers (as well as books and magazines). Prior to the linotype a typical newspaper averaged no more than 11 pages and many were published only a few times a week. The linotype machine allowed newspapers to grow in size and they began to be published more regularly. A process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler linotype machine the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse's KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations on the radio dial and deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912 Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote this off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened up a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system where individuals could purchase time to broadcast a message transmitted to other stations in the toll network using AT&T's long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee anytime ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station's power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T's creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience and in return received a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became "department stores of finance." Banks opened up installment (or personal) loan departments, expanded their mortgage lending, opened up trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers' control and reduced lending during the 1920-21 depression, began relying more on retained earnings and stock and bond issues to finance investment and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. A high bank failure rate in the 1920s has usually been explained by "overbanking," or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there had been overbanking, on average each bank would have been underutilized, resulting in intense competition for deposits, higher costs, and lower earnings; a common explanation for such overbanking was the free entry of banks as long as they achieved the minimum requirements then in force. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable if these changes had not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the twenties. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis during the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning asset portfolios and gained expertise in the securities markets, larger ones established investment departments and by the late twenties were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities market exhibited perhaps the most dramatic growth of the noncommercial bank financial intermediaries during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. The mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts' interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties, especially common and preferred stock, and in the trading of existing shares of those securities. (Figure 24) The late twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities; the automobile manufacturers produced over four and a half million new cars in 1929; and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. "Playing the market" seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3 and then slid to 320 on October 21. In the following week the stock market "crashed," with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198, 183 points less than the September 3 peak.
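
Restated as percentages, the slide in the index was severe. The calculation below uses only the index values quoted above; it is arithmetic, not additional data.

```python
# The index values quoted above, restated as point and percentage declines
# from the September 3 peak. Arithmetic only; the levels are from the text.

PEAK = 381  # Dow-Jones index, September 3, 1929

observations = {
    "October 21": 320,
    "October 29 (Black Tuesday)": 230,
    "November 13 (low for 1929)": 198,
}

for date, level in observations.items():
    drop = PEAK - level
    print(f"{date}: {level}  ({drop} points, {drop / PEAK:.0%} below the peak)")
# By November 13 the index stood roughly 48 percent below its September peak.
```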

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not illegal until the 1934 Securities and Exchange Act, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool a group of speculators would pool large amounts of their funds and then begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would "churn" the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would decide to purchase the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock's price usually fell quickly, bringing large losses for the unsuspecting outside investors while reaping large gains for the pool insiders.

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades: the usual requirement was 10 to 15 percent of the purchase price and, apparently, more often around 10 percent. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in the fall of 1928, well before the crash and at the urging of a special New York Clearinghouse committee, and by the fall of 1929 margin requirements were the highest in the history of the New York Stock Exchange. One brokerage house required the following of its clients: securities with a selling price below $10 could be purchased only for cash; securities selling for $10 to $20 carried a 50 percent margin requirement; securities of $20 to $30, a requirement of 40 percent; and securities priced above $30, a margin of 30 percent of the purchase price. In the first half of 1929 margin requirements on customers' accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw their brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
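
The tiered schedule quoted above for one brokerage house can be written out as a simple rule. The sketch below is an illustration of how such a house-level requirement worked, not a general exchange rule; the example purchase is hypothetical.

```python
# The tiered schedule reported above for one brokerage house, written as a
# function. An illustration of how such a house rule worked, not a general
# New York Stock Exchange requirement; the example purchase is hypothetical.

def required_margin(price_per_share: float, shares: int) -> float:
    """Customer's own funds required under the quoted schedule."""
    cost = price_per_share * shares
    if price_per_share < 10:
        rate = 1.00   # securities selling below $10: cash only
    elif price_per_share < 20:
        rate = 0.50
    elif price_per_share < 30:
        rate = 0.40
    else:
        rate = 0.30
    return cost * rate

# Hypothetical purchase: 100 shares at $50 requires $1,500 of the customer's own
# money, with the remaining $3,500 carried on a broker's loan.
print(required_margin(50, 100))
```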

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day bringing on the third call for more margin in that week. On Black Thursday, October 24, prices initially fell sharply, but rallied somewhat in the afternoon so that the net loss was only 7 points, but the volume of thirteen million shares set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow Jones index fell 38 points on a volume of nine million shares—three million in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow Jones Index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to slowly rise and by April of 1930 had increased 96 points from the low of November 13, "only" 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their depressing decline until the low point was reached in the summer of 1932.

There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, "The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before." But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929, stock prices were where they should have been and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash. The market broke each time news arrived of advances in congressional consideration of the Hawley-Smoot tariff. However, the virtually perfect foresight that Wanniski's explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person's subjective expectations of each firm's future earnings and dividends and the future prices of shares of each firm's stock. Because of this element of subjectivity, not only can we never accurately know those values, but also we can never know how they varied among individuals. The market price we observe will be the end result of all of the actions of the market participants, and the observed price may be different from the price almost all of the participants expected.

In fact, there are some indications that there were differences in 1928 and 1929. Yields on common stocks were somewhat lower in 1928 and 1929. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers' loans through 1927, the rates on brokers' loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow-Jones index and related this to the DJI. Through 1927 the two track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andre Shleifer (1991). They examined closed-end mutual funds, a type of fund in which investors wishing to liquidate must sell their shares to other individual investors; because such a fund simply holds a portfolio of securities with observable market prices, its fundamental value can be measured exactly and compared with the price of the fund's own shares. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929, the Standard and Poor's composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929. There was a sharp divergence between the growth of stock prices and dividends; there were increasing premiums on call and time brokers' loans in 1928 and 1929; margin requirements rose; and stock market volatility rose in the wake of the 1929 stock market crash.
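
The measurement idea behind the closed-end fund evidence can be stated compactly: the fund's holdings have an observable market value per share, so any premium of the fund's own share price over that value gauges investor optimism directly. The sketch below illustrates the calculation with hypothetical numbers chosen to echo the roughly 30 percent figure cited above.

```python
# The measurement idea behind the closed-end fund evidence, with hypothetical
# numbers: the fund's portfolio has an observable market value (net asset
# value per share), so a premium of the fund's share price over that value is
# a direct gauge of investor optimism.

def premium_over_nav(fund_share_price: float, nav_per_share: float) -> float:
    """Proportional premium (or, if negative, discount) to net asset value."""
    return (fund_share_price - nav_per_share) / nav_per_share

# A hypothetical fund trading at $130 while holding securities worth $100 per
# share carries a 30 percent premium -- the order of magnitude of the
# overvaluation De Long and Shleifer attribute to investor optimism in 1929.
print(f"{premium_over_nav(130.0, 100.0):.0%}")
```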

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that "While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends." As a result investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated, and they were more likely to be caught up in the euphoria of the boom and bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors in the experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals govern the overall movements. And the end of the long bull market was almost certainly governed by this. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of that year and was clearly evident by August of 1929. By the middle of August, the rise in stock prices had slowed down as better information on the contraction was received. There were repeated statements by leading figures that stocks were "overpriced," and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased, and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell more. The late October crash made the decline occur much more rapidly, and the margin purchases and consequent forced selling of many of those stocks contributed to a more severe price fall. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The stock market crash did make the downturn become more severe beginning in November 1929. It reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty helping to bring on the contraction (Flacco and Parker, 1992). Though stock market prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either decide to buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.
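
The economics of the one-price policy turn on the trade-off between markup and turnover. The sketch below illustrates that logic with invented figures; nothing in it comes from the historical record beyond the general claim in the paragraph above.

```python
# Invented figures only: how a thinner markup combined with faster inventory
# turnover can yield a larger gross profit on the same stock of goods -- the
# logic behind the one-price, high-volume policy described above.

def annual_gross_profit(inventory_cost: float, markup: float, turns_per_year: float) -> float:
    """Markup earned on each turn of the inventory, times the number of turns."""
    return inventory_cost * markup * turns_per_year

haggling_store = annual_gross_profit(inventory_cost=10_000, markup=0.50, turns_per_year=2)
one_price_store = annual_gross_profit(inventory_cost=10_000, markup=0.20, turns_per_year=6)
print(haggling_store, one_price_store)  # prints 10000.0 12000.0
```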

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both were headquartered in Chicago because of its central location in the nation's rail network, and both had benefited from the advent of Rural Free Delivery in 1896 and low cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these in the central business district (CBD), Wood located many on major streets closer to the residential areas. These moves of Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

Shopping centers, another innovation in retailing that began in the twenties, were not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as the ownership and use of the car began expanding, population began to move out of the crowded central cities toward the more open suburbs. When General Robert Wood set Sears off on its development of urban stores, he located these not in the central business district (CBD) but as free-standing stores on major arteries away from the CBD with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
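
The adjustment mechanism described above can be caricatured in a few lines. The sketch below is a deliberately stylized rendering of the textbook price-specie-flow story, with all parameters hypothetical; as the paragraph notes, the prewar system rarely worked this mechanically in practice.

```python
# A deliberately stylized sketch of the textbook price-specie-flow mechanism
# described above. All numbers and the 'sensitivity' parameter are hypothetical.

def adjust(gold_stock: float, price_level: float, trade_deficit: float,
           rounds: int = 5, sensitivity: float = 0.5) -> None:
    for _ in range(rounds):
        previous_gold = gold_stock
        gold_stock -= trade_deficit                # the deficit is settled by a gold outflow
        price_level *= gold_stock / previous_gold  # money stock, and so prices, fall in proportion
        trade_deficit *= (1 - sensitivity)         # cheaper exports and dearer imports shrink the deficit
        print(f"gold={gold_stock:7.1f}  prices={price_level:.3f}  deficit={trade_deficit:5.1f}")

# Hypothetical deficit country: gold stock of 1000 and an initial trade deficit of 50.
adjust(gold_stock=1000.0, price_level=1.0, trade_deficit=50.0)
```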

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped domestic circulation of gold. Second, the “gold exchange” system was created. Most countries held their international reserves in the form of U.S. dollars or British pounds and conducted international transactions in dollars or pounds; the arrangement worked only so long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed exchange rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to the importation of gold that the French authorities did not allow to expand the money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal actions. First, the United States had to run an import surplus or, on net, export capital out of the United States to provide a pool of dollars overseas. Germany then had either to have an export surplus or else import American capital so as to build up dollar reserves—that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries that then shipped them back to the United States as payment on their U.S. debts. If these conditions did not occur (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated), disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act of 1921, which raised tariffs, particularly on farm products. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff of 1921, and its protection on many items was extremely high, ranging from 60 to 100 percent ad valorem (that is, as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those in the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930 and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929, the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

The United States held the largest single share of the world’s monetary gold, about 40 percent, by 1920. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which it lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. Gold was also entering the United States so that foreigners could obtain dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government. The first was that federal spending on public works could be an important force in reducing fluctuations in investment. Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With the onset of the First World War, the rates were dramatically increased. To obtain additional revenue in 1918, marginal rates were again increased. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that more than 30 percent of the nation’s income recipients were subject to income taxes by 1918. However, through the purchase of tax-exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high income taxpayers and their share of total taxes paid declined as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919 but the surtax rates, which made the income tax highly progressive, were retained. (Smiley and Keehn, 1995)

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how the rates should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the tax rates. They believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent. (Smiley and Keehn, 1995)

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)
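
To see concretely how such marginal rates translate into tax bills, the short sketch below applies a purely hypothetical bracket schedule; the brackets and the $50,000 income are illustrative assumptions, not the actual 1920s schedules discussed above, but the mechanics of applying each rate only to the income falling within its bracket are the same.

# Minimal sketch of how a progressive marginal-rate schedule is applied.
# The brackets below are hypothetical and illustrative only; they are NOT
# the actual 1920s federal schedules discussed in the text.

HYPOTHETICAL_BRACKETS = [
    (0, 4_000, 0.02),        # 2% on the first $4,000 of taxable income
    (4_000, 20_000, 0.08),   # 8% on income between $4,000 and $20,000
    (20_000, None, 0.25),    # 25% top marginal rate on income above $20,000
]

def tax_liability(income, brackets=HYPOTHETICAL_BRACKETS):
    """Apply each marginal rate only to the slice of income inside its bracket."""
    tax = 0.0
    for lower, upper, rate in brackets:
        if income <= lower:
            break
        taxable = (income if upper is None else min(income, upper)) - lower
        tax += taxable * rate
    return tax

income = 50_000
tax = tax_liability(income)
print(f"tax = {tax:,.0f}, average rate = {tax / income:.1%}, top marginal rate = 25%")

The illustration simply shows that a 25 percent top marginal rate implies a much lower average rate on a given income, a distinction worth keeping in mind when reading the rate-cut figures above.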

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to reduce the outstanding federal debt, which declined by 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at or close to full employment throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out as currency and held in a vault somewhere.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create 12 central banks when it created the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the 12 district banks; it was composed of five presidential appointees plus the current secretary of the treasury and the comptroller of the currency. All national banks had to become members of the Federal Reserve System, the Fed, and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be held on deposit in the district bank. Member commercial banks were allowed to rediscount commercial paper at their district bank and were given Federal Reserve currency in return. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations that involved the purchasing and selling of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary roles were to act as a lender of last resort to prevent banking panics and to serve as a check-clearing mechanism for the nation’s banks. The Federal Reserve Board and the governors of the district banks were jointly charged with carrying out these functions, but the division of responsibilities was not clear, and a struggle for power ensued, mainly between the New York Federal Reserve Bank, led through 1928 by J. P. Morgan’s protégé Benjamin Strong, and the Federal Reserve Board. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the Gold Standard and the Real Bills Doctrine. The Gold Standard was supposed to be quasi-automatic, with an effective limit to the quantity of money. However, the Real Bills Doctrine (which required that all loans be made on short-term, self-liquidating commercial paper) had no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to lead to the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. Actually the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and granted discounts on banks’ customer loans used to purchase war bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918: in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and discount the Victory bonds necessary to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action was past and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were rapidly falling, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered yet again to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and raised the discount rate from its 4 percent level because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that both this contraction and the 1927 contraction were related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter the slump. Between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three separate steps. In addition to moderating the mild business slump, the expansionary policy was also intended to reduce American interest rates relative to British interest rates. This reversed the gold flow back toward Great Britain, allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States and especially in Southeastern Florida. Land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market boom. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed but the stock market boom continued.

The American economy entered into another mild business recession in the fall of 1926 that lasted until the fall of 1927. One of the factors in this was Henry Ford’s shutdown of all of his factories to change over from the Model T to the Model A. His employees were left without a job and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began the Fed had already taken steps to counteract the business slump and reduce the gold inflow. In early 1927 the Fed reduced discount rates and made large securities purchases. One result of this was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the gold exports went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to maintain the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring that to an end. To do this it sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York Federal Reserve Bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it and the other district banks were unwilling to undertake. They insisted that discount rates had to be increased. The Federal Reserve Board countered that this general policy change would slow down economic activity across the board rather than be specifically targeted at stock market speculation. The result was that little was done for a year: rates were not raised, nor were open market purchases undertaken. Rates were finally raised to 6 percent in August of 1929. By that time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced discount rates to 4.5 percent. In January it again decreased discount rates and began a series of discount rate decreases until the rate reached 2.5 percent at the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the Southeast in November and December of 1930, and in its wake the public’s holding of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not forecast the contraction from 1929 to 1933. Rather it was the depression of the 1930s and the Second World War that interrupted the economic growth begun in the 1920s and resumed after the Second World War. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress resume. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, are echoed in the 1990s in the effects of the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Sons, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L, et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Eric. A Tower in Babel: A History of Broadcasting in the United States: Vol. I—to 1933. New York: Oxford University Press, 1966.

Barnouw, Eric. The Golden Web: A History of Broadcasting in the United States: Vol. II—1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, 17 November, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Elzinga, Kenneth. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allen Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company,” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, Edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: the Belknap Press Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

De Long, J. Bradford, and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (September 1991): 675-700.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (October 1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Metheun, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics XII (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University of Kansas Press, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History, Vol. 11 (Fall 1987), pp. 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Simon. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (July 1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelly reprint of 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (June 1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Liebcap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, Ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: NBER, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1967): 243-47.

McDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises Edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 48 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr., U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et. al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876,” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 91 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History. 2d series. 17 (1988): 129-42.

Smiley, Gene and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” The Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneurs: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review, 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review. 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions,” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman, Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Case Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 47 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives. 4 (Spring 1990): 67-83.

White, Eugene N., Ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s”. EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

The Use of Quantitative Micro-data in Canadian Economic History: A Brief Survey

Livio Di Matteo, Lakehead University

Introduction

From a macro perspective, Canadian quantitative economic history is concerned with the collection and construction of historical time series data as well as the study of the performance of broad economic aggregates over time. The micro dimension of quantitative economic history focuses on individual and sector responses to economic phenomena. In particular, micro economic history is marked by the collection and analysis of data sets rooted in individual economic and social behavior. This approach uses primary historical records like census rolls, probate records, assessment rolls, land records, parish records and company records to construct sets of socio-economic data used to examine the social and economic characteristics and behavior of individuals and their society, both cross-sectionally and over time.

The expansion of historical micro-data studies in Canada has been a function of academic demand and supply factors. On the demand side, there has been a desire for more explicit use of economic and social theory in history, and micro-data studies that make use of available records on individuals appeal to historians interested in understanding aggregate trends and uncovering the micro-underpinnings of larger macroeconomic and social relationships. For example, in Canada the late nineteenth century was a period of intermittent economic growth, and analyzing how that growth record affected different groups in society requires studies that disaggregate the population into sub-groups. One way of doing this that became attractive in the 1960s was to collect micro-data samples from relevant census, assessment or probate records.

On the supply side, computers have lowered research costs, making the analysis of large data sets feasible and cost-effective. The proliferation of low-cost personal computers, statistical packages and spreadsheets has led to another revolution in micro-data analysis, as computers are now routinely taken into archives so that data collection, input and analysis can proceed even more efficiently.

In addition, studies using historical micro-data are an area where economic historians trained either as economists or as historians have been able to find common ground. Many of the pioneering micro-data projects in Canada were conducted by historians with some training in quantitative techniques, much of which was acquired “on the job” out of intellectual interest and excitement rather than through graduate school training. Historians and economists are united by their common analysis of primary micro-data sources and their shared use of sophisticated computer equipment, linkage software and statistical packages.
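
As a rough illustration of what such record linkage involves, the sketch below matches individuals across two invented sources on name and birth year. The records, field names, and the exact-name-plus-tolerant-birth-year rule are hypothetical assumptions for illustration, not the procedure of any particular Canadian project; real projects rely on far more sophisticated name standardization and matching software.

# Purely illustrative sketch of linking individuals across two historical sources.
# The records and the matching rule are hypothetical; real projects use much more
# elaborate name standardization and probabilistic matching.

census_1871 = [
    {"name": "john smith", "birth_year": 1832, "occupation": "weaver"},
    {"name": "marie tremblay", "birth_year": 1845, "occupation": "farmer"},
]

probate_records = [
    {"name": "john smith", "birth_year": 1833, "estate_value": 850},
    {"name": "marie tremblay", "birth_year": 1845, "estate_value": 1200},
]

def link_records(source_a, source_b, year_tolerance=1):
    """Link records whose names match exactly and whose birth years differ
    by no more than year_tolerance (a crude stand-in for real-world fuzzy rules)."""
    links = []
    for a in source_a:
        for b in source_b:
            if a["name"] == b["name"] and abs(a["birth_year"] - b["birth_year"]) <= year_tolerance:
                links.append({**a, **b})   # merged record combines fields from both sources
    return links

for linked in link_records(census_1871, probate_records):
    print(linked)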

Background to Historical Micro-data Studies in Canadian Economic History

The early stage of historical micro-data projects in Canada attempted to systematically collect and analyze data on a large scale. Many of these micro-data projects crossed the lines between social and economic history, as well as demographic history in the case of French Canada. Path-breaking work by American scholars such as Lee Soltow (1971), Stephan Thernstrom (1973) and Alice Hanson Jones (1980) was an important influence on Canadian work. Their work on wealth and social structure and mobility using census and probate data drew attention to the extent of mobility — geographic, economic and social — that existed in pre-twentieth-century America.

However, Canadian historical micro-data work has been quite distinct from that of the United States, reflecting its separate tradition in economic history. Canada’s history is one of centralized penetration from the east via the Great Lakes-St. Lawrence waterway and the presence of two founding “nations” of European settlers – English and French – which led to strong Protestant and Roman Catholic traditions. Indeed, there was nearly 100 percent membership in the Roman Catholic Church among francophone Quebeckers for much of Canada’s history. As well, an economic reliance on natural resources and a sparse population spread along an east-west corridor of isolated regions have made Canada’s economic history, politics and institutions quite different from those of the United States.

The United States, from its early natural resource staples origins, developed a large, integrated internal market that was relatively independent of external economic forces, at least compared with Canada, and this shifted research topics away from trade and towards domestic resource allocation issues. At the level of historical micro-data, American scholars have had access to national micro-data samples for some time, which has not been the case in Canada until recently. Most of the early Canadian micro-data studies were regional or urban samples drawn from manuscript sources, and there has been little work since at the national level using micro-data sources. However, the strong role of the state in Canada has meant a particular richness in those sources that can be accessed, and even the Census contains some personal details not available in the U.S. Census, such as religious affiliation. Moreover, earnings data are available in the Canadian census starting some forty years earlier than in the United States.

Canadian micro-data studies have examined industry, fertility, urban and rural life, wages and labor markets, women’s work and roles in the economy, immigration and wealth. The data sources include census, probate records, assessment rolls, legal records and contracts, and are used by historians, economists, geographers, sociologists and demographers to study economic history.5 Very often, the primary sources are untapped and there can be substantial gaps in their coverage due to uneven preservation.

A Survey of Micro-data Studies

Early Years in English Canada

The fruits of early work in English Canada were books and papers by Frank Denton and Peter George (1970, 1973), Michael Katz (1975) and David Gagan (1981), among others.6 The Denton and George paper examined the influences on family size and school attendance in Wentworth County, Ontario, using the 1871 Census of Canada manuscripts. But it was Katz and Gagan’s work that generated greater attention among historians. Katz’s Hamilton Project used census, assessment rolls, city directories and other assorted micro-records to describe patterns of life in mid-nineteenth century Hamilton. Gagan’s Peel County Project was a comprehensive social and economic study of Peel County, Ontario, again using a variety of individual records including probate. These studies stimulated discussion and controversy about nineteenth-century wealth, inheritance patterns, and family size and structure.

The Demographic Tradition in French Canada

In French Canada, the pioneering work was the Saguenay Project organized by Gerard Bouchard (1977, 1983, 1992, 1993, 1996, 1998). Since the 1970s, a large effort has been expended to create a computerized genealogical and demographic data base for the Saguenay and Charlevoix regions of Quebec going back well into the nineteenth century. This data set, now known as the Balsac Register, contains data on 600,000 individuals (140,000 couples) and 2.4 million events (e.g., births, deaths, etc.), with enormous social scientific and human genetic possibilities. The material gathered has been used to examine fertility, marriage patterns, inheritance, agricultural production and literacy, as well as genetic predisposition towards disease, and formed the basis for a book spanning the history of population and families in the Saguenay over the period 1838 to 1971.

French Canada has a strong tradition of historical micro-data research rooted in demographic analysis.7 Another project underway since 1969 and associated with Bertrand Desjardins, Hubert Charbonneau, Jacques Légaré and Yves Landry is Le Programme de recherche en démographie historique (P.R.D.H) at the University of Montréal (Charbonneau, 1988; Landry, 1993; Desjardins, 1993). The database will eventually contain details on a million persons and their life events in Quebec between 1608 and 1850.

Industrial Studies

Only for the 1871 census have all of the schedules survived, and the industrial schedules of that census have been made machine-readable (Bloomfield, 1986; Borsa and Inwood, 1993). Kris Inwood and Phyllis Wagg (1993) have used the manuscript industrial schedules to examine the survival of handloom weaving in rural Canada circa 1870. A total of 2,830 records were examined, and data on average product, capital and months of activity were utilized. The results show that the demand for woolen homespun was income sensitive and that patterns of weaving by men and women differed, with male-headed firms working a greater number of months during the year and more likely to have a second worker.

More recently, using a combination of aggregate capital market data and firm-level data for a sample of Canadian and American steel producers, Ian Keay and Angela Redish (2004) analyze the relationships between capital costs, financial structure, and domestic capital market characteristics. They find that national capital market characteristics and firm-specific characteristics were important determinants of twentieth-century U.S. and Canadian steel firms’ financing decisions. Keay (2000) uses information from firms’ balance sheets and income accounts, and industry-specific prices, to calculate labor, capital, intermediate input and total factor productivities for a sample of 39 Canadian and 39 American manufacturing firms in nine industries. The firm-level data also allow for the construction of series, including capital and value added, that are consistent across nations, industries and time. Inwood and Keay (2005) use establishment-level data describing manufacturers located in 128 border and near-border counties in Michigan, New York, Ohio, Pennsylvania, and Ontario to calculate Canadian relative to U.S. total factor productivity ratios for 25 industries. Their results illustrate that the average U.S. establishment was approximately 7% more efficient than its Canadian counterpart in 1870/71.
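
Productivity comparisons of this kind rest on a simple calculation: total factor productivity is output divided by a weighted aggregate of inputs. The short Python sketch below illustrates the arithmetic only; the firm names, cost shares and deflated input values are invented for the example and are not drawn from Keay’s data.

# Illustrative TFP calculation for hypothetical firm-year records.
# Cost shares and values are assumptions made for this example only.
def total_factor_productivity(output, labor, capital, materials,
                              share_l=0.5, share_k=0.3, share_m=0.2):
    """Gross-output TFP: output divided by a Cobb-Douglas input aggregate."""
    input_index = (labor ** share_l) * (capital ** share_k) * (materials ** share_m)
    return output / input_index

# Hypothetical deflated values for one Canadian and one American firm.
firms = {
    "canadian_steel_firm": dict(output=1200.0, labor=300.0, capital=900.0, materials=400.0),
    "american_steel_firm": dict(output=1500.0, labor=340.0, capital=1000.0, materials=450.0),
}
tfp = {name: total_factor_productivity(**values) for name, values in firms.items()}
print(tfp, tfp["canadian_steel_firm"] / tfp["american_steel_firm"])  # Canada/U.S. TFP ratio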

Population, Demographics & Fertility

Marvin McInnis (1977) assembled a body of census data on childbearing and other aspects of Upper Canadian households in 1861 and produced a sample of 1,200 farm households that was used to examine the relationship between child-bearing and land availability. He found that an abundance of nearby uncultivated land did affect the probability of there being young children in the household, but the magnitude of the influence was small. Moreover, the strongest result was that fertility fell as larger cities developed close enough for urban life and culture to exert a real influence.

Eric Moore and Brian Osborne (1987) have examined the socio-economic differentials of marital fertility in Kingston. They related religion, birthplace, age of mother, ethnic origin and occupational status to changes in fertility between 1861 and 1881, using a data set of approximately 3,000 observations taken from the manuscript census. Their choice of variables allows for examination of the impact of economic factors as well as the importance of cultural attributes. William Marr (1992) took the first reasonably large sample of farm households (2,656) from the 1851-52 Census of Canada West and examined the determinants of fertility. He found that fertility differences between older and more newly settled regions were influenced by land availability at the farm level, but farm location, with respect to the extent of agricultural development, did not affect fertility when age, birthplace and religion were held constant. Michael Wayne (1998) uses the 1861 Census of Canada to look at the black population of Canada on the eve of the American Civil War. Meanwhile, George Emery (1993) provides an assessment of the comprehensiveness and accuracy of aggregate vital statistics in Ontario between 1869 and 1952 by examining the process of recording vital statistics. Emery and Kevin McQuillan (1988) use case studies to examine mortality in nineteenth-century Ingersoll, Ontario.

Urban and Rural Life

A number of studies have examined urban and rural life. Bettina Bradbury (1984) has analyzed the census manuscripts of two working class Montreal wards, Ste. Anne and St. Jacques, for the years 1861, 1871 and 1881. Random samples of one-tenth of the households in these parts of Montreal were taken, yielding a sample of nearly 11,000 individuals over three decades. The data were used to examine women and wage labor in Montreal. The evidence is that men were the primary wage earners, but the wife’s contribution to the family economy lay not so much in her own wage labor, which was infrequent, as in organizing the economic life of the household and finding alternative sources of support.

Bettina Bradbury, Peter Gossage, Evelyn Kolish and Alan Stewart (1993) and Gossage (1991) have examined marriage contracts in Montreal over the period 1820-1840 and found that, over time, the use of marriage contracts changed, becoming a tool of a propertied minority. As well, a growing proportion of contract signers chose to keep the property of spouses separate rather than “in community.” The movement towards separation was most likely to be found among the wealthy where separate property offered advantages, especially to those engaged in commerce during harsh economic times. Gillian Hamilton (1999) looks at prenuptial contracting behavior in early nineteenth-century Quebec to explore property rights within families and finds that couples signing contracts tended to choose joint ownership of property when wives were particularly important to the household.

Chad Gaffield (1979, 1983, 1987) has examined social, family and economic life in the Eastern Ontario counties of Prescott-Russell, Alfred and Caledonia using aggregate census as well as manuscript data for the period 1851-1881.8 He has applied the material to studying rural schooling and the economic structure of farm families and found systematic differences between the marriage patterns of Anglophones and Francophones, with Francophones tending to marry at a younger average age. Also, land shortages and the diminishing forest frontier created economic difficulties that led to reduced family sizes by 1881. Gaffield’s most significant current research project is his leadership of the Canadian Century Research Infrastructure (CCRI) initiative, one of the country’s largest research undertakings. The CCRI is creating cross-indexed databases from a century’s worth of national census information, enabling unprecedented understanding of the making of modern Canada. This effort will eventually lead to an integrated set of micro-data resources at a national level comparable to what currently exists for the United States.9

Business Records

Company and business records have also been used as a source of micro-data and insight into economic history. Gillian Hamilton has conducted a number of studies examining contracts, property rights and labor markets in pre-twentieth century Canada. Hamilton (1996, 2000) examines the nature of apprenticing arrangements in Montreal around the turn of the nineteenth century, using apprenticeship contracts from a larger body of notarial records found in Quebec. The principal questions addressed are what determined apprenticeship length and when the decline of the institution began. Hamilton finds that the characteristics of both masters and their boys were important and that masters often relied on probationary periods to better gauge a boy’s worth before signing a contract. Probations, all else equal, were associated with shorter contracts.

Ann Carlos and Frank Lewis (1998, 1999, 2001, 2002) access Hudson’s Bay Company fur trading records to study property rights, competition, and depletion in the eighteenth-century Canadian fur trade, and their work represents an important foray into Canadian aboriginal economic history by studying the role of aboriginals as consumers. Doug McCalla (2001, 2005) uses store records from Upper Canada to examine consumer purchases in the early nineteenth century and gain insight into material culture. Barton Hamilton and Mary MacKinnon (1996) use Canadian Pacific Railway records to study changes between 1903 and 1938 in the composition of job separations and in the probability of separation. The proportion of voluntary departures fell by more than half after World War I. They estimate independent competing-risk, piecewise-constant hazard functions for the probabilities of quits and layoffs. Changes in workforce composition lengthened the average worker’s spell, but a worker with any given set of characteristics was much more likely to be laid off after 1921, although many of these layoffs were only temporary.
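
In its simplest form, the piecewise-constant hazard approach mentioned above divides the number of exits of a given type in each tenure interval by the person-years at risk in that interval. The following Python fragment is a minimal sketch using invented employment spells and interval breakpoints; it is not the Hamilton and MacKinnon estimation itself.

# Stylized cause-specific, piecewise-constant hazards for quits and layoffs.
# Each spell is (tenure_in_years, exit_type); None means still employed (censored).
spells = [(1.5, "quit"), (0.8, "layoff"), (4.0, None), (2.2, "quit"),
          (6.5, "layoff"), (3.1, None), (0.4, "quit"), (9.0, "layoff")]
breakpoints = [0, 1, 3, 5, float("inf")]  # assumed tenure intervals, in years

def piecewise_hazards(spells, breakpoints, cause):
    hazards = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        exposure = 0.0  # person-years at risk within this interval
        events = 0      # exits of the given cause within this interval
        for tenure, exit_type in spells:
            if tenure <= lo:
                continue
            exposure += min(tenure, hi) - lo
            if exit_type == cause and lo < tenure <= hi:
                events += 1
        hazards.append(events / exposure if exposure > 0 else float("nan"))
    return hazards

print("quit hazards:", piecewise_hazards(spells, breakpoints, "quit"))
print("layoff hazards:", piecewise_hazards(spells, breakpoints, "layoff"))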

MacKinnon (1997) taps into the CPR data again with a constructed sample of 9,000 employees hired before 1945, including 700 pensioners, and finds that features of the CPR pension plan are consistent with economic explanations of the role of pensions. Long, continuous periods of service were likely to be rewarded, and employees in the most responsible positions generally had higher pensions.

MacKinnon (1996) complements published Canadian nominal wage data by constructing a new hourly wage series, developed from firm records, for machinists, helpers, and laborers employed by the Canadian Pacific Railway between 1900 and 1930. This new evidence suggests that real wage growth in Canada was faster than previously believed, and that there were substantial changes in wage inequality. In another contribution, MacKinnon (1990) studies unemployment relief in Canada by examining relief policies and recipients and contrasting the Canadian situation with unemployment insurance in Britain. She finds demographic factors important in explaining who went on relief, with older workers and those with large families most likely to be on relief for sustained periods. Another unique contribution to historical labor studies is Michael Huberman and Denise Young (1999), who examine individual-level data on 1,554 strikes in Canada from 1901 to 1914 and conclude that international unions did not weaken Canada’s union movement and that they became part of Canada’s industrial relations framework.

The 1891 and 1901 Censuses

An ongoing project is the 1891 Census of Canada Project at the University of Guelph, directed by Kris Inwood, which is making a digitized sample of individual records from this census available to the research public. The project is hosted by the University of Guelph, with support from the Canadian Foundation for Innovation, the Ontario Innovation Trust and private sector partners. Phase I (Ontario) began during the winter of 2003 in association with the College of Arts Canada Research Chair in Rural History and continues until 2007. Phase II began in 2005; it extends data collection to the rest of the country and also creates an integrated national sample. The database includes information returned on a randomly selected 5% of the enumerators’ manuscript pages, each page containing information describing twenty-five people. An additional 5% of census pages for western Canada and several large cities augments the basic sample. Ultimately the database will contain records for more than 350,000 people, bearing in mind that the population of Canada in 1891 was 4.8 million.

The release of the 1901 Census of Canada manuscript census has also spawned numerous micro-data studies. Peter Baskerville and Eric Sager (1995, 1998) have used the 1901 Census to examine unemployment and the work force in late Victorian Canada.10 Baskerville (2001a, 2001b) uses the 1901 census to examine the practice of boarding in Victorian Canada and, in another study, wealth and religion. Kenneth Sylvester (2001) uses the 1901 census to examine ethnicity and landholding. Alan Green and Mary MacKinnon (2001) use a new sample of individual-level data compiled from the manuscript returns of the 1901 Census of Canada to examine the assimilation of male wage-earning immigrants (mainly from the UK) in Montreal and Toronto. Unlike studies of post-World War II immigrants to Canada, and some recent studies of nineteenth-century immigration to the United States, they find slow assimilation to the earnings levels of native-born English mother-tongue Canadians. Green, MacKinnon and Chris Minns (2005) use 1901 census data to demonstrate that Anglophones and Francophones had very different personal characteristics, so that movement to the west was rarely economically attractive for Francophones. However, large-scale migration into New England fitted French Canadians’ demographic and human capital profile.

Wealth and Inequality

Recent years have also seen the emergence of a body of literature by several contributors on wealth accumulation and distribution in nineteenth-century Canada. This work has provided quantitative measurements of the degree of inequality in wealth holding, as well as its evolution over time. Gilles Paquet and Jean-Pierre Wallot (1976, 1986) have examined the net personal wealth of wealth holders using “les inventaires après décès” (inventories taken after death) in Quebec during the late eighteenth and early nineteenth centuries. They have suggested that the habitant was indeed a rational economic agent who chose land as a form of wealth not because of inherent conservatism but because information and transactions costs hindered the accumulation of financial assets.

A. Gordon Darroch (1983a, 1983b) has utilized municipal assessment rolls to study wealth inequality in Toronto during the late nineteenth century. Darroch found that inequality among assessed families was such that the top one-fifth of assessed families held at least 65% of all assessed wealth and the poorest 40% never more than 8%, even though inequality did decline between 1871 and 1899. Darroch and Michael Ornstein (1980, 1984) used the 1871 Census to examine ethnicity, occupational structure and family life cycles in Canada. Darroch and Soltow (1992, 1994) research property holding in Ontario using 5,669 individuals from the 1871 census manuscripts and find “deep and abiding structures of inequality” accompanied by opportunities for mobility.
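
Share-based inequality measures of this sort are computed directly from individual wealth values: sort the observations, then express the holdings of the richest and poorest groups as fractions of the total. The Python sketch below uses invented assessment values purely to show the calculation behind a statement such as “the top one-fifth held 65% of assessed wealth.”

# Wealth shares from a hypothetical list of individual assessed-wealth values.
def wealth_shares(wealth, top_fraction=0.2, bottom_fraction=0.4):
    values = sorted(wealth)                      # ascending order
    total = sum(values)
    n_top = max(1, round(len(values) * top_fraction))
    n_bottom = round(len(values) * bottom_fraction)
    return sum(values[-n_top:]) / total, sum(values[:n_bottom]) / total

assessed = [50, 75, 100, 120, 150, 200, 250, 400, 900, 5000]  # invented dollar values
top_share, bottom_share = wealth_shares(assessed)
print(f"top 20% share = {top_share:.2f}, bottom 40% share = {bottom_share:.2f}")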

Lars Osberg and Fazley Siddiq (1988, 1993) and Siddiq (1988) have examined wealth inequality in Nova Scotia using probated estates from 1871 and 1899. They found a slight shift towards greater inequality in wealth over time and concluded that the prosperity of the 1850-1875 period in Nova Scotia benefited primarily the Halifax-based merchant class. Higher levels of wealth were associated with being a merchant and with living in Halifax, as opposed to the rest of the province. Siddiq and Julian Gwyn (1992) used probate inventories from 1851 and 1871 to study wealth over the period. They again document a greater trend towards inequality, accompanied by rising wealth. In addition, Peter Ward has collected a set of 196 Nova Scotia probate records for Lunenburg County spanning 1808-1922, as well as a set of poll tax records for the same location between 1791 and 1795.11

Livio Di Matteo and Peter George (1992, 1998) have examined wealth distribution in late nineteenth century Ontario using probate records and assessment roll data for Wentworth County for the years 1872, 1882, 1892 and 1902. They find a rise in average wealth levels up until 1892 and a decline from 1892 to 1902. Whereas the rise in wealth from 1872 to 1892 appears to have been accompanied by a trend towards greater equality in wealth distribution, the period 1892 to 1902 marked a return to greater inequality. Di Matteo (1996, 1997, 1998, 2001) uses a set of 3,515 probated decedents for all of Ontario in 1892 to examine the determinants of wealth holding, the wealth of the Irish, inequality and life cycle accumulation. Di Matteo and Herb Emery (2002) use the 1892 Ontario data to examine life insurance holding and the extent of self-insurance as wealth rises. Di Matteo (2004, 2006) uses a newly constructed micro-data set for the Thunder Bay District from 1885-1920 consisting of 1,293 probated decedents to examine wealth and inequality during Canada’s wheat boom era. Di Matteo is currently using Ontario probated decedents from 1902 linked to the 1901 census, combined with previous data from 1892, to examine the impact of religious affiliation on wealth holding.
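
Linking probated decedents to census returns, as in the 1902/1901 exercise just described, is typically done by matching on name, place and approximate age. The Python fragment below is a deliberately simplified, hypothetical version of such deterministic linkage; the records, field names and matching rules are illustrative assumptions, and actual projects rely on more elaborate name standardization and probabilistic matching.

# Simplified deterministic linkage between probate and census records (invented data).
probate = [
    {"surname": "MCDONALD", "given": "JOHN", "district": "WENTWORTH", "birth_year": 1838},
    {"surname": "TREMBLAY", "given": "MARIE", "district": "THUNDER BAY", "birth_year": 1851},
]
census = [
    {"surname": "MACDONALD", "given": "JOHN", "district": "WENTWORTH", "birth_year": 1839},
    {"surname": "TREMBLAY", "given": "MARIE", "district": "THUNDER BAY", "birth_year": 1851},
]

def standardize(name):
    """Crude name standardization: upper-case and collapse the Mac/Mc prefix."""
    name = name.upper().replace(" ", "")
    return "MC" + name[3:] if name.startswith("MAC") else name

def is_match(p, c, max_age_gap=2):
    """Accept a link when standardized names and district agree and birth years are close."""
    return (standardize(p["surname"]) == standardize(c["surname"])
            and p["given"].upper() == c["given"].upper()
            and p["district"] == c["district"]
            and abs(p["birth_year"] - c["birth_year"]) <= max_age_gap)

links = [(p, c) for p in probate for c in census if is_match(p, c)]
print(f"{len(links)} of {len(probate)} probate records linked")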

Wealth and property holding among women has also been a specific topic of research.12 Peter Baskerville (1999) uses probate data to examine wealth holding by women in the cities of Victoria and Hamilton between 1880 and 1901 and finds that they were substantial property owners. The holding of wealth by women in the wake of property legislation is studied by Inwood and Sue Ingram (2000) and Inwood and Sarah Van Sligtenhorst (2004); their work chronicles the increase in female property holding following Canadian property law changes in the late nineteenth century. Inwood and Richard Reid (2001) also use the Canadian Census to examine the relationship between gender and occupational identity.

Conclusion

The flurry of recent activity in Canadian quantitative economic history using census and probate data bodes well for the future. Even the National Archives of Canada has now made digital images of census forms and other primary records available online.13 Moreover, projects such as the CCRI and the 1891 Census Project hold the promise of new, integrated data sources for future research on national as opposed to regional micro-data questions. We will be able to see the extent of regional economic development, earnings and convergence both at a regional level and from a national perspective. Access to the 1911 and, in the future, the 1921 Census of Canada will also provide fertile areas for research and discovery. The period between 1900 and 1921, spanning the wheat boom and the First World War, is particularly important as it coincides with Canadian industrialization, rapid economic growth and the further expansion of wealth and income at the individual level. Moreover, access to new samples of micro-data may also help shed light on aboriginal economic history during the nineteenth and early twentieth centuries, as well as the economic progress of women.14 In particular, the economic history of Canada’s aboriginal peoples after the decline of the fur trade and during Canada’s industrialization is an area where micro-data might be useful in illustrating economic trends and conditions.15

References:

Baskerville, Peter A. “Familiar Strangers: Urban Families with Boarders in Canada, 1901.” Social Science History 25, no. 3 (2001): 321-46.

Baskerville, Peter. “Did Religion Matter? Religion and Wealth in Urban Canada at the Turn of the Twentieth Century: An Exploratory Study.” Histoire sociale-Social History XXXIV, no. 67 (2001): 61-96.

Baskerville, Peter A. and Eric W. Sager. “Finding the Work Force in the 1901 Census of Canada.” Histoire sociale-Social History XXVIII, no. 56 (1995): 521-40.

Baskerville, Peter A., and Eric W. Sager. Unwilling Idlers: The Urban Unemployed and Their Families in Late Victorian Canada. Toronto: University of Toronto Press, 1998.

Baskerville, Peter A. “Women and Investment in Late-Nineteenth Century Urban Canada: Victoria and Hamilton, 1880-1901.” Canadian Historical Review 80, no. 2 (1999): 191-218.

Borsa, Joan, and Kris Inwood. Codebook and Interpretation Manual for the 1870-71 Canadian Industrial Database. Guelph, 1993.

Bouchard, Gerard. “Introduction à l’étude de la société saguenayenne aux XIXe et XXe siècles.” Revue d’histoire de l’Amérique française 31, no. 1 (1977): 3-27.

Bouchard, Gerard. “Les systèmes de transmission des avoirs familiaux et le cycle de la société rurale au Québec, du XVIIe au XXe siècle.” Histoire sociale-Social History XVI, no. 31 (1983): 35-60.

Bouchard, Gerard. “Les fichiers-réseaux de population: Un retour à l’individualité.” Histoire sociale-Social History XXI, no. 42 (1988): 287-94.

Bouchard, Gerard and Regis Thibeault. “Change and Continuity in Saguenay Agriculture: The Evolution of Production and Yields (1852-1971).” In Canadian Papers in Rural History, Vol. VIII, edited by Donald H. Akenson, 231-59. Gananoque, ON: Langdale Press, 1992.

Bouchard, Gerard. “Computerized Family Reconstitution and the Measure of Literacy. Presentation of a New Index.” History and Computing 5, no 1 (1993): 12-24.

Bouchard, Gerard. Quelques arpents d’Amérique: Population, économie, famille au Saguenay, 1838-1971. Montreal: Boréal, 1996.

Bouchard, Gerard. “Economic Inequalities in Saguenay Society, 1879-1949: A Descriptive Analysis.” Canadian Historical Review 79, no. 4 (1998): 660-90.

Bourbeau, Robert, and Jacques Légaré. Évolution de la mortalité au Canada et au Québec 1831-1931. Montreal: Les Presses de l’Université de Montréal, 1982.

Bradbury, Bettina. “Women and Wage Labour in a Period of Transition: Montreal, 1861-1881.” Histoire sociale-Social History XVII (1984): 115-31.

Bradbury, Bettina, Peter Gossage, Evelyn Kolish, and Alan Stewart. “Property and Marriage: The Law and the Practice in Early Nineteenth-Century Montreal.” Histoire sociale-Social History XXVI, no. 51 (1993): 9-40.

Carlos, Ann, and Frank Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company, 1700-1763.” In The Other Side of the Frontier: Economic Explorations into Native American History, edited by Linda Barrington. Boulder, CO: Westview Press, 1998.

Carlos, Ann, and Frank Lewis. “Property Rights, Competition, and Depletion in the Eighteenth-century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann, and Frank Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 3 (2002): 285-317.

Carlos, Ann and Frank Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 1037-64.

Charbonneau, Hubert. “Le registre de population du Québec ancien: bilan de vingt années de recherches.” Histoire sociale-Social History XXI, no. 42 (1988): 295-99.

Darroch, A. Gordon. “Occupational Structure, Assessed Wealth and Homeowning during Toronto’s Early Industrialization, 1861-1899.” Histoire sociale-Social History XVI (1983): 381-419.

Darroch, A. Gordon. “Early Industrialization and Inequality in Toronto, 1861-1899.” Labour/Le Travailleur 11 (1983): 31-61.

Darroch, A. Gordon. “A Study of Census Manuscript Data for Central Ontario, 1861-1871: Reflections on a Project and On Historical Archives.” Histoire sociale-Social History XXI, no. 42 (1988): 304-11.

Darroch, A. Gordon, and Michael Ornstein. “Ethnicity and Occupational Structure in Canada in 1871: The Vertical Mosaic in Historical Perspective.” Canadian Historical Review 61 (1980): 305-33.

Darroch, A. Gordon, and Michael Ornstein. “Family Coresidence in Canada in 1871: Family Life Cycles, Occupations and Networks of Mutual Aid.” Canadian Historical Association Historical Papers (1984): 30-55.

Darroch, A. Gordon, and Lee Soltow. “Inequality in Landed Wealth in Nineteenth-Century Ontario: Structure and Access.” Canadian Review of Sociology and Anthropology 29 (1992): 167-200.

Darroch, A. Gordon, and Lee Soltow. Property and Inequality in Victorian Ontario: Structural Patterns and Cultural Communities in the 1871 Census. Toronto: University of Toronto Press, 1994.

Denton, Frank T., and Peter George. “An Explanatory Statistical Analysis of Some Socio-economic Characteristics of Families in Hamilton, Ontario, 1871.” Histoire sociale-Social History 5 (1970): 16-44.

Denton, Frank T., and Peter George. “The Influence of Socio-Economic Variables on Family Size in Wentworth County, Ontario, 1871: A Statistical Analysis of Historical Micro-data.” Review of Canadian Sociology and Anthropology 10 (1973): 334-45.

Di Matteo, Livio. “Wealth and Inequality on Ontario’s Northwestern Frontier: Evidence from Probate.” Histoire sociale-Social History XXXVIII, no. 75 (2006): 79-104.

Di Matteo, Livio. “Boom and Bust, 1885-1920: Regional Wealth Evidence from Probate Records.” Australian Economic History Review 44, no. 1 (2004): 52-78.

Di Matteo, Livio. “Patterns of Inequality in Late Nineteenth-Century Ontario: Evidence from Census-Linked Probate Data.” Social Science History 25, no. 3 (2001): 347-80.

Di Matteo, Livio. “Wealth Accumulation and the Life Cycle in Economic History: Implications of Alternative Approaches to Micro-Data.” Explorations in Economic History 35 (1998): 296-324.

Di Matteo, Livio. “The Determinants of Wealth and Asset Holding in Nineteenth Century Canada: Evidence from Micro-data.” Journal of Economic History 57, no. 4 (1997): 907-34.

Di Matteo, Livio. “The Wealth of the Irish in Nineteenth-Century Ontario.” Social Science History 20, no. 2 (1996): 209-34.

Di Matteo, Livio, and J.C. Herbert Emery. “Wealth and the Demand for Life Insurance: Evidence from Ontario, 1892.” Explorations in Economic History 39, no. 4 (2002): 446-69.

Di Matteo, Livio, and Peter George. “Patterns and Determinants of Wealth among Probated Decedents in Wentworth County, Ontario, 1872-1902.” Histoire sociale-Social History XXXI, no. 61 (1998): 1-34.

Di Matteo, Livio, and Peter George. “Canadian Wealth Inequality in the Late Nineteenth Century: A Study of Wentworth County, Ontario, 1872-1902.” Canadian Historical Review LXXIII, no. 4 (1992): 453-83.

Emery, George N. Facts of Life: The Social Construction of Vital Statistics, Ontario, 1869-1952. Montreal: McGill-Queen’s University Press, 1993.

Emery, George, and Kevin McQuillan. “A Case Study Approach to Ontario Mortality History: The Example of Ingersoll, 1881-1971.” Canadian Studies in Population 15, (1988): 135-58.

Ens, Gerhard. Homeland to Hinterland: The Changing Worlds of the Red River Metis in the Nineteenth Century. Toronto: University of Toronto Press, 1996.

Gaffield, Chad. “Canadian Families in Cultural Context: Hypotheses from the Mid-Nineteenth Century.” Historical Papers, Canadian Historical Association (1979): 48-70.

Gaffield, Chad. “Schooling, the Economy and Rural Society in Nineteenth-Century Ontario.” In Childhood and Family in Canadian History, edited by Joy Parr, 69-92. Toronto: McClelland and Stewart, 1983.

Gaffield, Chad. Language, Schooling and Cultural Conflict: The Origins of the French-Language Controversy in Ontario. Kingston and Montreal: McGill-Queen’s, 1987.

Gaffield, Chad. “Machines and Minds: Historians and the Emerging Collaboration.” Histoire sociale-Social History XXI, no. 42 (1988): 312-17.

Gagan, David. Hopeful Travellers: Families, Land and Social Change in Mid-Victorian Peel County, Canada West. Toronto: University of Toronto Press, 1981.

Gagan, David. “Some Comments on the Canadian Experience with Historical Databases.” Histoire sociale-Social History XXI, no. 42 (1988): 300-03.

Gossage, Peter. “Family Formation and Age at Marriage at Saint-Hyacinthe, Quebec, 1854-1891.” Histoire sociale-Social History XXIV, no. 47 (1991): 61-84.

Green, Alan, Mary MacKinnon and Chris Minns. “Conspicuous by Their Absence: French Canadians and the Settlement of the Canadian West.” Journal of Economic History 65, no. 3 (2005): 822-49.

Green, Alan, and Mary MacKinnon. “The Slow Assimilation of British Immigrants in Canada: Evidence from Montreal and Toronto, 1901.” Explorations in Economic History 38, no. 3 (2001): 315-38.

Green, Alan G., and Malcolm C. Urquhart. “New Estimates of Output Growth in Canada: Measurement and Interpretation.” In Perspectives on Canadian Economic History, edited by Douglas McCalla, 182-199. Toronto: Copp Clark Pitman Ltd., 1987.

Gwyn, Julian, and Fazley K. Siddiq. “Wealth Distribution in Nova Scotia during the Confederation Era, 1851 and 1871.” Canadian Historical Review LXXIII, no. 4 (1992): 435-52.

Hamilton, Barton, and Mary MacKinnon. “Quits and Layoffs in Early Twentieth Century Labour Markets.” Explorations in Economic History 21 (1996): 346-66.

Hamilton, Gillian. “The Decline of Apprenticeship in North America: Evidence from Montreal.” Journal of Economic History 60, no. 3 (2000): 627-64.

Hamilton, Gillian. “Property Rights and Transaction Costs in Marriage: Evidence from Prenuptial Contracts.” Journal of Economic History 59, no. 1 (1999): 68-103.

Hamilton, Gillian. “The Market for Montreal Apprentices: Contract Length and Information.” Explorations in Economic History 33, no. 4 (1996): 496-523.

Hamilton, Michelle, and Kris Inwood. “The Identification of the Aboriginal Population in the 1891 Census of Canada.” Manuscript, University of Guelph, 2006.

Henripin, Jacques. Tendances et facteurs de la fécondité au Canada. Ottawa: Bureau fédéral de la Statistique, 1968.

Huberman, Michael, and Denise Young. “Cross-Border Unions: Internationals in Canada, 1901-1914.” Explorations in Economic History 36 (1999): 204-31.

Igartua, Jose E. “Les bases de données historiques: L’expérience canadienne depuis quinze ans – Introduction.” Histoire sociale-Social History XXI, no. 42 (1988): 283-86.

Inwood, Kris, and Phyllis Wagg. “The Survival of Handloom Weaving in Rural Canada circa 1870.” Journal of Economic History 53 (1993): 346-58.

Inwood, Kris, and Sue Ingram. “The Impact of Married Women’s Property Legislation in Victorian Ontario.” Dalhousie Law Journal 23, no. 2 (2000): 405-49.

Inwood, Kris, and Sarah Van Sligtenhorst. “The Social Consequences of Legal Reform: Women and Property in a Canadian Community.” Continuity and Change 19, no. 1 (2004): 165-97.

Inwood, Kris, and Richard Reid. “Gender and Occupational Identity in a Canadian Census.” Historical Methods 32, no. 2 (2001): 57-70.

Inwood, Kris, and Kevin James. “The 1891 Census of Canada.” Cahiers québécois de démographie, forthcoming.

Inwood, Kris, and Ian Keay. “Bigger Establishments in Thicker Markets: Can We Explain Early Productivity Differentials between Canada and the United States?” Canadian Journal of Economics 38, no. 4 (2005): 1327-63.

Jones, Alice Hanson. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Katz, Michael B. The People of Hamilton, Canada West: Family and Class in a Mid-nineteenth-century City. Cambridge: Harvard University Press, 1975.

Keay, Ian. “Canadian Manufacturers’ Relative Productivity Performance: 1907-1990.” Canadian Journal of Economics 44, no. 4 (2000): 1049-68.

Keay, Ian, and Angela Redish. “The Micro-economic Effects of Financial Market Structure: Evidence from Twentieth-century North American Steel Firms.” Explorations in Economic History 41, no. 4 (2004): 377-403.

Landry, Yves. “Fertility in France and New France: The Distinguishing Characteristics of Canadian Behaviour in the Seventeenth and Eighteenth Centuries.” Social Science History 17, no. 4 (1993): 577-92.

MacKinnon, Mary. “Relief Not Insurance: Canadian Unemployment Relief in the 1930s.” Explorations in Economic History 27, no. 1 (1990): 46-83.

MacKinnon, Mary. “New Evidence on Canadian Wage Rates, 1900-1930.” Canadian Journal of Economics XXIX, no. 1 (1996): 114-31.

MacKinnon, Mary. “Providing for Faithful Servants: Pensions at the Canadian Pacific Railway.” Social Science History 21, no. 1 (1997): 59-83.

Marr, William. “Micro and Macro Land Availability as a Determinant of Human Fertility in Rural Canada West, 1851.” Social Science History 16 (1992): 583-90.

McCalla, Doug. “Upper Canadians and Their Guns: An Exploration via Country Store Accounts (1808-61).” Ontario History 97 (2005): 121-37.

McCalla, Doug. “A World without Chocolate: Grocery Purchases at Some Upper Canadian Country Stores, 1808-61.” Agricultural History 79 (2005): 147-72.

McCalla, Doug. “Textile Purchases by Some Ordinary Upper Canadians, 1808-1862.” Material History Review 53, (2001): 4-27.

McInnis, Marvin. “Childbearing and Land Availability: Some Evidence from Individual Household Data.” In Population Patterns in the Past, edited by Ronald Demos Lee, 201-27. New York: Academic Press, 1977.

Moore, Eric G., and Brian S. Osborne. “Marital Fertility in Kingston, 1861-1881: A Study of Socio-economic Differentials.” Histoire sociale-Social History XX (1987): 9-27.

Muise, Del. “The Industrial Context of Inequality: Female Participation in Nova Scotia’s Paid Workforce, 1871-1921.” Acadiensis XX, no. 2 (1991).

Myers, Sharon. “‘Not to Be Ranked as Women’: Female Industrial Workers in Halifax at the Turn of the Twentieth Century.” In Separate Spheres: Women’s Worlds in the Nineteenth-Century Maritimes, edited by Janet Guildford and Suzanne Morton, 161-83. Fredericton: Acadiensis Press, 1994.

Osberg, Lars, and Fazley Siddiq. “The Acquisition of Wealth in Nova Scotia in the Late Nineteenth Century.” Research in Economic Inequality 4 (1993): 181-202.

Osberg, Lars, and Fazley Siddiq. “The Inequality of Wealth in Britain’s North American Colonies: The Importance of the Relatively Poor.” Review of Income and Wealth 34 (1988): 143-63.

Paquet, Gilles, and Jean-Pierre Wallot. “Les Inventaires après décès à Montréal au tournant du XIXe siècle: préliminaires à une analyse.” Revue d’histoire de l’Amérique française 30 (1976): 163-221.

Paquet, Gilles, and Jean-Pierre Wallot. “Stratégie Foncière de l’Habitant: Québec (1790-1835).” Revue d’histoire de l’Amérique française 39 (1986): 551-81.

Seager, Allen, and Adele Perry. “Mining the Connections: Class, Ethnicity and Gender in Nanaimo, British Columbia, 1891.” Histoire sociale-Social History 30, no. 59 (1997): 55-76.

Siddiq, Fazley K. “The Size Distribution of Probate Wealth Holdings in Nova Scotia in the Late Nineteenth Century.” Acadiensis 18 (1988): 136-47.

Soltow, Lee. Patterns of Wealthholding in Wisconsin since 1850. Madison: University of Wisconsin Press, 1971.

Sylvester, Kenneth Michael. “All Things Being Equal: Land Ownership and Ethnicity in Rural Canada, 1901.” Histoire sociale-Social History XXXIV, no. 67 (2001): 35-59.

Thernstrom, Stephan. The Other Bostonians: Poverty and Progress in the American Metropolis, 1880-1970. Cambridge: Harvard University Press, 1973.

Urquhart, Malcolm C. Gross National Product, Canada, 1870-1926: The Derivation of the Estimates. Montreal: McGill-Queen’s University Press, 1993.

Urquhart, Malcolm C. “New Estimates of Gross National Product Canada, 1870-1926: Some Implications for Canadian Development.” In Long Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 9-94. Chicago: University of Chicago Press, 1986.

Wayne, Michael. “The Black Population of Canada West on the Eve of the American Civil War: A Reassessment Based on the Manuscript Census of 1861.” In A Nation of Immigrants: Women, Workers and Communities in Canadian History, edited by Franca Iacovetta, Paula Draper and Robert Ventresca. Toronto: University of Toronto Press, 1998.

Footnotes

1 The helpful comments of Herb Emery, Mary MacKinnon and Kris Inwood on earlier drafts are acknowledged.

2 See especially Mac Urquhart’s spearheading of the major efforts in national income and output estimates (Urquhart, 1986, 1993).

3 “Individual response” means by individuals, households and firms.

4 See Gaffield (1988) and Igartua (1988).

5 The Conference on the Use of Census Manuscripts for Historical Research held at Guelph in March 1993 was an example of the interdisciplinary nature of historical micro-data research. The conference was sponsored by the Canadian Committee on History and Computing, the Social Sciences and Humanities Research Council of Canada and the University of Guelph. The conference was organized by economist Kris Inwood and historian Richard Reid and featured presentations by historians, economists, demographers, sociologists and anthropologists.

6 The Denton/George project had its origins in a proposal to the Second Conference on Quantitative Research in Canadian Economic History in 1967 that a sampling of the Canadian census be undertaken. Denton and George drew a sample from the manuscript census returns for individuals for 1871 that had recently been made available, and reported their preliminary findings to the Fourth Conference in March, 1970 in a paper that was published shortly afterwards in Histoire sociale/Social History (1970). Mac Urquhart’s role here must be acknowledged. He and Ken Buckley were insistent that a sampling of Census manuscripts would be an important venture for the conference members to initiate.

7 Also, sources such as the aggregate census have been used to examine fertility by Henripin (1968) and mortality by Bourbeau and Légaré (1982).

8 Chad Gaffield, Peter Baskerville and Alan Artibise were also involved in the creation of a machine-readable listing of archival sources on Vancouver Island known as the Vancouver Island Project (Gaffield, 1988, 313).

9 See Chad Gaffield, “Ethics, Technology and Confidential Research Data: The Case of the Canadian Century Research Infrastructure Project,” paper presented to the World History Conference, Sydney, July 3-9, 2005.

10 Baskerville and Sager have been involved in the Canadian Families Project. See “The Canadian Families Project”, a special issue of the journal Historical Methods 33, no. 4 (2000).

11 See Don Paterson’s Economic and Social History Data Base at the University of British Columbia at http://www2.arts.ubc.ca/econsochistory/data/data_list.html

12 Examples of other aspects of gender and economic status in a regional context are covered by Muise (1991), Myers (1994) and Seager and Perry (1997).

13 See http://www.collectionscanada.ca/genealogy/022-500-e.html

14 See for example the work by Gerhard Ens (1996) on the Red River Metis.

15 Hamilton and Inwood (2006) have begun research into identifying the aboriginal population in the 1891 Census of Canada.

Citation: Di Matteo, Livio. “The Use of Quantitative Micro-data in Canadian Economic History: A Brief Survey”. EH.Net Encyclopedia, edited by Robert Whaples. January 27, 2007. URL
http://eh.net/encyclopedia/the-use-of-quantitative-micro-data-in-canadian-economic-history-a-brief-survey/

The Economic Impact of the Black Death

David Routt, University of Richmond

The Black Death was the largest demographic disaster in European history. From its arrival in Italy in late 1347 through its clockwise movement across the continent to its petering out in the Russian hinterlands in 1353, the magna pestilencia (great pestilence) killed between seventeen and twenty-eight million people. Its gruesome symptoms and deadliness have fixed the Black Death in popular imagination; moreover, uncovering the disease’s cultural, social, and economic impact has engaged generations of scholars. Despite growing understanding of the Black Death’s effects, definitive assessment of its role as historical watershed remains a work in progress.

A Controversy: What Was the Black Death?

In spite of enduring fascination with the Black Death, even the identity of the disease behind the epidemic remains a point of controversy. Aware that fourteenth-century eyewitnesses described a disease more contagious and deadlier than bubonic plague (Yersinia pestis), the bacillus traditionally associated with the Black Death, dissident scholars in the 1970s and 1980s proposed typhus or anthrax or mixes of typhus, anthrax, or bubonic plague as the culprit. The new millennium brought other challenges to the Black Death-bubonic plague link, such as an unknown and probably unidentifiable bacillus, an Ebola-like haemorrhagic fever or, at the pseudoscientific fringes of academia, a disease of interstellar origin.

Proponents of Black Death as bubonic plague have minimized differences between modern bubonic plague and the fourteenth-century plague through painstaking analysis of the Black Death’s movement and behavior and by hypothesizing that the fourteenth-century plague was a hypervirulent strain of bubonic plague, yet bubonic plague nonetheless. DNA analysis of human remains from known Black Death cemeteries was intended to eliminate doubt, but the inability to replicate initially positive results has left uncertainty. New analytical tools used and new evidence marshaled in this lively controversy have enriched understanding of the Black Death while underscoring the elusiveness of certitude regarding phenomena many centuries past.

The Rate and Structure of Mortality

The Black Death’s socioeconomic impact stemmed, however, from sudden mortality on a staggering scale, regardless of what bacillus caused it. Assessment of the plague’s economic significance begins with determining the rate of mortality for the initial onslaught in 1347-53 and its frequent recurrences for the balance of the Middle Ages, then unraveling how the plague chose victims according to age, sex, affluence, and place.

Imperfect evidence unfortunately hampers knowing precisely who and how many perished. Many of the Black Death’s contemporary observers, living in an epoch of famine and political, military, and spiritual turmoil, described the plague apocalyptically. A chronicler famously closed his narrative with empty membranes (blank leaves of parchment) should anyone survive to continue it. Others believed as few as one in ten survived. One writer claimed that only fourteen people were spared in London. Although sober eyewitnesses offered more plausible figures, in light of the medieval preference for narrative dramatic force over numerical veracity, chroniclers’ estimates are considered evidence of the Black Death’s battering of the medieval psyche, not an accurate barometer of its demographic toll.

Even non-narrative and presumably dispassionate, systematic evidence (legal and governmental documents, ecclesiastical records, commercial archives) presents challenges. No medieval scribe dragged his quill across parchment for the demographer’s pleasure and convenience. With a paucity of censuses, estimates of population and tracing of demographic trends have often relied on indirect indicators of demographic change (e.g., activity in the land market, levels of rents and wages, size of peasant holdings) or evidence treating only a segment of the population (e.g., assignment of new priests to vacant churches, payments by peasants to take over holdings of the deceased). Even the rare census-like record, like England’s Domesday Book (1086) or the Poll Tax Return (1377), enumerates only heads of households, excludes slices of the populace, ignores whole regions, or does some combination of all of these. To compensate for these imperfections, the demographer relies on potentially debatable assumptions about the size of the medieval household, the representativeness of a discrete group of people, the density of settlement in an undocumented region, the level of tax evasion, and so forth.

A bewildering array of estimates for mortality from the plague of 1347-53 is the result. The first outbreak of the Black Death indisputably was the deadliest, but the death rate varied widely according to place and social stratum. National estimates of mortality for England, where the evidence is fullest, range from five percent, to 23.6 percent among aristocrats holding land from the king, to forty to forty-five percent of the kingdom’s clergy, to over sixty percent in a recent estimate. The picture for the continent likewise is varied. Regional mortality in Languedoc (France) was forty to fifty percent, while sixty to eighty percent of Tuscans (Italy) perished. Urban death rates were mostly higher but no less disparate, e.g., half in Orvieto (Italy), Siena (Italy), and Volterra (Italy), fifty to sixty-six percent in Hamburg (Germany), fifty-eight to sixty-eight percent in Perpignan (France), sixty percent for Barcelona’s (Spain) clerical population, and seventy percent in Bremen (Germany). The Black Death was often highly arbitrary in how it killed in a narrow locale, which no doubt broadened the spectrum of mortality rates. Two of Durham Cathedral Priory’s manors, for instance, had respective death rates of twenty-one and seventy-eight percent (Shrewsbury, 1970; Russell, 1948; Waugh, 1991; Ziegler, 1969; Benedictow, 2004; Le Roy Ladurie, 1976; Bowsky, 1964; Pounds, 1974; Emery, 1967; Gyug, 1983; Aberth, 1995; Lomas, 1989).

Credible death rates between one quarter and three quarters complicate reaching a Europe-wide figure. Neither a casual and unscientific averaging of available estimates to arrive at a probably misleading composite death rate nor a timid placing of mortality somewhere between one and two thirds is especially illuminating. Scholars confronting the problem’s complexity before venturing estimates once favored one third as a reasonable aggregate death rate. Since the early 1970s demographers have found higher levels of mortality plausible and European mortality of one half is considered defensible, a figure not too distant from less fanciful contemporary observations.
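
Part of the difficulty is arithmetical: a simple mean of local death rates gives a hamlet the same weight as a province. The toy Python calculation below, with wholly invented populations and rates, shows how an unweighted average of local estimates can diverge from a population-weighted figure.

# Toy comparison of a simple versus a population-weighted average of local death rates.
# All numbers are invented; they are not estimates from the plague literature.
regions = [
    {"name": "large rural region", "population": 900_000, "death_rate": 0.35},
    {"name": "small city", "population": 50_000, "death_rate": 0.60},
    {"name": "small monastic estate", "population": 2_000, "death_rate": 0.75},
]
simple_mean = sum(r["death_rate"] for r in regions) / len(regions)
weighted_mean = (sum(r["population"] * r["death_rate"] for r in regions)
                 / sum(r["population"] for r in regions))
print(f"simple mean of local rates:     {simple_mean:.2f}")   # about 0.57
print(f"population-weighted death rate: {weighted_mean:.2f}")  # about 0.36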

While the Black Death of 1347-53 inflicted demographic carnage, had it been an isolated event, European population might have recovered to its former level in a generation or two and its economic impact would have been moderate. The disease’s long-term demographic and socioeconomic legacy arose from its recurrence. When both national and local epidemics are taken into account, England endured thirty plague years between 1351 and 1485, a pattern mirrored on the continent, where Perugia was struck nineteen times and Hamburg, Cologne, and Nuremberg at least ten times each in the fifteenth century. The deadliness of outbreaks declined (perhaps ten to twenty percent in the second plague, the pestis secunda of 1361-2, ten to fifteen percent in the third plague, the pestis tertia of 1369, and as low as five and rarely above ten percent thereafter) and outbreaks became more localized; however, the Black Death’s persistence ensured that demographic recovery would be slow and socioeconomic consequences deeper. Europe’s population in 1430 may have been fifty to seventy-five percent lower than in 1290 (Cipolla, 1994; Gottfried, 1983).

Enumeration of corpses does not adequately reflect the Black Death’s demographic impact. Who perished was as significant as how many; in other words, the structure of mortality influenced the timing and rate of demographic recovery. The plague’s preference for urbanite over peasant, man over woman, poor over affluent, and, perhaps most significantly, young over mature shaped its demographic toll. Eyewitnesses so universally reported disproportionate death among the young in the plague’s initial recurrence (1361-2) that it became known as the Children’s Plague (pestis puerorum, mortalité des enfants). If this preference for youth reflected natural resistance to the disease among plague survivors, the Black Death may have ultimately resembled a lower-mortality childhood disease, a reality that magnified both its demographic and psychological impact.

The Black Death pushed Europe into a long-term demographic trough. Notwithstanding anecdotal reports of nearly universal pregnancy of women in the wake of the magna pestilencia, demographic stagnancy characterized the rest of the Middle Ages. Population growth recommenced at different times in different places, but rarely earlier than the second half of the fifteenth century and in many places not until c. 1550.

The European Economy on the Cusp of the Black Death

Like the plague’s death toll, its socioeconomic impact resists categorical measurement. The Black Death’s timing made a facile labeling of it as a watershed in European economic history nearly inevitable. It arrived near the close of an ebullient high Middle Ages (c. 1000 to c. 1300) in which urban life reemerged, long-distance commerce revived, business and manufacturing innovated, manorial agriculture matured, and population burgeoned, doubling or tripling. The Black Death simultaneously portended an economically stagnant, depressed late Middle Ages (c. 1300 to c. 1500). However, even if this simplistic and somewhat misleading portrait of the medieval economy is accepted, isolating the Black Death’s economic impact from manifold factors at play is a daunting challenge.

Cognizant of a qualitative difference between the high and late Middle Ages, students of medieval economy have offered varied explanations, some mutually exclusive, others not, some favoring the less dramatic, less visible, yet inexorable factor as an agent of change rather than a catastrophic demographic shift. For some, a cooling climate undercut agricultural productivity, a downturn that rippled throughout the predominantly agrarian economy. For others, exploitative political, social, and economic institutions enriched an idle elite and deprived working society of wherewithal and incentive to be innovative and productive. Yet others associate monetary factors with the fourteenth- and fifteenth-century economic doldrums.

The particular concerns of the twentieth century unsurprisingly induced some scholars to view the medieval economy through a Malthusian lens. In this reconstruction of the Middle Ages, population growth pressed against the society’s ability to feed itself by the mid-thirteenth century. Rising impoverishment and contracting holdings compelled the peasant to cultivate inferior, low-fertility land and to convert pasture to arable production, thereby inevitably reducing the number of livestock and making manure for fertilizer scarcer. Boosting gross productivity in the immediate term yet driving yields of grain downward in the longer term exacerbated the imbalance between population and food supply; redressing the imbalance became inevitable. This idea’s adherents see signs of demographic correction from the mid-thirteenth century onward, possibly arising in part from marriage practices that reduced fertility. A more potent correction came with subsistence crises. Miserable weather in 1315 destroyed crops and the ensuing Great Famine (1315-22) reduced northern Europe’s population by perhaps ten to fifteen percent. Poor harvests, moreover, bedeviled England and Italy to the eve of the Black Death.

These factors (climate, imperfect institutions, monetary imbalances, overpopulation) diminish the Black Death’s role as a transformative socioeconomic event. In other words, socioeconomic changes already driven by other causes would have occurred anyway, merely more slowly, had the plague never struck Europe. This conviction fosters receptiveness to lower estimates of the Black Death’s deadliness. Recent scrutiny of the Malthusian analysis, especially studies of agriculture in source-rich eastern England, has, however, rehabilitated the Black Death as an agent of socioeconomic change. Growing awareness of the use of “progressive” agricultural techniques and of alternative, non-grain economies less susceptible to a Malthusian population-versus-resources dynamic has undercut the notion of an absolutely overpopulated Europe and has encouraged acceptance of higher rates of mortality from the plague (Campbell, 1983; Bailey, 1989).

The Black Death and the Agrarian Economy

The lion’s share of the Black Death’s effect was felt in the economy’s agricultural sector, unsurprising in a society in which, except in the most urbanized regions, nine of ten people eked out a living from the soil.

A village struck by the plague underwent a profound though brief disordering of the rhythm of daily life. Strong administrative and social structures, the power of custom, and innate human resiliency restored the village’s routine by the following year in most cases: fields were plowed, crops were sown, tended, and harvested, labor services were performed by the peasantry, the village’s lord collected dues from tenants. Behind this seeming normalcy, however, lord and peasant were adjusting to the Black Death’s principal economic consequence: a much smaller agricultural labor pool. Before the plague, rising population had kept wages low and rents and prices high, an economic reality advantageous to the lord in dealing with the peasant and inclining many a peasant to cleave to demeaning yet secure dependent tenure.

As the Black Death swung the balance in the peasant’s favor, the literate elite bemoaned a disintegrating social and economic order. William of Dene, William Langland, John Gower, and others polemically evoked nostalgia for the peasant who knew his place, worked hard, demanded little, and squelched pride, while they condemned their present in which land lay unplowed and only an immediate pang of hunger goaded a lazy, disrespectful, grasping peasant to do a moment’s desultory work (Hatcher, 1994).

Moralizing exaggeration aside, the rural worker indeed demanded and received higher payments in cash (nominal wages) in the plague’s aftermath. Wages in England rose from twelve to twenty-eight percent from the 1340s to the 1350s and twenty to forty percent from the 1340s to the 1360s. Immediate hikes were sometimes more drastic. During the plague year (1348-49) at Fornham All Saints (Suffolk), the lord paid the pre-plague rate of 3d. per acre for more than half of the hired reaping, but the rest cost 5d., an increase of 67 percent. The reaper, moreover, enjoyed more and larger tips in cash and perquisites in kind to supplement the wage. At Cuxham (Oxfordshire), a plowman making 2s. weekly before the plague demanded 3s. in 1349 and 10s. in 1350 (Farmer, 1988; Farmer, 1991; West Suffolk Record Office 3/15.7/2.4; Harvey, 1965).

In some instances, the initial hikes in nominal or cash wages subsided in the years further out from the plague, and any benefit they conferred on the wage laborer was for a time undercut by another economic change fostered by the plague. Grave mortality ensured that the European supply of currency in gold and silver increased on a per-capita basis, which in turn unleashed substantial inflation in prices that did not subside in England until the mid-1370s and even later in many places on the continent. The inflation reduced the purchasing power (real wage) of the wage laborer so significantly that, even with higher cash wages, his earnings either bought him no more or often substantially less than before the magna pestilencia (Munro, 2003; Aberth, 2001).
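
The relationship invoked here is the standard one between nominal and real wages: the real wage is the cash wage deflated by an index of prices. The Python fragment below uses invented figures, not estimates from Munro or Aberth, simply to show how a 25 percent rise in cash wages can coincide with a fall in purchasing power when prices rise by 35 percent.

# Real wage = nominal wage deflated by a price index (illustrative numbers only).
nominal_wage = {"1340s": 2.0, "1350s": 2.5}    # hypothetical daily wage in pence
price_index = {"1340s": 1.00, "1350s": 1.35}   # hypothetical price level, 1340s = 1.00

real_wage = {decade: nominal_wage[decade] / price_index[decade] for decade in nominal_wage}
change = real_wage["1350s"] / real_wage["1340s"] - 1
print(real_wage, f"real-wage change: {change:+.1%}")  # purchasing power falls about 7 percent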

The lord, however, was confronted not only by the roving wage laborer on whom he relied for occasional and labor-intensive seasonal tasks but also by the peasant bound to the soil who exchanged customary labor services, rent, and dues for holding land from the lord. A pool of labor services greatly reduced by the Black Death enabled the servile peasant to bargain for less onerous responsibilities and better conditions. At Tivetshall (Norfolk), vacant holdings deprived its lord of sixty percent of his week-work and all his winnowing services by 1350-51. A fifth of winter and summer week-work and a third of reaping services vanished at Redgrave (Suffolk) in 1349-50 due to the magna pestilencia. If a lord did not make concessions, a peasant often gravitated toward any better circumstance beckoning elsewhere. At Redgrave, for instance, the loss of services in 1349-50 directly due to the plague was followed in 1350-51 by an equally damaging wave of holdings abandoned by surviving tenants. For the medieval peasant, never so tightly bound to the manor as once imagined, the Black Death nonetheless fostered far greater rural mobility. Beyond the loss of labor services, the deceased or absentee peasant paid no rent or dues and rendered no fees for use of manorial monopolies such as mills and ovens, and the lord’s revenues shrank accordingly. The income of English lords contracted by twenty percent from 1347 to 1353 (Norfolk Record Office WAL 1247/288×1; University of Chicago Bacon 335-6; Gottfried, 1983).

Faced with these disorienting circumstances, the lord often ultimately had to decide how or even whether the pre-plague status quo could be reestablished on his estate. Not capitalistic in the sense of maximizing productivity for reinvestment of profits to enjoy yet more lucrative future returns, the medieval lord nonetheless valued stable income sufficient for aristocratic ostentation and consumption. A recalcitrant peasantry, diminished dues and services, and climbing wages undermined the material foundation of the noble lifestyle, jostled the aristocratic sense of proper social hierarchy, and invited a response.

In exceptional circumstances, a lord sometimes kept the peasant bound to the land. Because the nobility in Spanish Catalonia had already tightened control of the peasantry before the Black Death, because underdeveloped commercial agriculture provided the peasantry narrow options, and because the labor-intensive demesne agriculture common elsewhere was largely absent, the Catalan lord through a mix of coercion (physical intimidation, exorbitant fees to purchase freedom) and concession (reduced rents, conversion of servile dues to less humiliating fixed cash payments) kept the Catalan peasant in place. In England and elsewhere on the continent, where labor services were needed to till the demesne, such a conservative approach was less feasible. This, however, did not deter some lords from trying. The lord of Halesowen (Worcestershire) not only commanded the servile tenant to perform the full range of services but also resuscitated labor obligations in abeyance long before the Black Death, tantamount to an unwillingness to acknowledge anything had changed (Freedman, 1991; Razi, 1981).

Europe’s political elite also looked to legal coercion not only to contain rising wages and to limit the peasant’s mobility but also to allay a sense of disquietude and disorientation arising from the Black Death’s buffeting of pre-plague social realities. England’s Ordinance of Laborers (1349) and Statute of Laborers (1351) called for a return to the wages and terms of employment of 1346. Labor legislation was likewise promulgated by the Cortes of Aragon and Castile, the French crown, and cities such as Siena, Orvieto, Pisa, Florence, and Ragusa. The futility of capping wages by legislative fiat is evident in the French crown’s 1351 revision of its 1349 enactment to permit a wage increase of one third. Perhaps only in England, where effective government permitted robust enforcement, did the law slow wage increases for a time (Aberth, 2001; Gottfried, 1983; Hunt and Murray, 1999; Cohn, 2007).

Once knee-jerk conservatism and legislative palliatives failed to revivify pre-plague socioeconomic arrangements, the lord cast about for a modus vivendi in a new world of abundant land and scarce labor. A sober triage of the available sources of labor, whether casual wage labor, a manor’s permanent stipendiary staff (famuli), or the dependent peasantry, led to revision of managerial policy. The abbot of Saint Edmund’s, for example, focused on reconstituting the permanent staff on his manors. Despite mortality and flight, the abbot by and large achieved his goal by the mid-1350s. While labor legislation may have facilitated this, the abbot’s provision of more frequent and lucrative seasonal rewards, coupled with the payment of grain stipends in more valuable and marketable cereals such as wheat, no doubt helped secure the loyalty of the famuli while circumventing statutory limits on higher wages. With this core of labor solidified, the focus turned to preserving the most essential labor services, especially those associated with the labor-intensive harvest season. Less vital labor services were commuted for cash payments and ad hoc wage labor was then hired to fill gaps. The cultivation of the demesne continued, though not on the pre-plague scale.

For a time, in fact, circumstances helped the lord continue direct management of the demesne. The general inflation of the quarter-century following the plague as well as poor harvests in the 1350s and 1360s boosted grain prices and partially compensated for more expensive labor. This so-called “Indian summer” of demesne agriculture ended quickly in the mid-1370s in England and subsequently on the continent, when the post-plague inflation gave way to deflation and abundant harvests drove prices for commodities downward, where they remained, aside from brief intervals of inflation, for the rest of the Middle Ages. Recurrences of the plague, moreover, placed further stress on new managerial policies. For the lord who successfully persuaded new tenants to take over vacant holdings, as happened at Chevington (Suffolk) by the late 1350s, the pestis secunda of 1361-62 often inflicted a decisive blow: a second recovery at Chevington never materialized (West Suffolk Records Office 3/15.3/2.9-2.23).

Under unremitting pressure, the traditional cultivation of the demesne ceased to be viable for lord after lord: a centuries-old manorial system gradually unraveled and the nature of agriculture was transformed. The lord’s earliest concession to this new reality was curtailment of cultivated acreage, a trend that accelerated with time. The 590.5 acres sown on average at Great Saxham (Suffolk) in the late 1330s, for instance, were more than halved (to 288.67 acres) in the 1360s (West Suffolk Record Office, 3/15.14/1.1, 1.7, 1.8).

Beyond reducing the demesne to a size commensurate with available labor, the lord could explore types of husbandry less labor-intensive than traditional grain agriculture. Greater domestic manufacture of woolen cloth and growing demand for meat enabled many English lords to reduce arable production in favor of sheep-raising, which required far less labor. Livestock husbandry likewise became more significant on the continent. Suitable climate, soil, and markets made grapes, olives, apples, pears, vegetables, hops, hemp, flax, silk, and dye-stuffs attractive alternatives to grain. In hope of selling these cash crops, rural agriculture became more attuned to urban demand, and urban businessmen and investors became more intimately involved in what was grown in the countryside and in what quantity (Gottfried, 1983; Hunt and Murray, 1999).

The lord also looked to reduce losses from demesne acreage no longer under the plow and from the vacant holdings of onetime tenants. Measures adopted to achieve this end initiated a process that gained momentum with each passing year until the face of the countryside was transformed and manorialism was dead. The English landlord, hopeful for a return to the pre-plague regime, initially granted brief terminal leases of four to six years at fixed rates for bits of demesne and for vacant dependent holdings. Leases over time lengthened to ten, twenty, thirty years, or even a lifetime. In France and Italy, the lord often resorted to métayage or mezzadria leasing, a type of sharecropping in which the lord contributed capital (land, seed, tools, plow teams) to the lessee, who did the work and surrendered a fraction of the harvest to the lord.

Disillusioned by growing obstacles to profitable cultivation of the demesne, the lord, especially in the late fourteenth century and the early fifteenth, adopted a more sweeping type of leasing, the placing of the demesne or even the entire manor “at farm” (ad firmam). A “farmer” (firmarius) paid the lord a fixed annual “farm” (firma) for the right to exploit the lord’s property and take whatever profit he could. The distant or unprofitable manor was usually “farmed” first and other manors followed until a lord’s personal management of his property often ceased entirely. The rising popularity of this expedient made direct management of the demesne by the lord rare by c. 1425. The lord often became a rentier bound to a fixed income. The tenurial transformation was completed when the lord sold to the peasant his right of lordship, a surrender to the peasant of outright possession of his holding for a fixed cash rent and freedom from dues and services. Manorialism, in effect, collapsed and was gone from western and central Europe by 1500.

The landlord’s discomfort ultimately benefited the peasantry. Lower prices for foodstuffs and greater purchasing power from the last quarter of the fourteenth century onward, progressive disintegration of demesnes, and waning customary land tenure enabled the enterprising, ambitious peasant to lease or purchase property and become a substantial landed proprietor. The average size of the peasant holding grew in the late Middle Ages. Due to the peasant’s generally improved standard of living, the century and a half following the magna pestilencia has been labeled a “golden age” in which the most successful peasant became a “yeoman” or “kulak” within the village community. Freed from labor service, holding a fixed copyhold lease, and enjoying greater disposable income, the peasant exploited his land exclusively for his personal benefit and often pursued leisure and some of the finer things in life. Consumption of meat by England’s humbler social strata rose substantially after the Black Death, a shift in consumer tastes that reduced demand for grain and helped make viable the shift toward pastoralism in the countryside. Late medieval sumptuary legislation, intended to keep the humble from dressing above their station and to retain the distinction between low- and highborn, attests both to the peasant’s greater income and to the elite’s desire to limit disorienting social change (Dyer, 1989; Gottfried, 1983; Hunt and Murray, 1999).

The Black Death, moreover, profoundly altered the contours of settlement in the countryside. Catastrophic loss of population led to abandonment of less attractive fields, contraction of existing settlements, and even wholesale desertion of villages. More than 1300 English villages vanished between 1350 and 1500. French and Dutch villagers abandoned isolated farmsteads and huddled in smaller villages while their Italian counterparts vacated remote settlements and shunned less desirable fields. The German countryside was mottled with abandoned settlements. Two thirds of named villages disappeared in Thuringia, Anhalt, and the eastern Harz mountains, one fifth in southwestern Germany, and one third in the Rhenish Palatinate, abandonment far exceeding loss of population and possibly arising from migration from smaller to larger villages (Gottfried, 1983; Pounds, 1974).

The Black Death and the Commercial Economy

As with agriculture, assessment of the Black Death’s impact on the economy’s commercial sector is a complex problem. The vibrancy of the high medieval economy is generally conceded. As the first millennium gave way to the second, urban life revived, trade and manufacturing flourished, merchant and craft gilds emerged, and commercial and financial innovations proliferated (e.g., partnerships, maritime insurance, double-entry bookkeeping, fair letters, letters of credit, bills of exchange, loan contracts, merchant banking). The integration of the high medieval economy reached its zenith c. 1250 to c. 1325 with the rise of large companies with international interests, such as the Bonsignori of Siena and the Buonaccorsi of Florence, and the emergence of so-called “super companies” such as the Florentine Bardi, Peruzzi, and Acciaiuoli (Hunt and Murray, 1999).

How to characterize the late medieval economy has been more fraught with controversy, however. Historians a century past, uncomprehending of how their modern world could be rooted in a retrograde economy, imagined an entrepreneurially creative and expansive late medieval economy. Succeeding generations of historians darkened this optimistic portrait and fashioned a late Middle Ages of unmitigated decline, an “age of adversity” in which the economy was placed under the rubric “depression of the late Middle Ages.” The historiographical pendulum now swings away from this interpretation and a more nuanced picture has emerged that gives the Black Death’s impact on commerce its full due but emphasizes the variety of the plague’s impact from merchant to merchant, industry to industry, and city to city. Success or failure was equally possible after the Black Death and the game favored adaptability, creativity, nimbleness, opportunism, and foresight.

Once the magna pestilencia had passed, the city had to cope with a labor supply even more severely depleted than in the countryside due to a generally higher urban death rate. The city, however, could reverse some of this damage by attracting, as it had for centuries, new workers from the countryside, a phenomenon that deepened the crisis for the manorial lord and contributed to changes in rural settlement. A resurgence of the slave trade occurred in the Mediterranean, especially in Italy, where the female slave from Asia or Africa entered domestic service in the city and the male slave toiled in the countryside. Finding more labor was not, however, a panacea. A peasant or slave performed an unskilled task adequately but could not necessarily replace a skilled laborer. The gross loss of talent due to the plague caused a decline in the per capita productivity of skilled labor that was remediable only by time and training (Hunt and Murray, 1999; Miskimin, 1975).

Another immediate consequence of the Black Death was dislocation of the demand for goods. A suddenly and sharply smaller population ensured a glut of manufactured and trade goods, whose prices plummeted for a time. The businessman who successfully weathered this short-term imbalance in supply and demand then had to reshape his business’ output to fit a declining or at best stagnant pool of potential customers.

The Black Death transformed the structure of demand as well. While the standard of living of the peasant improved, chronically low prices for grain and other agricultural products from the late fourteenth century may have deprived the peasant of the additional income needed to purchase enough manufactured or trade items to fill the hole in commercial demand. In the city, however, the plague concentrated wealth, often substantial family fortunes, in fewer and often younger hands, a circumstance that, when coupled with lower prices for grain, left greater per capita disposable income. The plague’s psychological impact, moreover, is believed to have influenced how this windfall was used. Pessimism and the specter of death spurred an individualistic pursuit of pleasure, a hedonism that manifested itself in the purchase of luxuries, especially in Italy. Even with a reduced population, the gross volume of luxury goods manufactured and sold rose, a pattern of consumption that endured even after the windfall itself had been exhausted, within a generation or so of the magna pestilencia.

Like the manorial lord, the affluent urban bourgeois sometimes employed structural impediments to block the ambitious parvenu from joining his ranks and becoming a competitor. A tendency toward limiting the status of gild master to the son or son-in-law of a sitting master, evident in the first half of the fourteenth century, gained further impetus after the Black Death. The need for more journeymen after the plague was conceded in the shortening of terms of apprenticeship, but the newly minted journeyman often discovered that his chance of breaking through the glass ceiling and becoming a master was virtually nil without an entrée through kinship. Women also were banished from gilds as unwanted competition. The urban wage laborer, by and large controlled by the gilds, was denied membership and had no access to urban structures of power, a potent source of frustration. While these measures may have permitted the bourgeois to hold his ground for a time, the winds of change were blowing in the city as well as the countryside and gild monopolies and gild restrictions were fraying by the close of the Middle Ages.

In the new climate created by the Black Death, the individual businessman did retain an advantage: the business judgment and techniques honed during the high Middle Ages. This was crucial in a contracting economy in which gross productivity never attained its high medieval peak and in which the prevailing pattern was boom and bust on a roughly generational basis. A fluctuating economy demanded adaptability and the most successful post-plague businessman not merely weathered bad times but located opportunities within adversity and exploited them. The post-plague entrepreneur’s preference for short-term rather than long-term ventures, once believed a product of a gloomy despondency caused by the plague and exacerbated by endemic violence, decay of traditional institutions, and nearly continuous warfare, is now viewed as a judicious desire to leave open entrepreneurial options, to manage risk effectively, and to take advantage of whatever better opportunity arose. The successful post-plague businessman observed markets closely and responded to them while exercising strict control over his concern, looking for greater efficiency, and trimming costs (Hunt and Murray, 1999).

The fortunes of the textile industry, a trade singularly susceptible to contracting markets and rising wages, best underscore the importance of flexibility. Competition among textile manufacturers, already great even before the Black Death due to excess productive capacity, was magnified when England entered the market for low- and medium-quality woolen cloth after the magna pestilencia and was exporting forty thousand pieces annually by 1400. The English took advantage of proximity to the raw material, the wool England itself produced, a pattern increasingly common in late medieval business. When English producers were undeterred by a Flemish embargo on English cloth, the Flemish and Italians, the textile trade’s other principal players, were compelled to adapt in order to compete. Flemish producers that emphasized higher-grade, luxury textiles or that purchased, improved, and resold cheaper English cloth prospered, while those that stubbornly competed head-to-head with the English in lower-quality woolens suffered. The Italians not only produced luxury woolens, improved their domestically produced wool, found sources for wool outside England (Spain), and increased production of linen but also produced silks and cottons, once only imported into Europe from the East (Hunt and Murray, 1999).

The new mentality of the successful post-plague businessman is exemplified by the Florentines Gregorio Dati and Buonaccorso Pitti and especially by the celebrated merchant of Prato, Francesco di Marco Datini. The large companies and super companies, some of which failed even before the Black Death, were not well suited to the post-plague commercial economy. Datini’s family business, with its limited geographical ambitions, exercised tighter control, was more nimble and flexible as opportunities vanished or materialized, and managed risk more effectively, all keys to success. Through voluminous correspondence with his business associates, subordinates, and agents, and through conspicuously careful and regular accounting, Datini grasped the reins of his concern tightly. He insulated himself from undue risk by never committing too heavily to any individual venture, by dividing cargoes among ships or by insuring them, by never lending money to notoriously uncreditworthy princes, and by remaining as apolitical as he could. His energy and drive to complete every business venture likewise served him well and made him an exemplar of commercial success in a challenging era (Origo, 1957; Hunt and Murray, 1999).

The Black Death and Popular Rebellion

The late medieval popular uprising, a phenomenon with undeniable economic ramifications, is often linked with the demographic, cultural, social, and economic reshuffling caused by the Black Death; however, the connection between pestilence and revolt is neither exclusive nor linear. Any single uprising was rarely susceptible to a single-cause analysis, and just as rarely was a single socioeconomic interest group the fomenter of disorder. The outbreak of rebellion in the first half of the fourteenth century (e.g., in urban [1302] and maritime [1325-28] Flanders and in English monastic towns [1326-27]) indicates the existence of socioeconomic and political disgruntlement well before the Black Death.

Some explanations for popular uprising, such as the placing of immediate stresses on the populace and the cumulative effect of centuries of oppression by manorial lords, are now largely dismissed. At the times of greatest stress, the Great Famine and the Black Death, disorder occurred but no large-scale, organized uprising materialized. Manorial oppression likewise is difficult to defend when the peasant in the plague’s aftermath was often enjoying better pay, reduced dues and services, broader opportunities, and a higher standard of living. Detailed study of the participants in the revolts most often labeled “peasant” uprisings has revealed the central involvement and apparent common cause of urban and rural tradesmen and craftsmen, not only manorial serfs.

The Black Death may indeed have made its greatest contribution to popular rebellion by expanding the peasant’s horizons and fueling a sense of grievance at the pace of change, not at its absence. The plague may also have undercut adherence to the notion of a divinely sanctioned, static social order and buffeted a belief that preservation of manorial socioeconomic arrangements was essential to the survival of all, which in turn may have raised receptiveness to the apocalyptic, socially revolutionary message of preachers like England’s John Ball. After the Black Death, change was inevitable and apparent to all.

The reasons for any individual rebellion were complex. Measures in the environs of Paris to check wage hikes caused by the plague doubtless fanned discontent and contributed to the outbreak of the Jacquerie of 1358, but high taxation to finance the Hundred Years’ War, depredation by marauding mercenary bands in the French countryside, and the peasantry’s conviction that the nobility had failed them in war also roiled popular discontent. In the related urban revolt led by Étienne Marcel (1355-58), tensions arose from the Parisian bourgeoisie’s discontent with the war’s progress, the crown’s imposition of regressive sales and head taxes, and the devaluation of currency rather than from change attributable to the Black Death.

In the English Peasants’ Rebellion of 1381, continued enforcement of the Statute of Laborers no doubt rankled and perhaps made the peasantry more open to provocative sermonizing, but labor legislation had not halted higher wages or improvement in the standard of living of the peasant. Discontent more likely arose from an unsatisfying pace of improvement in the peasant’s lot. The regressive Poll Taxes of 1380 and 1381 also contributed to the discontent. It is furthermore noteworthy that the rebellion began in relatively affluent eastern England, not in the poorer west or north.

In the Ciompi revolt in Florence (1378-83), restrictive gild regulations and the denial of political voice to workers raised tensions in the wake of the Black Death; however, Florence’s war with the papacy and an economic slump in the 1370s, which brought devaluation of the penny in which the worker was paid, were equally if not more important in fomenting unrest. Once the value of the penny was restored to its former level in 1383, the rebellion in fact subsided.

In sum, the Black Death played some role in each uprising but, as with many medieval phenomena, it is difficult to gauge its importance relative to other causes. Perhaps the plague’s greatest contribution to unrest lay in its fostering of a shrinking economy that for a time was less able to absorb socioeconomic tensions than had the growing high medieval economy. The rebellions in any event achieved little. Promises made to the rebels were invariably broken and brutal reprisals often followed. The lot of the lower socioeconomic strata was improved incrementally by the larger economic changes already at work. Viewed from this perspective, the Black Death may have had more influence in resolving the worker’s grievances than in spurring revolt.

Conclusion

The European economy at the close of the Middle Ages (c. 1500) differed fundamentally from the pre-plague economy. In the countryside, a freer peasant derived greater material benefit from his toil. Fixed rents, if not outright ownership of land, had largely displaced customary dues and services and, despite low grain prices, the peasant more readily fed himself and his family from his own land and produced a surplus for the market. Yields improved as reduced population permitted a greater focus on fertile lands and more frequent fallowing, a beneficial phenomenon for the peasant. More pronounced socioeconomic gradations developed among peasants as some, especially the more prosperous, exploited the changed circumstances, especially the availability of land. The peasant’s gain was the lord’s loss. As the Middle Ages waned, the lord was commonly a pure rentier whose income was subject to the depredations of inflation.

In trade and manufacturing, the relative ease of success during the high Middle Ages gave way to greater competition, which rewarded better business practices and leaner, meaner, and more efficient concerns. Greater sensitivity to the market and the cutting of costs ultimately rewarded the European consumer with a wider range of goods at better prices.

In the long term, the demographic restructuring caused by the Black Death perhaps fostered the possibility of new economic growth. The pestilence returned Europe’s population roughly to its level c. 1100. As one scholar notes, the Black Death, unlike other catastrophes, destroyed people but not property, and the attenuated population was left with the whole of Europe’s resources to exploit, resources far more substantial by 1347 than they had been two and a half centuries earlier, when they had been created from the ground up. In this environment, survivors also benefited from the technological and commercial skills developed during the course of the high Middle Ages. Viewed from another perspective, the Black Death was a cataclysmic event and retrenchment was inevitable, but it ultimately diminished economic impediments and opened new opportunity.

References and Further Reading:

Aberth, John. “The Black Death in the Diocese of Ely: The Evidence of the Bishop’s Register.” Journal of Medieval History 21 (1995): 275-87.

Aberth, John. From the Brink of the Apocalypse: Confronting Famine, War, Plague, and Death in the Later Middle Ages. New York: Routledge, 2001.

Aberth, John. The Black Death: The Great Mortality of 1348-1350, a Brief History with Documents. Boston and New York: Bedford/St. Martin’s, 2005.

Aston, T. H. and C. H. E. Philpin, eds. The Brenner Debate: Agrarian Class Structure and Economic Development in Pre-Industrial Europe. Cambridge: Cambridge University Press, 1985.

Bailey, Mark D. “Demographic Decline in Late Medieval England: Some Thoughts on Recent Research.” Economic History Review 49 (1996): 1-19.

Bailey, Mark D. A Marginal Economy? East Anglian Breckland in the Later Middle Ages. Cambridge: Cambridge University Press, 1989.

Benedictow, Ole J. The Black Death, 1346-1353: The Complete History. Woodbridge, Suffolk: Boydell Press, 2004.

Bleukx, Koenraad. “Was the Black Death (1348-49) a Real Plague Epidemic? England as a Case Study.” In Serta Devota in Memoriam Guillelmi Lourdaux. Pars Posterior: Cultura Medievalis, edited by W. Verbeke, M. Haverals, R. de Keyser, and J. Goossens, 64-113. Leuven: Leuven University Press, 1995.

Blockmans, Willem P. “The Social and Economic Effects of Plague in the Low Countries, 1349-1500.” Revue Belge de Philologie et d’Histoire 58 (1980): 833-63.

Bolton, Jim L. “‘The World Upside Down’: Plague as an Agent of Economic and Social Change.” In The Black Death in England, edited by M. Ormrod and P. Lindley. Stamford: Paul Watkins, 1996.

Bowsky, William M. “The Impact of the Black Death upon Sienese Government and Society.” Speculum 38 (1964): 1-34.

Campbell, Bruce M. S. “Agricultural Progress in Medieval England: Some Evidence from Eastern Norfolk.” Economic History Review 36 (1983): 26-46.

Campbell, Bruce M. S., ed. Before the Black Death: Studies in the ‘Crisis’ of the Early Fourteenth Century. Manchester: Manchester University Press, 1991.

Cipolla, Carlo M. Before the Industrial Revolution: European Society and Economy, 1000-1700, third edition. New York: Norton, 1994.

Cohn, Samuel K. The Black Death Transformed: Disease and Culture in Early Renaissance Europe. London: Edward Arnold, 2002.

Cohn, Samuel K. “After the Black Death: Labour Legislation and Attitudes toward Labour in Late-Medieval Western Europe.” Economic History Review 60 (2007): 457-85.

Davis, David E. “The Scarcity of Rats and the Black Death.” Journal of Interdisciplinary History 16 (1986): 455-70.

Davis, R. A. “The Effect of the Black Death on the Parish Priests of the Medieval Diocese of Coventry and Lichfield.” Bulletin of the Institute of Historical Research 62 (1989): 85-90.

Drancourt, Michel, Gerard Aboudharam, Michel Signoli, Olivier Detour, and Didier Raoult. “Detection of 400-Year-Old Yersinia Pestis DNA in Human Dental Pulp: An Approach to the Diagnosis of Ancient Septicemia.” Proceedings of the National Academy of Sciences of the United States of America 95 (1998): 12637-40.

Dyer, Christopher. Standards of Living in the Middle Ages: Social Change in England, c. 1200-1520. Cambridge: Cambridge University Press, 1989.

Emery, Richard W. “The Black Death of 1348 in Perpignan.” Speculum 42 (1967): 611-23.

Farmer, David L. “Prices and Wages.” In The Agrarian History of England and Wales, Vol. II, edited by H. E. Hallam, 715-817. Cambridge: Cambridge University Press, 1988.

Farmer, D. L. “Prices and Wages, 1350-1500.” In The Agrarian History of England and Wales, Vol. III, edited by E. Miller, 431-94. Cambridge: Cambridge University Press, 1991.

Flinn, Michael W. “Plague in Europe and the Mediterranean Countries.” Journal of European Economic History 8 (1979): 131-48.

Freedman, Paul. The Origins of Peasant Servitude in Medieval Catalonia. New York: Cambridge University Press, 1991.

Gottfried, Robert. The Black Death: Natural and Human Disaster in Medieval Europe. New York: Free Press, 1983.

Gyug, Richard. “The Effects and Extent of the Black Death of 1348: New Evidence for Clerical Mortality in Barcelona.” Mediæval Studies 45 (1983): 385-98.

Harvey, Barbara F. “The Population Trend in England between 1300 and 1348.” Transactions of the Royal Historical Society, 4th ser., 16 (1966): 23-42.

Harvey, P. D. A. A Medieval Oxfordshire Village: Cuxham, 1240-1400. London: Oxford University Press, 1965.

Hatcher, John. “England in the Aftermath of the Black Death.” Past and Present 144 (1994): 3-35.

Hatcher, John and Mark Bailey. Modelling the Middle Ages: The History and Theory of England’s Economic Development. Oxford: Oxford University Press, 2001.

Hatcher, John. Plague, Population, and the English Economy, 1348-1530. London and Basingstoke: Macmillan Press Ltd., 1977.

Herlihy, David. The Black Death and the Transformation of the West, edited by S. K. Cohn. Cambridge and London: Cambridge University Press, 1997.

Horrox, Rosemary, transl. and ed. The Black Death. Manchester: Manchester University Press, 1994.

Hunt, Edwin S. and James M. Murray. A History of Business in Medieval Europe, 1200-1550. Cambridge: Cambridge University Press, 1999.

Jordan, William C. The Great Famine: Northern Europe in the Early Fourteenth Century. Princeton: Princeton University Press, 1996.

Lehfeldt, Elizabeth, ed. The Black Death. Boston: Houghton Mifflin, 2005.

Lerner, Robert E. The Age of Adversity: The Fourteenth Century. Ithaca: Cornell University Press, 1968.

Le Roy Ladurie, Emmanuel. The Peasants of Languedoc, transl. J. Day. Urbana: University of Illinois Press, 1976.

Lomas, Richard A. “The Black Death in County Durham.” Journal of Medieval History 15 (1989): 127-40.

McNeill, William H. Plagues and Peoples. Garden City, New York: Anchor Books, 1976.

Miskimin, Harry A. The Economy of the Early Renaissance, 1300-1460. Cambridge: Cambridge University Press, 1975.

Morris, Christopher. “The Plague in Britain.” Historical Journal 14 (1971): 205-15.

Munro, John H. “The Symbiosis of Towns and Textiles: Urban Institutions and the Changing Fortunes of Cloth Manufacturing in the Low Countries and England, 1270-1570.” Journal of Early Modern History 3 (1999): 1-74.

Munro, John H. “Wage-Stickiness, Monetary Changes, and Real Incomes in Late-Medieval England and the Low Countries, 1300-1500: Did Money Matter?” Research in Economic History 21 (2003): 185-297.

Origo, Iris. The Merchant of Prato: Francesco di Marco Datini, 1335-1410. Boston: David R. Godine, 1957, 1986.

Platt, Colin. King Death: The Black Death and its Aftermath in Late-Medieval England. Toronto: University of Toronto Press, 1996.

Poos, Lawrence R. A Rural Society after the Black Death: Essex, 1350-1575. Cambridge: Cambridge University Press, 1991.

Postan, Michael M. The Medieval Economy and Society: An Economic History of Britain in the Middle Ages. Harmondsworth, Middlesex: Penguin, 1975.

Pounds, Norman J. D. An Economic History of Europe. London: Longman, 1974.

Raoult, Didier, Gerard Aboudharam, Eric Crubézy, Georges Larrouy, Bertrand Ludes, and Michel Drancourt. “Molecular Identification by ‘Suicide PCR’ of Yersinia Pestis as the Agent of Medieval Black Death.” Proceedings of the National Academy of Sciences of the United States of America 97 (7 Nov. 2000): 12800-3.

Razi, Zvi. “Family, Land, and the Village Community in Later Medieval England.” Past and Present 93 (1981): 3-36.

Russell, Josiah C. British Medieval Population. Albuquerque: University of New Mexico Press, 1948.

Scott, Susan and Christopher J. Duncan. Return of the Black Death: The World’s Deadliest Serial Killer. Chichester, West Sussex and Hoboken, NJ: Wiley, 2004.

Shrewsbury, John F. D. A History of Bubonic Plague in the British Isles. Cambridge: Cambridge University Press, 1970.

Twigg, Graham. The Black Death: A Biological Reappraisal. London: Batsford Academic and Educational, 1984.

Waugh, Scott L. England in the Reign of Edward III. Cambridge: Cambridge University Press, 1991.

Ziegler, Philip. The Black Death. London: Penguin, 1969, 1987.

Citation: Routt, David. “The Economic Impact of the Black Death”. EH.Net Encyclopedia, edited by Robert Whaples. July 20, 2008. URL http://eh.net/encyclopedia/the-economic-impact-of-the-black-death/

Apprenticeship in the United States

Daniel Jacoby, University of Washington, Bothell

Once the principal means by which craft workers learned their trades, apprenticeship plays a relatively small part in American life today. The essence of this institution has always involved an exchange of labor for training, yet apprenticeship has been far from constant over time as its survival in the United States has required nearly continual adaptation to new challenges.

Four distinct challenges define the periods of major apprenticeship changes. The colonial period required the adaptation of Old World practices to New World contexts. In the era of the new republic, apprenticeship was challenged by the clash between traditional authority and the logic of expanding markets and contracts. The main concern after the Civil War was to find a training contract that could resolve the heightening tensions between organized labor and capital. Finally, in the modern era following World War I, industrialization’s skill-leveling effects posed a challenge that apprenticeship largely failed to meet. Apprenticeship lost ground as schooling increasingly became the preferred vehicle for the upward social mobility that offset the leveling effects of industrialization. After reviewing these episodes, this essay concludes by speculating whether we are now in a new era of challenges that will reshape apprenticeship.

Apprenticeship came to American soil by way of England, where it was the first step on the road to economic independence. In England, master craftsmen hired apprentices in an exchange of training for service. Once their term of apprenticeship was completed, former apprentices traveled from employer to employer earning wages as journeymen. When, or if, they accumulated enough capital, journeymen set up shop as independent masters and became members of their craft guilds. These institutions had the power to bestow and withdraw rights and privileges upon their members, and thereby to regulate competition among themselves.

One major concern of the guilds was to prevent unrestricted trade entry, and thus apprenticeship became the object of much regulation. Epstein (1998), however, argues that monopoly or rent-seeking activity (the deliberate production of scarcity) was only incidental to the guilds’ primary interest in supplying skilled workmen. To the extent that guilds successfully regulated apprenticeship in Britain, that pattern was less readily replicated in the Americas, whose colonists came to exploit the bounty of natural resources under mercantilistic proscriptions that forbade most forms of manufacturing. The result was an agrarian society practically devoid of large towns and guilds. Absent these entities, the regulation of apprenticeship relied upon government actions that appear to have become more pronounced towards the mid-eighteenth century. The passage of Britain’s 1563 Statute of Artificers involved government regulation in the Old World as well. However, as Davies (1956) shows, English apprenticeship was different in that craft guilds and their attendant traditions were more significant.

The Colonial Period

During the colonial period, the U.S. was predominantly an agrarian society. As late as 1790 no city possessed a population in excess of 50,000. In 1740, the largest colonial city, Philadelphia, possessed 13,000 inhabitants. Even so, the colonies could not operate successfully without some skilled tradesmen in fields like carpentry, cordwaining (shoemaking), and coopering (barrel making). Neither the training of slaves nor the immigration of skilled European workmen was sufficient to prevent labor-short colonies from developing their own apprenticeship systems. No uniform system of apprenticeship developed because municipalities, and even states, lacked the authority either to enforce their rules outside their own jurisdictions or to restore distant runaways to their masters. Accordingly, apprenticeship remained a local institution.

Records from the colonial period are sparse, but both Philadelphia and Boston have preserved important evidence. In Philadelphia, Quimby (1963) traced official apprenticeship back, at least, to 1716. By 1745 the city had recorded 149 indentures in 33 crafts. The stock of apprentices grew more rapidly than did population and after an additional 25 years it had reached 537.

Quimby’s colonial Philadelphia data indicate that apprenticeship typically consigned boys, aged 14 to 17, to serve their masters until their twenty-first birthdays. Girls, too, were apprenticed, but females comprised less than one-fifth of recorded indentures, most of them bound to learn housewifery. One significant variation on the standard indenture involved the binding of parish orphans. Such paupers were usually indentured to less remunerative trades, most commonly farming. Yet another variation involved the coveted apprenticeships with merchants, lawyers, and other professions. In these instances, parents usually paid masters beforehand to take their children.

Apprenticeship’s distinguishing feature was its contract of indenture, which elaborated the terms of the arrangement. This contract differed in two major ways from the contracts of indenture that bound immigrants. First, the apprenticeship contract involved young people and, as such, required the signature of their parents or guardians. Second, indentured servitude, which Galenson (1981) argues was adapted from apprenticeship, substituted Atlantic transportation for trade instruction in the exchange of a servant’s labor. Both forms of labor involved some degree of mutuality or voluntary agreement. In apprenticeship, however, legal or natural parents transferred legal authority over their child to another, the apprentice’s master, for a substantial portion of his or her youth. In exchange for rights to their child’s labor, parents were also relieved of direct responsibility for child rearing and occupational training. Thus the child’s consent could be of less consequence than that of the parents.

The articles of indenture typically required apprentices to serve their terms faithfully and obediently. Indentures commonly included clauses prohibiting specific behaviors, such as playing dice or fornication. Masters generally pledged themselves to raise, feed, lodge, educate, and train apprentices and then to provide “freedom dues” consisting of clothes, tools, or money once they completed the terms of their indentures. Parents or guardians were co-signatories of the agreements. Although practice in the American colonies is incompletely documented, we know that in Canada parents were held financially responsible to apprentice masters when their children ran away.

To enforce their contracts parties to the agreement could appeal to local magistrates. Problems arose for many reasons, but the long duration of the contract inevitably involved unforeseen contingencies giving rise to dissatisfactions with the arrangements. Unlike other simple exchanges of goods, the complications of child rearing inevitably made apprenticeship a messy concern.

The Early Republic

William Rorabaugh (1986) argues that the revolutionary era increased the complications inherent in apprenticeship. The rhetoric of independence could not be contained within the formal political realm involving relations between nations, but instead spilled into the interpersonal realm, wherein the independence to govern one’s self challenged traditions of deference based upon social status. Freedom was increasingly equated with contractual relations and consent. However, exchange based on contract undermined the authority of masters. And so it was with servants and apprentices who, empowered by Republican ideology, began to challenge their masters, conceiving of themselves not as willful children but as free and independent citizens of the Revolution.

The revolutionary logic of contract ate away at the edges of the long-term apprenticeship relationship, and such indentures became substantially less common in the first half of the nineteenth century. Gillian Hamilton (2000) has tested whether the decline in apprenticeship stemmed from problems in enforcing long-term contracts, or whether it was the result of a shift by employers to hire unskilled workers for factory work. While neither theory alone explains the decline in the stock of apprenticeship contracts, both demonstrate how emerging contractual relations undermined tradition by providing new choices. During this period she finds that masters began to pay their apprentices, that over time those payments rose more steeply with experience, and that indenture contracts were shortened, all of which suggest employers consciously patterned contracts to reduce the turnover that resulted when apprentices left for preferable situations. That employers increasingly preferred to be freed from the long-term obligations they owed their apprentices suggests that these responsibilities in loco parentis imposed burdens upon masters as well as apprentices. The payment of money wages reflected, in part, the costs associated with the master’s parental authority, costs that could now more easily be avoided in urban areas by shifting responsibilities back to youths and their parents.

Hamilton’s evidence comes from Montreal, where indentures were centrally recorded. While Canadian experiences differed in several identifiable ways from those in the United States, the broader trends she describes are consistent with those observed in the United States. In Frederick County, Maryland, for example, Rorabaugh (1986) finds that the percentage of white males formally bound as apprentices fell from nearly 20% of boys aged 15 to 20 to less than 1% between 1800 and 1860. The U.S. decline, however, is more difficult to gauge because informal apprenticeship arrangements that were not officially recorded appear to have risen. In key respects, issues pertaining to the master’s authority remained an unresolved complication preventing a uniform apprenticeship system and encouraging informal apprenticeship arrangements well into the period after slavery was abolished.

Postbellum Period

While the Thirteenth Amendment to the U.S. Constitution in 1865 formally ended involuntary servitude, the boundary line between involuntary and voluntary contracts remained problematic, especially in regard to apprenticeship. Although courts explained that labor contracts enforced under penalty of imprisonment generally created involuntary servitude, employers explored contract terms that gave them unusual authority over their apprentices. States sometimes developed statutes to protect minors by prescribing the terms of legally enforceable apprenticeship indentures. Yet doing so necessarily limited freedom of contract, making it difficult, if not impossible, to rearrange the terms of an apprenticeship agreement to fit any particular situation. Both the age of the apprentice and the length of the indenture agreement made the arrangement vulnerable to abuse. However, it proved extremely difficult for lawmakers to specify the precise circumstances warranting statutory indentures without making them unattractive. In good measure this was because representatives of labor and capital seldom agreed when it came to public policy regarding skilled employment. Yet the need for some policy increased, especially after the labor scarcities created by the Civil War.

Companies, unions and governments all sought solutions to the shortages of skills caused by the Civil War. In Boston and Chicago, for example, women were recruited to perform skilled typography work that had previously been restricted to men. The Connecticut legislature authorized a new company to recruit and contract skilled workers from abroad. Other states either wrote new apprenticeship laws or experimented with new ways of training workers. The success of craft unionism was itself an indication of the dearth of organizations capable of implementing skill standards. Virtually any new action challenged the authority of either labor or capital, leading one or the other to contest them. Jacoby (1996) argues that the most important new strategy involved the introduction of short trade school courses intended to substitute for apprenticeship. Schooling fed employers’ hope that they might sidestep organized labor’s influence in determining the supply of skilled labor.

Independent of the expansion of schooling, issues pertaining to apprenticeship contract rights gained in importance. Firms like Philadelphia’s Baldwin Locomotive held back wages until contract completion in order to keep their apprentices with them. The closer young apprentices were bound to their employers, the less viable became organized labor’s demand to consult over or to unilaterally control the expansion or contraction of training. One-sided long-term apprenticeship contracts provided employers other advantages as well. Once apprentices were under contract, competitors and unions could be legally enjoined from “enticing” them into breaking their contracts. Although employers rarely brought suit against each other for enticement of their apprentices, their associations, like the Metal Manufacturers’ Association in Philadelphia, prevented apprentices from leaving one master for another by requiring the consent and recommendation of member employers (Harris, 2000). Employer associations could, in this way, effectively blacklist union supporters and require apprentices to break strikes.

These employer actions did not occur in a vacuum. Many businessmen faulted labor for tying their hands when responding to increased demands for labor. Unions lost support among the working class when they restricted the number of apprentices an employer could hire. Such restrictions frequently involved ethnic, racial and gender preferences that locked minorities out of the well-paid crafts. Organized labor’s control was, nonetheless, less effective than it would have liked: it could not restrict non-union firms from taking on apprentices, nor was it able to stem the flow of half-trained craftsmen from the small towns where apprenticeship standards were weak. Yet by fines, boycotts, and walkouts organized labor did intimidate workers and firms who disregarded its rules. Such actions failed to endear it to less skilled workers, who often regarded skilled unionists as a conservative aristocracy only slightly less onerous, if at all, than big business.

This weakness in labor’s support made it vulnerable to Colonel Richard T. Auchmuty’s New York Trade School. Auchmuty’s school, begun in 1881, became the leading institution challenging labor’s control over its own supply. The school was designed and marketed as an alternative to apprenticeship, and Auchmuty encouraged its use as a weapon in “the battle for the boys” waged by New York City plumbers in 1886-87. Those years mark the starting point for a series of skirmishes between organized capital and labor in which momentum seesawed back and forth. Those battles encouraged public officials and educators to get involved. Where the public sector took greater interest in training, schooling more frequently supplemented, rather than replaced, on-the-job apprenticeship training. Public involvement also helped formalize the structure of trade learning in ways that apprenticeship laws had failed to do.

The Modern Era

In 1917, with the benefit of prior collaborations involving the public sector, a coalition of labor, business and social services secured passage of the Smith-Hughes Law to provide federal aid for vocational education. Despite this broad support, it is not clear that the bill would have passed had it not been for America’s entry into the First World War and the attendant priority for an increase in the supply of skilled labor. Prior to this law, demands for skilled labor had been partially muted by new mass production technologies and scientific management, both of which reduced industry’s reliance upon craft workers. War changed the equation.

Not only did war spur the Wilson administration into training more workers, it also raised organized labor’s visibility in industries, like shipbuilding, where it had previously been locked out. Under Smith-Hughes, cities as distant as Seattle and New York invited unions to join formal training partnerships. In the twenties, a number of school systems provided apprentice extension classes for which prior employment was made a prerequisite, thereby limiting public apprenticeship support to workers who were already unionized. These arrangements made it easier for organized labor to control entry into the craft. This was most true in the building trades, where the unions remained well-organized throughout the decade. The fast-expanding factory sector, however, more successfully reduced union influence. The largest firms, such as the General Electric Company, had long since set up their own non-union, and usually informal, apprenticeship plans. Large firms able to provide significant employment security, like those that belonged to the National Association of Corporation Schools, typically operated in a union-free environment, which enabled them to establish training arrangements that were flexible and responsive to their needs.

The depression of the early thirties stopped nearly all training. Moreover, the prior industrial transformation shifted power within organized labor from the American Federation of Labor’s bedrock craft unions to the Congress of Industrial Organizations. With this change labor increasingly emphasized pay equality by narrowing skill differentials and accordingly de-emphasized training issues. Even so, by the late 1930s shortages of skilled workers were again felt, leading to a national apprenticeship plan. Under the Fitzgerald Act (1937), apprenticeship standards were formalized in indentures that specified the kinds and quantity of training to be provided, as well as the responsibilities of joint labor-management apprenticeship committees. Standards helped minimize incentives to abuse low-wage apprentices through inadequate training and advancement. Nationally, however, the percentage of apprentices remained very small, and young people increasingly chose formal education rather than apprenticeship to open opportunity. While the Fitzgerald Act worked to protect labor’s immediate interests, very few firms chose formal apprenticeships when less structured training relationships were possible.

This system persisted through the heyday of organized labor in the forties and fifties, but began to come undone in the late sixties and seventies, particularly when Civil Rights groups attacked the racial and gender discrimination too often used to ration scarce apprenticeship opportunities. Discrimination was sometimes passive, occurring as the result of preferential treatment extended to the sons and friends of craft workers, while in other instances it involved active and deliberate policies aimed at exclusion (Hill, 1968). Affirmative action accords and court orders have forced unions and firms to provide more apprenticeship opportunities for minorities.

Along with the declining influence of labor and civil rights organizations, work relations appear to have changed as we begin the new millennium. Forms of labor contracting that provide fewer benefits and less security are on the rise. Incomes have once again become more stratified by education and skill level, making training a much more important issue. Gary Becker’s (1964) work on human capital theory has encouraged businessmen and educators to rethink the economics of training and apprenticeship. Conceptualizing training as an investment, the theory suggests that enforceable long-term apprenticeships enable employers to increase their investments in the skills of their workers. Binding indentures are rationalized as efficient devices to prevent youths from absconding with the capital employers have invested in them. Armed with this understanding, policy makers have increasingly permitted and encouraged arrangements that look more like older-style, employer-dominated apprenticeships. Whether this is the beginning of a new era for apprenticeship, or merely a return to the prior battles over the abuses of one-sided employer control, only time will tell.
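
The investment logic sketched in the preceding paragraph can be illustrated with a simple present-value condition; the notation and the specific condition below are illustrative and are not drawn from Becker (1964) or the other works cited:

\[
C \;\le\; \sum_{t=1}^{T} \frac{p\,(MP_t - w_t)}{(1+r)^t},
\]

where \(C\) is the employer’s up-front training cost, \(MP_t\) and \(w_t\) are the trained worker’s marginal product and wage in year \(t\), \(r\) is the discount rate, \(T\) is the length of the indenture, and \(p\) is the probability that the apprentice stays for the full term. A binding, enforceable indenture raises \(p\) and lengthens \(T\), making it more likely that the condition is satisfied and hence that the employer finds the training investment worthwhile.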

References and further reading:

Becker, Gary. Human Capital. Chicago: University of Chicago Press, 1964.

Davies, Margaret. The Enforcement of English Apprenticeship, 1563-1642. Cambridge, MA: Harvard University Press, 1956.

Douglas, Paul. American Apprenticeship and Industrial Education. New York: Columbia University Press, 1921.

Elbaum, Bernard. “Why Apprenticeship Persisted in Britain but Not in the United States.” Journal of Economic History 49 (1989): 337-49.

Epstein, S. R. “Craft Guilds, Apprenticeship and Technological Change in Pre-industrial Europe.” Journal of Economic History 58, no. 3 (1998): 684-713.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Hamilton, Gillian. “The Decline of Apprenticeship in North America: Evidence from Montreal.” Journal of Economic History 60, no. 3 (2000): 627-664.

Harris, Howell John. Bloodless Victories: The Rise and Decline of the Open Shop Movement in Philadelphia, 1890-1940. New York: Cambridge University Press, 2000.

Hill, Herbert. “The Racial Practices of Organized Labor: The Contemporary Record.” In The Negro and The American Labor Movement, edited by Julius Jacobson. Garden City, New York: Doubleday Press, 1968.

Jacoby, Daniel. “The Transformation of Industrial Apprenticeship in the United States.” Journal of Economic History 51, no. 4 (1991): 887-910.

Jacoby, Daniel. “Plumbing the Origins of American Vocationalism.” Labor History 37, no. 2 (1996): 235-272.

Licht, Walter. Getting Work: Philadelphia, 1840-1950. Cambridge, MA: Harvard University Press, 1992.

Quimby, Ian M.G. “Apprenticeship in Colonial Philadelphia.” Ph.D. Dissertation, University of Delaware, 1963.

Rorabaugh, William. The Craft Apprentice from Franklin to the Machine Age in America. New York: Oxford University Press, 1986.

Citation: Jacoby, Daniel. “Apprenticeship in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. URL http://eh.net/encyclopedia/apprenticeship-in-the-united-states/

Historical Anthropometrics

Timothy Cuff, Westminster College

Historical anthropometrics is the study of patterns in human body size and their correlates over time. While social researchers, public health specialists and physical anthropologists have long utilized anthropometric measures as indicators of well-being, only within the past three decades have historians begun to use such data extensively. Adult stature is a cumulative indicator of net nutritional status over the growth years, and thus reflects command over food and access to healthful surroundings. Since expenditures for these items comprised such a high percentage of family income for historical communities, mean stature can be used to examine changes in a population’s economic circumstances over time and to compare the well-being of different groups with similar genetic height potential. Anthropometric measures are available for portions of many national populations as far back as the early 1700s. While these data often serve as complements to standard economic indicators, in some cases they provide the only means of assessing historical economic well-being, as “conventional” measures such as per capita GDP, wage and price indices, and income inequality measures have been notoriously spotty and problematic to develop. Anthropometric-based research findings to date have contributed to the scholarly debates over mortality trends, the nature of slavery, and the outcomes of industrialization and economic development. Height has been the primary indicator utilized to date. Other indicators include height-standardized weight indices, birth weight, and age at menarche. Potentially even more important, historical anthropometrics broadens the understanding of “well-being” beyond the one dimensional “ruler” of income, providing another lens through which the quality of historical life can be viewed.

This article:

  • provides a brief background of the field including a history of human body measurement and analysis and a description of the biological foundations for historical anthropometrics,
  • describes the current state of the field (along with methodological issues) and future directions, and
  • provides a selective bibliography.

Anthropometrics: Historical and Bio-Medical Background

The Evolution of Body Measurement and Analysis in Context

The measurement and description of the human form in the West date back to the artists of classical civilizations, but the rationale for systematic, large-scale body measurement and record keeping emerged out of the needs of early modern military organizations. By the mid-eighteenth century, height commonly provided a means of classifying men into military units and identifying them within those units, and the procedures for measuring individuals entering military service were well established. The military’s need to identify recruits has provided most historical measurements of young men.

Scientific curiosity in the eighteenth century also spurred development of the first textbooks on human growth, although they were more concerned with growth patterns throughout life than with stature differences across groups or over time. In the nineteenth century class differences in height were easily observable in England. The moral outrage generated by the “tiny children” (Charles Dickens’ “Oliver Twists”) along with the view that medicine had a preventive as well as a curative function, meant that anthropometry was directed primarily at the poor, especially children toiling in the factories of English and French industrial cities. Later, fear in Britain over the “degeneration” of its men and their potential as an effective fighting force provided motivation for large-scale anthropometric surveys, as did efforts evolving out of the child-welfare movement. The early-twentieth century saw the establishment of a series of longitudinal population surveys (which follow individuals as they age) in North America and in Europe. In some cases this work was directed toward the generation of growth standards, while other efforts evaluated social-class differences among children. Such studies can be seen as transitional steps between contemporary and historical anthropometrics. Since the 1950s, anthropometry has been utilized for a variety of purposes in both the developed and underdeveloped world. Population groups have been measured in order to refine growth standards, to monitor the nutritional status of individuals and populations during famines and political disturbances, and to evaluate the effectiveness of economic development programs.

Anthropometric studies today can be classified as one of three types. Auxologists perform basic research, collecting body measurements over the human life cycle to further detail standards of physical development for twenty-first century populations. The second focus, a continuation of nineteenth century work, documents the living standards of children often supporting regulatory legislation or government aid policies. The third direction is historical anthropometrics. Economists, historians, and anthropologists specializing in this field seek to assess, in physical terms, the well-being of previous societies and the factors which influenced it.

Human Growth and Development: The Biological Foundations of Historical Anthropometrics

While historical anthropometric research is a relatively recent development, an extensive body of medical literature relating nutrition and epidemiological conditions to physical growth provides a strong theoretical underpinning. Bio-medical literature, along with the World Health Organization, describes mean stature as one of the best measures of overall health conditions within a society.

Final attained height and height by age both result from a complex interaction of genetic endowment and environmental effects. At the level of the individual, genetics is a strong but not exclusive influence on the determination of final height and of growth patterns. Genetics is most important when net nutrition is optimal. However, when evaluating differences among groups of people in sub-optimal nutritional circumstances environmental influences predominate.

The same nutritional regime can result in different final stature for particular individuals, because of genetic variation in the ability to continue growing in the face of adverse nutritional circumstances, epidemiological environments, or work requirements. However, the genetic height potential of most Europeans, Africans, and North Americans of European or African ancestry is comparable; i.e., under equivalent environmental circumstances the groups have achieved nearly identical mean adult stature. For example, in many parts of rural Africa, mean adult heights today are similar to those of Africans of 150 years ago, while well-fed urban Africans attain final heights similar to current-day Europeans and North Americans of European descent. Differences in nutritional status do result in wide variation in adult height even within populations of the same genetic make-up. For example, individuals from higher socio-economic classes tend to be taller than their lower class counterparts whether in impoverished third-world countries or in the developed nations.

Height is the most commonly utilized, but not the only, anthropometric indicator of nutritional status. The growth profile is another. Environmental conditions, while affecting the timing of growth (the ages at which accelerations and decelerations in growth rates occur), do not affect the overall pattern (the sequence in which growth/maturation events occur). The body seems to be self-stabilizing, postponing growth until caloric levels will support it and maintaining genetically programmed body proportions more rigidly than size potential. While final adult height and length of the growth period are not absolutely linked, populations which stop growing earlier usually, although not universally, end up being taller. Age at menarche, birth weight, and weight-for-height are also useful. Age at menarche (i.e. the first occurrence of menstruation) is not a measure of physical size, but of sexual maturation. Menarche generally occurs earlier among well-nourished women. Average menarcheal age in the developed West is about 13 years, while in the middle of the nineteenth century it was between 15 and 16 years among European women. Areas which have not experienced nutritional improvement over the past century have not witnessed decreases in the age at menarche. Infant birth weight, an indicator of long-term maternal nutritional status, is influenced by the mother’s diet, work intensity, quality of health care, maternal size and the number of children she has delivered, as well as the mother’s health practices. The level of economic inequality and social class status are also correlated with birth weight variation, although these variables reflect some of the factors noted above. However, because the mother’s diet and health status are such strong influences on birth weight, it provides another useful means of monitoring women’s well-being. Height-for-weight indices, particularly the body mass index (BMI), have seen some use by anthropometric historians. Contemporary bio-medical research which links BMI levels and mortality risk hints at the promise which this measure might hold for historians. However, the limited availability of weight measurements before the mid-nineteenth century will limit the studies which can be produced.
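
Where weight as well as height survives in the records, the index itself is simple to compute. The Python sketch below is a minimal illustration using the standard weight-for-height formula; the recruit’s measurements are invented and are not drawn from any data set discussed here.

```python
# A minimal illustrative sketch: the body mass index used by anthropometric
# historians is weight in kilograms divided by the square of height in meters.
# The recruit's measurements below are invented for demonstration.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: kg / m^2."""
    return weight_kg / height_m ** 2

height_m = 67 * 0.0254      # a hypothetical 67-inch recruit, converted to meters
weight_kg = 140 * 0.453592  # a hypothetical 140-pound weight, converted to kilograms
print(f"BMI = {bmi(weight_kg, height_m):.1f}")  # prints roughly 21.9
```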

Improvements in net nutritional status, both across wide segments of the population in developed countries and within urban areas of less-developed countries (LDCs), are generally accepted as the most salient influence on growth patterns and final stature. The widely experienced improvement in net nutrition which was apparent in most of the developed world across most of the twentieth century and more recently in the “modern” sector of some LDCs has led to the secular trend, a unidirectional movement toward greater stature and faster maturation. Before the twentieth century, height cycling without a distinct direction was the dominant historical pattern. (Two other sources of stature increase have been hypothesized but have garnered little support among the medical community: the increased practice of infantile smallpox vaccination and heterosis (hybrid vigor), i.e. varietal cross-breeding within a species which produces offspring who are larger or stronger than either parent.)

The Definition and Determination of Nutritional Status

“Nutritional status” is a term critical to an understanding of anthropometrics. It encompasses more than simply diet, i.e. the intake of calories and nutrients, and is thus distinct from the more common term “nutrition.” While nutrition refers to the quantity and quality of food inputs to the human biological system, it makes no reference to the nutrient demands placed on the individual, and hence to the amounts needed for healthy functioning. Nutritional status, or synonymously “net nutrition,” refers to the balance of nutrient intake against the demands placed on those nutrients. While work intensity is the most obvious demand, it is just one of many. Energy is required to resist infection. Pregnancy adds caloric and nutrient demands, as does breast-feeding. Calories expended in any of these fashions are available neither for basal metabolism nor for growth. The difference between nutrition and nutritional status/net nutrition is important for anthropometrics, because it is the latter, not the former, for which auxological measurements are a proxy.

Human biologists and medical scientists generally agree that within genetically similar populations net nutrition is the primary determinant of adult physical stature. Height, as Bielicki notes, is “socially induced variation.” Figure 1 indicates the numerous channels of influence on the final adult stature of any individual. Anthropometric indicators reflect the relative ease or difficulty of acquiring sufficient nutrients to provide for growth in excess of the immediate needs of the body. Nutritional status and physical stature clearly are composite measures of well-being linked to economic processes. However, the link is mediated through a variety of social circumstances, some volitional, others not. Hence, anthropometric historians must evaluate each situation within its own economic, cultural, and historical context.

In earlier societies, and in some less developed countries today, access to nutrients was determined primarily by control of arable land. As markets for food developed and urban living became predominant, for increasing percentages of the population, access to nutrients depended upon the ability to purchase food, i.e. on real income. Additionally, food allocation within the family is not determined by markets but by intra-household bargaining as well as by tastes and custom. For example, in some cultures households distribute food resources so as to ensure nutritional adequacy for those family members engaged in income or resource-generating activity in order to maximize earning power. The handful of studies which include historical anthropometric data for women reveal that stature trends by gender do not always move in concert. Rather, in periods of declining nutritional status, women often exhibited a reduction in stature levels before such changes appeared among males. This is somewhat paradoxical because biologists generally argue that women’s growth trajectories are more resistant to a diminution in nutritional status than are those of men. Though too little historical research has been done on this issue to speak with certainty, the pattern might imply that, in periods of nutritional stress, women bore the initial brunt of deprivation.

Other cultural practices, including the high status accorded to the use of certain foods, such as white flour, polished rice, tea or coffee may promote greater consumption of nutritionally less valuable foods among those able to afford them. This would tend to reduce the resultant stature differences by income. Access to nutrients also depends upon other individual choices. A small landholder might decide to market much of his farm’s high-value, high-protein meat and dairy products, reducing his family’s consumption of these nutritious food products in order to maximize money income. However, while material welfare would increase, biological welfare, knowingly or unknowingly, would decline.

Disease-exposure variation occurs as a result of some factors under the individual’s control and other factors which are determined at the societal level. Pathogen prevalence and potency and the level of community sanitation are critical factors which are not directly affected by individual decision making. However, housing and occupation are often individually chosen and do help to determine the extent of disease exposure. Once transportation improvements allow housing segregation based on socio-economic status to occur within large urban areas, residence location can become an important influence. However, before such segregation occurred, as in the mid-nineteenth-century United States, urban childhood mortality levels were more influenced by the number of children in a family than by parental occupation or socio-economic status. The close proximity of the homes of the wealthy and the poor seems to have created a common level of exposure to infectious agents and equally poor sanitary conditions for children of all economic classes.

Work intensity, another factor determining nutritional status, is a function of the age at which youth enter the labor force, educational attainment, the physical exertion needed in a chosen occupation, and the level of technology. There are obvious feedback effects from current nutritional status to future nutritional status. A low level of nutritional status today might hinder full-time labor-force participation, and result in low incomes, poor housing, and substandard food consumption in subsequent periods as well, thereby reinforcing the cycle of nutritional inadequacy.

Historical Anthropometrics

Early Developments in the Field

Le Roy Ladurie’s studies of nineteenth-century French soldiers published in the late 1960s and early 1970s are recognized as the first works in the spirit of modern historical anthropometrics. He documented that stature among French recruits varied with their socio-economic characteristics. In the U.S., the research was carried forward in the late 1970s, much of it based on nineteenth-century records of slaves transported from the upper to the lower South. Studies of Caribbean slaves followed.

In the 1980s numerous anthropometric works were generated in connection with a National Bureau of Economic Research (NBER)-directed study of American and European mortality trends from 1650 to the present, coordinated by Robert W. Fogel. Motivated in great part by the desire to evaluate Thomas McKeown’s hypothesis that improvements in nutrition were the critical component in mortality declines in the seventeenth through the nineteenth centuries, the project led to the creation of numerous large anthropometric databases. These have been the starting point for the analysis of trends in physical stature and net nutritional status on both sides of the Atlantic. While most historical anthropometric studies published in the U.S. during the early and mid-1980s were either outgrowths of the NBER project or were conducted by students of Robert Fogel, such as Richard Steckel and John Komlos, mortality trends were no longer the sole focus of historical anthropometrics. Anthropometric statistics were used to analyze the effect of industrialization on the populations experiencing it, as well as the characteristics of slavery in the United States. The data sources were primarily military records or documents relating to slaves. As the 1980s became the 1990s the geographic range of stature studies moved beyond Europe and North America to include Asia, Australia, and Africa. Other data sources were utilized. These included records from schools and utopian communities, certificates of freedom for manumitted slaves, voter registration cards, newspaper advertisements for runaway slaves and indentured servants, insurance applications, and a variety of prison inmate records. The number of anthropometric historians also expanded considerably.

Findings to Date

Major achievements to date in historical anthropometrics include 1) the determination of the main outlines of the trend in physical stature in Europe and North America between the eighteenth and twentieth centuries, and 2) the emergence of several well-supported, although still debated, hypotheses pertaining to the relationship between height and the economic and social developments which accompanied modern economic growth in these centuries.

Historical research on human height has indicated how much healthier the New World environment was compared to that of Europe. Europeans who immigrated to North America, on average, obtained a net nutritional status far better than that which was possible for them to attain in their place of birth. Eighteenth century North Americans attained mean heights not achieved by Europeans until the twentieth century. The combination of lower population density, lower levels of income inequality, and greater food resources bestowed a great benefit upon those growing up in North America. This advantage is evident not only in adult heights but also in the earlier timing of the adolescent growth spurt, as well as the earlier attainment of final height.

Table 1
Mean Heights of Adult Males (in inches)

Group                                Date of measurement   Mean height
North America, European ancestry     1775-1783             68.1
North America, European ancestry     1861-1865             68.5
North America, European ancestry     1943-1944             68.1
North America, African ancestry      1811-1861             67.0
North America, African ancestry      1943-1944             67.9
Europe, Hungary                      1813-1835             64.2
Europe, England                      1816-1821             65.8
Europe, Sweden                       1843-1886             66.3

Sources: U.S. whites, 1775-1783: Kenneth L. Sokoloff and Georgia C. Villaflor, “The Early Achievement of Modern Stature in America,” Social Science History 6 (1982): 453-481. U.S. whites, 1861-65: Robert Margo and Richard Steckel, “Heights of Native-Born Whites during the Antebellum Period,” Journal of Economic History 43 (1983): 167-174. U.S. whites and blacks, 1943-44: Bernard D. Karpinos, “Height and Weight of Selective Service Registrants Processed for Military Service during World War II,” Human Biology 40 (1958): 292-321, Table 5. U.S. blacks, 1811-1861: Robert Margo and Richard Steckel, “The Height of American Slaves: New Evidence on Slave Nutrition and Health,” Social Science History 6 (1982): 516-538, Table 1. Hungary: John Komlos. Nutrition and Economic Development in the Eighteenth Century Habsburg Monarchy, Princeton: Princeton University Press, 1989, Table 2.1, 57. Britain: Roderick Floud, Kenneth Wachter, and Annabel Gregory, Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980, Cambridge: Cambridge University Press, 1990, Table 4.1, 148. Sweden: Lars G. Sandberg and Richard Steckel, “Overpopulation and Malnutrition Rediscovered: Hard Times in 19th-Century Sweden,” Explorations in Economic History 25 (1988): 1-19, Table 2, 7.

Note: Dates refer to dates of measurement.

Stature Cycles in Europe and America

The early finding that there was not a unidirectional upward trend in stature since the 1700s startled researchers, whose expectations were based on recent experience. Extrapolating backward, Floud, Wachter, and Gregory note that such surprise was misplaced, for if the twentieth century’s rate of height increase had been occurring for several centuries, medieval Europeans would have been dwarfs or midgets. Instead, in Europe cycles in height were evident. Though smaller in amplitude than in Europe, stature cycling was a feature of the American experience, as well. At the time of the American Revolution, the Civil War, and World War II, the mean height of adult, native-born white males was a fraction over 68 inches (Table 1), but there was some variation in between these periods with a small decline in the years before the Civil War and perhaps another one from 1860 into the 1880s. Just before the turn of the twentieth century, mean stature began its relatively uninterrupted increase which continues to the present day. These findings are based primarily on military records drawn from the early national army, Civil War forces, West Point Cadets, and the Ohio National Guard, although other data sets show similar trends. The free black population seems to have experienced a downturn in physical stature very similar to that of whites in the pre-Civil War period. However, an exception to the antebellum diminution in nutritional status has been found among slave men.

Per Capita Income and Height

In addition to the cycling in height, anthropometric historians have documented that the intuitively anticipated positive correlation between mean height and per capita income holds at the national level in the twentieth century. Steckel has shown that, in cross-national comparison, the correlation between height and per capita income is as high as .84 to .90. However, since per capita income is highly correlated with a series of other variables that also affect height, the exact pathway through which income affects height is not fully clear. Among the factors which help to explain the variation are better diet, medicine, improvements in sanitary infrastructure, longer schooling, more sedentary life, and better housing. Intense work regimes and psycho-social stress, both of which affect growth negatively, might also be mitigated by greater per capita income. However, prior to the twentieth century the relationship between height and income was not monotonic. U.S. troops during the American Revolution were nearly as tall as U.S. soldiers sent to Europe and Japan in the 1940s, despite the fact that per capita income in the earlier period was substantially below that in the latter. Similarly, while per capita income in the U.S. in the late 1770s was below that of the British, the American troops had a height advantage of several inches over their British counterparts in the War of Independence.

Height and Income Inequality

The level of income inequality also has a powerful influence on mean heights. Steckel’s analysis of data for the twentieth century indicates that a 0.1 decrease in the Gini coefficient (indicating greater income equality) is associated with a gain in mean stature of about 3.7 cm (1.5 inches). In societies with great inequality, increases in per capita income have little effect on average stature if the gains accrue primarily to the wealthier segments of the society. Conversely, even without changes in average national per capita income, a reduction in inequality can have similar positive impact upon the stature and health of those at the lower rungs of the income ladder.
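
Because the Gini coefficient may be unfamiliar, the short Python sketch below shows how it can be computed from a list of incomes. The income figures are invented and the code is only an illustration of what a 0.1 movement in the coefficient refers to; it is not Steckel’s own data or procedure.

```python
# Illustrative only: the incomes below are invented, and this is not the data
# or estimation procedure behind the cross-national result cited above.
import numpy as np

def gini(incomes):
    """Gini coefficient of a list of incomes, via the rank formula on sorted values."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * x) - (n + 1) * x.sum()) / (n * x.sum())

unequal = [100, 200, 300, 400, 5000]        # most income held by one household
more_equal = [900, 1000, 1100, 1200, 1800]  # same total income, spread more evenly
print(f"Gini, unequal distribution:    {gini(unequal):.2f}")     # about 0.67
print(f"Gini, more equal distribution: {gini(more_equal):.2f}")  # about 0.13
```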

The high level of social inequality at the onset of modern economic growth in England is exemplified by the substantial disparity between the height of students of the Sandhurst Royal Military Academy, an elite institution, and the Marine Society, a home for destitute boys in London. The difference in mean height at age fourteen exceeded three inches in favor of the gentry. In some years the gap was even greater. Komlos has documented similar findings elsewhere: regardless of location, boys from “prestigious military schools in England, France, and Germany were much taller than the population at large.” A similar pattern existed in the nineteenth-century U.S. However, the social gap in the U.S. was minuscule compared to that prevailing in the Old World. Stature also varied by occupational group. In eighteenth- and nineteenth-century Europe and North America, white-collar and professional workers tended to be significantly taller than laborers and unskilled workers. However, farmers, being close to the source of nutrients and with fewer interactions with urban disease pools, tended to be the tallest, though their advantage disappeared by the twentieth century.

Regional and Rural-Urban Differences

Floud, Wachter, and Gregory have shown that, in early nineteenth century Britain, regional variation in stature dwarfed occupational differences. In 1815, Scotsmen, rural and urban, as well as the Irish, were about one-half an inch taller than the non-London urban English of the day. The rural English were slightly shorter, on average, than Englishmen born in small and medium sized towns. Londoners, however, had a mean height almost one-third of an inch less than other urban dwellers in England and more than three-quarters of an inch below the Irish or the Scots. A similar pattern held among convicts transported to New South Wales, Australia, except that the stature of the rural English was well above the average for all other English transported convicts. Floud, Wachter, and Gregory show a trend of convergence in height among these groups after 1800. The tendency for low population density rural areas in the nineteenth century to be home to the tallest individuals was apparent from the Habsburg Monarchy to Scotland, and in the remote northern regions of late nineteenth-century Sweden and Japan as well. In colonial America the rural-urban gradient did not exist. As cities grew, the rural born began to display a stature advantage over their urban brethren. This divergence persisted into the nineteenth century, and disappeared in the early twentieth century, when the urban-born gained a height advantage.

The Early-Industrial-Growth and Antebellum Puzzles

These patterns of stature variation have been put into a framework in both the European and the American contexts. Respectively they are known as the “early-industrial-growth puzzle” and the “Antebellum puzzle.” The commonality which has been identified is that in the early stages of industrialization and/or market integration, even with rising per capita incomes, the biological well-being of the populations undergoing such change does not, necessarily, improve immediately. Rather, for at least some portions of the population, biological well-being declined during this period of economic growth. Explanations for these paradoxes (or puzzles) are still being investigated and include: rising income inequality, the greater spread of disease through more thoroughly developed transportation and marketing systems and urban growth, the rising real price of food as population growth outstripped the agricultural system’s ability to provide, and the choice of farmers to market rather than consume high value/high protein crops.

Slave Heights

Research on slave heights has provided important insight into the living standards of these bound laborers. Large differences in stature have been documented between slaves on the North American mainland and those in the Caribbean. Adult mainland slaves, both women and men, were approximately two inches taller than those in the West Indies throughout the eighteenth and nineteenth centuries. Steckel argues that the growth pattern and infant mortality rates of U.S. slave children indicate that they were moderately to severely malnourished, with mean heights for four to nine year olds below the second percentile of modern growth standards and with mortality rates twice those estimated for the entire United States population. Although below the fifth percentile throughout childhood, as adults these slaves were relatively tall by nineteenth-century standards, reaching about the twenty-fifth percentile of today’s height distribution, taller than most European populations of the time.

Height’s Correlation with Other Biological Indicators

The evaluation of McKeown’s hypothesis that much of the modern decline in mortality rates could be traced to improvements in nutrition (food intake) was one of the early rationales for the modern study of historical stature. Subsequent work has presented evidence for the parallel cycling of height and life expectancy in the United States during the nineteenth century. The relationship between the body-mass index, morbidity, and mortality risk within historical populations has also been documented. Along a similar line, Sandberg and Steckel’s data on Sweden have pointed out the parallel nature of stature trends and childhood mortality rates in the mid-nineteenth century.

Economic and social history are not the only two fields which have felt historical anthropometrics’ impact. North American slave height-by-age profiles developed by Steckel have been used by auxologists to exemplify the range of possible growth patterns among humans. Based on findings within the biological sciences, historical studies of stature have come full circle and are providing those same sciences with new data on human physical potential.

Methodological Issues

Accuracy problems in military-based data sets arise predominantly from the carelessness of the measurer or from intentional misreporting of data rather than from lack of orthodox practice. Inadequate concern for accuracy can most often be noticed in heaping (height observations rounded to whole feet, six-inch increments, or even-numbered inches) and in the lack of fractional measurements. These “rounding” errors tend to be self-canceling. Of greater concern is intentional misreporting of either height or age, because minimum stature and age restrictions were often applied to military recruits. Young men, eager to discover the “romance” of military life or receive the bounty which sometimes accompanied enlistment, were not impervious to slight fabrication of their age. Recruiting officers, hoping to meet their assigned quotas quickly, might have been tempted to round measurements up to the minimum height requirement. Hence, it is not uncommon to find heaping of both height and age at the respective minima.
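
A simple screen for heaping, sketched below in Python with invented measurements, compares the share of recorded heights falling on even inches with the roughly even split expected from a smooth height distribution. This is an illustration only, not a documented procedure from the literature.

```python
# Illustrative only: the recorded heights are invented. Under a smooth height
# distribution roughly half of the observations should fall on even inches;
# a much larger share suggests rounding (heaping) by the measurers.
import numpy as np

recorded = np.array([66, 68, 68, 70, 67, 68, 72, 68, 66, 70, 69, 68, 66, 70, 68, 71])
even_share = np.mean(recorded % 2 == 0)
print(f"Share of heights recorded at even inches: {even_share:.0%}")  # about 81% here
```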

For anthropometric historians, the issue of the representativeness of the population under study is similar to that for any social historian, but several specific caveats are appropriate when considering military samples. In time of peace, military recruits tend to be less representative of the general population than are wartime armies. The military, with fewer demands for personnel, can be more selective, often instituting more stringent height minima, and occasionally maxima, for recruits. Such policies, as well as the self-interested behaviors noted above, require those who would use military data sets to evaluate and potentially adjust the data to account for the observations missing due to either left or right tail truncation. A series of techniques to account for such difficulties in the data have been developed, although there is still debate over the most appropriate technique. Other data sets also exhibit selectivity biases, though of a different nature. Prison registers clearly do not provide a random sample of the population. The filter, however, is not based on size or the desire for “exciting” work, but rather on the propensity for criminal activity and on the enforcement mechanisms of the judicial system. The representativeness of anthropometric samples can also be affected by previous selection by the Grim Reaper. Within Afro-Caribbean slave populations in Trinidad, death rates were significantly higher for shorter individuals (at all ages) than for the taller ones. The result is that a select group of more robust and taller individuals remained alive for eventual measurement.
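
One common way to think about the truncation problem is to treat the surviving observations as draws from a height distribution cut off at the minimum standard and to recover the underlying mean by maximum likelihood. The Python sketch below illustrates the idea with simulated heights and an assumed 64-inch minimum; it is a minimal demonstration of the principle, not the specific estimator used in any study cited here.

```python
# Illustrative only: simulated heights and an assumed 64-inch minimum standard.
# The naive sample mean of the surviving recruits overstates the population mean;
# fitting a normal distribution truncated from below recovers it.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
true_mu, true_sigma, minimum = 67.0, 2.5, 64.0
heights = rng.normal(true_mu, true_sigma, 20_000)
observed = heights[heights >= minimum]  # recruits below the minimum were never recorded

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    # Density of a normal variable truncated from below at `minimum`
    return -np.sum(norm.logpdf(observed, mu, sigma) - norm.logsf(minimum, mu, sigma))

naive_mean = observed.mean()  # biased upward by the missing left tail
fit = minimize(neg_log_likelihood, x0=[naive_mean, observed.std()], method="Nelder-Mead")
print(f"Naive mean of surviving observations: {naive_mean:.2f} inches")
print(f"Truncation-corrected estimate:        {fit.x[0]:.2f} inches (true mean {true_mu})")
```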

One difficulty faced by anthropometric historians is the association of this research, more imagined than real, with previous misuses of body measurement. Nineteenth century American phrenologists used skull shape and size as a means of determining intelligence and as a way of justifying the enslavement of African-Americans. The Bertillon approach to evaluating prison inmates included the measurement and classification of lips, ears, feet, nose, and limbs in an effort to discern a genetic or racial basis for criminality. The Nazis attempted to breed the perfect race by eliminating what they perceived to be physically “inferior” peoples. Each, appropriately, has made many squeamish in regard to the use of body measurements as an index of social development. Further, while the biological research which supports historical anthropometrics is scientifically well founded and fully justifies the approach, care must be exercised to ensure that the impression is not given that researchers either are searching for, or promoting, an “aristocracy of the tall.” Being tall is not necessarily better in all circumstances, although recent work does indicate a series of social and economic advantages do accrue to the tall. However, for populations enduring an on-going sub-optimal net nutritional regime, an increase in mean height does signify improvement in the net nutritional level, and thus the general level of physical well-being. Untangling the factors responsible for change in this social indicator is complicated and height is not a complete proxy for the quality of life. However, it does provide a valuable means of assessing biological well-being in the past and the influence of social and economic developments on health.

Future Directions

Historical anthropometrics is maturing. Over the past several years a series of state-of-the-field articles and anthologies of critical works have been written or compiled. Each summarizes past accomplishments, consolidates isolated findings into more generalized conclusions, and/or points out the next steps for researchers. In 2004, the editors of Social Science History devoted an entire volume to anthropometric history, drawing upon both current work and remembrances of many of the field’s early and prominent researchers, including an integrative essay by Komlos and Baten. Anthropometric history now has its own journal, as John Komlos, who has established a center for historical anthropometrics in Munich, created Economics and Human Biology, “devoted to the exploration of the effect of socio-economic processes on human beings as biological organisms.” Early issues highlight the wide geographic, temporal, and conceptual range of historical anthropometric studies. Another project which shows the great range of current effort is Richard Steckel’s work with anthropologists to characterize very long-term patterns in the movement of mean human height. Already this collaboration has produced The Backbone of History: Health and Nutrition in the Western Hemisphere, a compilation of essays documenting the biological well-being of New World populations beginning in 5000 B.C. using anthropological evidence. Its findings, consistent with those of some other recent anthropological studies, indicate a decline in health status for members of Western Hemisphere cultures in the pre-Columbian period as these societies began the transition from economies based on hunting and gathering to ones relying more heavily on settled agriculture. Steckel has been working to expand this approach to Europe via a collaborative and interdisciplinary project funded in part by the U.S. National Science Foundation, titled “A History of Health in Europe from the Late Paleolithic Era to the Present.”

Yet even with these impressive steps, continued work, similar to early efforts in the field, is still needed. Expansion of the number and type of samples is an important step in the confirmation and consolidation of early results. One of the field’s on-going frustrations is that, except for slave records, few data sets contain physical measurements for large numbers of females. To date, female slaves and ex-slaves, some late nineteenth-century U.S. college women, and transported female convicts are the primary sources of female historical stature. Generalizations of research findings to entire populations are hindered by the small amount of data on females and the knowledge, from the data that are extant, that stature trends for the two sexes do not mimic each other. Similarly, upper-class samples of either sex are not common. Future efforts should be directed at locating samples which contain data on these two understudied groups.

As Riley noted, the problem which anthropometric historians seek to resolve is not the identification of likely influences on stature. The biological sciences have provided that theoretical framework. The task at hand is to determine the relative weight of the various influences or, in Fogel’s terms, to perform “an accounting exercise of particularly complicated nature, which involves measuring not only the direct effect of particular factors but also their indirect effects and their interactions with other factors.”

More localized studies, with sample sizes adequate for statistical analysis, are needed. These will allow the determination of the social, economic, and demographic factors most closely associated with human height variation. Other key areas of future investigation include the functional consequences of differences in biological well-being proxied by height, including differences in labor productivity and life expectancy. Even with the strides that have been made, skepticism about the approach remains in some corners. To combat this, researchers must be careful to stress repeatedly what anthropometric indicators proxy, what their limits are, and how knowledge of anthropometric trends can appropriately influence our understanding of economic and social history as well as inform social policy. The field promises many future insights into the nature of and influences on historical human well-being and thus clues about how human well-being, the focus of economics generally, can be more fully and more widely advanced.

Selected Bibliography

Survey/Overview Publications

Engerman, Stanley. “The Standard of Living Debate in International Perspective: Measures and Indicators.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud, 17-46. Chicago: University of Chicago Press, 1997.

Floud, Roderick, and Bernard Harris. “Health, Height, and Welfare: Britain 1700-1980.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud, 91-126. Chicago: University of Chicago Press, 1997.

Floud, Roderick, Kenneth Wachter, and Annabelle Gregory. “The Heights of Europeans since 1750: A New Source for European Economic History.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 10-24. Chicago: University of Chicago Press, 1994.

Floud, Roderick, Kenneth Wachter, and Annabelle Gregory. Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980. Cambridge: Cambridge University Press, 1990.

Fogel, Robert W. “Nutrition and the Decline in Mortality since 1700: Some Preliminary Findings.” In Long-Term Factors in American Economic Growth, edited by Stanley Engerman and Robert Gallman, 439-527. Chicago: University of Chicago Press, 1987.

Haines, Michael R. “Growing Incomes, Shrinking People – Can Economic Development Be Hazardous to Your Health? Historical Evidence for the United States, England, and the Netherlands in the Nineteenth Century.” Social Science History 28 (2004): 249-70.

Haines, Michael R., Lee A. Craig, and Thomas Weiss. “The Short and the Dead: Nutrition, Mortality, and the ‘Antebellum Puzzle’ in the United States.” Journal of Economic History 63 (June 2003): 382-413.

Harris, Bernard. “Health, Height, History: An Overview of Recent Developments in Anthropometric History.” Social History of Medicine 7 (1994): 297-320.

Harris, Bernard. “The Height of Schoolchildren in Britain, 1900-1950.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 25-38. Chicago: University of Chicago Press, 1994.

Komlos, John, and Jörg Baten. The Biological Standard of Living in Comparative Perspectives: Proceedings of a Conference Held in Munich, January 18-23, 1997. Stuttgart: Franz Steiner Verlag, 1999.

Komlos, John, and Jörg Baten. “Looking Backward and Looking Forward: Anthropometric Research and the Development of Social Science History.” Social Science History 28 (2004): 191-210.

Komlos, John, and Timothy Cuff. Classics of Anthropometric History: A Selected Anthology. St. Katharinen, Germany: Scripta Mercaturae, 1998.

Komlos, John. “Anthropometric History: What Is It?” Magazine of History (Spring 1992): 3-5.

Komlos, John. Stature, Living Standards, and Economic Development: Essays in Anthropometric History. Chicago: University of Chicago Press, 1994.

Komlos, John. The Biological Standard of Living in Europe and America 1700-1900: Studies in Anthropometric History. Aldershot: Variorum Press, 1995.

Komlos, John. The Biological Standard of Living on Three Continents: Further Essays in Anthropometric History. Boulder: Westview Press, 1995.

Steckel, Richard H., and J.C. Rose. The Backbone of History: Health and Nutrition in the Western Hemisphere. New York: Cambridge University Press, 2002.

Steckel, Richard H., and Roderick Floud. Health and Welfare during Industrialization. Chicago: University of Chicago Press, 1997.

Steckel, Richard. “Height, Living Standards, and History.” Historical Methods 24 (1991): 183-87.

Steckel, Richard. “Stature and Living Standards in the United States.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John J. Wallis, 265-310. Chicago: University of Chicago Press, 1992.

Steckel, Richard. “Stature and the Standard of Living.” Journal of Economic Literature 33 (1995): 1903-40.

Steckel, Richard. “A History of the Standard of Living in the United States.” In EH.Net Encyclopedia, edited by Robert Whaples, http://www.eh.net/encyclopedia/contents/steckel.standard.living.us.php

Seminal Articles in Historical Anthropometrics

Aron, Jean-Paul, Paul Dumont, and Emmanuel Le Roy Ladurie. Anthropologie du Conscrit Francais. Paris: Mouton, 1972.

Eltis, David. “Nutritional Trends in Africa and the Americas: Heights of Africans, 1819-1839.” Journal of Interdisciplinary History 12 (1982): 453-75.

Engerman, Stanley. “The Height of U.S. Slaves.” Local Population Studies 16 (1976): 45-50.

Floud, Roderick and Kenneth Wachter. “Poverty and Physical Stature, Evidence on the Standard of Living of London Boys 1770-1870.” Social Science History 6 (1982): 422-52.

Fogel, Robert W. “Physical Growth as a Measure of the Economic Well-being of Populations: The Eighteenth and Nineteenth Centuries.” In Human Growth: A Comprehensive Treatise, second edition, volume 3, edited by F. Falkner and J.M. Tanner, 263-281. New York: Plenum, 1986.

Fogel, Robert W., Stanley Engerman, Roderick Floud, Gerald Friedman, Robert Margo, Kenneth Sokoloff, Richard Steckel, James Trussell, Georgia Villaflor and Kenneth Wachter. “Secular Changes in American and British Stature and Nutrition.” Journal of Interdisciplinary History 14 (1983): 445-81.

Fogel, Robert W., Stanley L. Engerman, and James Trussell. “Exploring the Uses of Data on Height: The Analysis of Long-Term Trends in Nutrition, Labor Welfare, and Labor Productivity.” Social Science History 6 (1982): 401-21.

Friedman, Gerald C. “The Heights of Slaves in Trinidad.” Social Science History 6 (1982): 482-515.

Higman, Barry W. “Growth in Afro-Caribbean Slave Populations.” American Journal of Physical Anthropology 50 (1979): 373-85.

Komlos, John. “The Height and Weight of West Point Cadets: Dietary Change in Antebellum America.” Journal of Economic History 47 (1987): 897-927.

Le Roy Ladurie, Emmanuel, N. Bernageau, and Y. Pasquet. “Le Conscrit et l’ordinateur: Perspectives de recherches sur les Archives Militaires du XIXième siècle français.” Studi Storici 10 (1969): 260-308.

Le Roy Ladurie, Emmanuel. “The Conscripts of 1868: A Study of the Correlation between Geographical Mobility, Delinquency and Physical Stature and Other Aspects of the Situation of the Young Frenchmen Called to Do Military Service That Year.” In The Territory of the Historian. Translated by Ben and Sian Reynolds. Chicago: University of Chicago Press, 1979.

Margo, Robert and Richard Steckel. “Heights of Native Born Whites during the Antebellum Period.” Journal of Economic History 43 (1983): 167-74.

Margo, Robert and Richard Steckel. “The Height of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-38.

Steckel, Richard. “Height and per Capita Income.” Historical Methods 16 (1983): 1-7.

Steckel, Richard. “Slave Height Profiles from Coastwise Manifests.” Explorations in Economic History 16 (1979): 363-80.

Articles Addressing Methodological Issues

Heintel, Markus, Lars Sandberg and Richard Steckel. “Swedish Historical Heights Revisited: New Estimation Techniques and Results.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 449-58. Stuttgart: Franz Steiner, 1998.

Komlos, John, and Joo Han Kim. “Estimating Trends in Historical Heights.” Historical Methods 23 (1990): 116-20.

Riley, James C. “Height, Nutrition, and Mortality Risk Reconsidered.” Journal of Interdisciplinary History 24 (1994): 465-92.

Steckel, Richard. “Percentiles of Modern Height: Standards for Use in Historical Research.” Historical Methods 29 (1996): 157-66.

Wachter, Kenneth, and James Trussell. “Estimating Historical Heights.” Journal of the American Statistical Association 77 (1982): 279-303.

Wachter, Kenneth. “Graphical Estimation of Military Heights.” Historical Methods 14 (1981): 31-42.

Publications Providing Bio-Medical Background for Historical Anthropometrics

Bielicki, T. “Physical Growth as a Measure of the Economic Well-being of Populations: The Twentieth Century.” In Human Growth: A Comprehensive Treatise, second edition, volume 3, edited by F. Falkner and J.M. Tanner, 283-305. New York: Plenum, 1986.

Bogin, Barry. Patterns of Human Growth. Cambridge: Cambridge University Press, 1988.

Eveleth, Phyllis B. “Population Differences in Growth: Environmental and Genetic Factors.” In Human Growth: A Comprehensive Treatise, second edition, volume 3, edited by F. Falkner and J.M. Tanner, 221-39. New York: Plenum, 1986.

Eveleth, Phyllis B. and James M. Tanner. Worldwide Variation in Human Growth. Cambridge: Cambridge University Press, 1976.

Tanner, James M. “Growth as a Target-Seeking Function: Catch-up and Catch-down Growth in Man.” In Human Growth: A Comprehensive Treatise, second edition, volume 1, edited by F. Falkner and J.M. Tanner, 167-80. New York: Plenum, 1986.

Tanner, James M. “The Potential of Auxological Data for Monitoring Economic and Social Well-Being.” Social Science History 6 (1982): 571-81.

Tanner, James M. A History of the Study of Human Growth. Cambridge: Cambridge University Press, 1981.

World Health Organization. “Use and Interpretation of Anthropometric Indicators of Nutritional Status.” Bulletin of the World Health Organization 64 (1986): 929-41.

Predecessors to Historical Anthropometrics

Bowles, G. T. New Types of Old Americans at Harvard and at Eastern Women’s Colleges. Cambridge, MA: Harvard University Press, 1952.

Damon, Albert. “Secular Trend in Height and Weight within Old American Families at Harvard, 1870-1965.” American Journal of Physical Anthropology 29 (1968): 45-50.

Damon, Albert. “Stature Increase among Italian-Americans: Environmental, Genetic, or Both?” American Journal of Physical Anthropology 23 (1965): 401-08.

Gould, Benjamin A. Investigations in the Military and Anthropological Statistics of American Soldiers. New York: Hurd and Houghton [for the U.S. Sanitary Commission], 1869.

Karpinos, Bernard D. “Height and Weight of Selective Service Registrants Processed for Military Service during World War II.” Human Biology 40 (1958): 292-321.

Publications Focused on Nonstature-Based Anthropometric Measures

Brudevoll, J.E., K. Liestol, and L. Walloe. “Menarcheal Age in Oslo during the Last 140 Years.” Annals of Human Biology 6 (1979): 407-16.

Cuff, Timothy. “The Body Mass Index Values of Nineteenth Century West Point Cadets: A Theoretical Application of Waaler’s Curves to a Historical Population.” Historical Methods 26 (1993): 171-83.

Komlos, John. “The Age at Menarche in Vienna.” Historical Methods 22 (1989): 158-63.

Tanner, James M. “Trend towards Earlier Menarche in London, Oslo, Copenhagen, the Netherlands, and Hungary.” Nature 243 (1973): 95-96.

Trussell, James, and Richard Steckel. “The Age of Slaves at Menarche and Their First Birth.” Journal of Interdisciplinary History 8 (1978): 477-505.

Waaler, Hans Th. “Height, Weight, and Mortality: The Norwegian Experience.” Acta Medica Scandinavica, supplement 679, 1984.

Ward, W. Peter, and Patricia C. Ward. “Infant Birth Weight and Nutrition in Industrializing Montreal.” American Historical Review 89 (1984): 324-45.

Ward, W. Peter. Birth Weight and Economic Growth: Women’s Living Standards in the Industrializing West. Chicago: University of Chicago Press, 1993.

Articles with a Non-western Geographic Focus

Cameron, Noel. “Physical Growth in a Transitional Economy: The Aftermath of South African Apartheid.” Economic and Human Biology 1 (2003): 29-42.

Eltis, David. “Welfare Trends among the Yoruba in the Early Nineteenth Century: The Anthropometric Evidence.” Journal of Economic History 50 (1990): 521-40.

Greulich, W.W. “Some Secular Changes in the Growth of American-born and Native Japanese Children.” American Journal of Physical Anthropology 45 (1976): 553-68.

Morgan, Stephen. “Biological Indicators of Change in the Standard of Living in China during the Twentieth Century.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 7-34. Stuttgart: Franz Steiner, 1998.

Nicholas, Stephen, Robert Gregory, and Sue Kimberley. “The Welfare of Indigenous and White Australians, 1890-1955.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 35-54. Stuttgart: Franz Steiner, 1998.

Salvatore, Ricardo D. “Stature, Nutrition, and Regional Convergence: The Argentine Northwest in the First Half of the Twentieth Century.” Social Science History 28 (2004): 297-324.

Shay, Ted. “The Level of Living in Japan, 1885-1938: New Evidence.” In The Biological Standard of Living on Three Continents: Further Explorations in Anthropometric History, edited by John Komlos, 173-201. Boulder: Westview Press, 1995.

Articles with a North American Focus

Craig, Lee, and Thomas Weiss. “Nutritional Status and Agricultural Surpluses in the Antebellum United States.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 190-207. Stuttgart: Franz Steiner, 1998.

Komlos, John, and Peter Coclanis, “On the ‘Puzzling’ Antebellum Cycle of the Biological Standard of Living: The Case of Georgia,” Explorations in Economic History 34 (1997): 433-59.

Komlos, John. “Shrinking in a Growing Economy? The Mystery of Physical Stature during the Industrial Revolution,” Journal of Economic History 58 (1998): 779-802.

Komlos, John. “Toward an Anthropometric History of African-Americans: The Case of the Free Blacks in Antebellum Maryland.” In Strategic Factors in Nineteenth Century American Economic History: A Volume to Honor Robert W. Fogel, edited by Claudia Goldin and Hugh Rockoff, 267-329. Chicago: University of Chicago Press, 1992.

Murray, John. “Standards of the Present for People of the Past: Height, Weight, and Mortality among Men of Amherst College, 1834-1949.” Journal of Economic History 57 (1997): 585-606.

Murray, John. “Stature among Members of a Nineteenth Century American Shaker Commune.” Annals of Human Biology 20 (1993): 121-29.

Steckel, Richard. “A Peculiar Population: The Nutrition, Health, and Mortality of American Slaves from Childhood to Maturity.” Journal of Economic History 46 (1986): 721-41.

Steckel, Richard. “Health and Nutrition in the American Midwest: Evidence from the Height of Ohio National Guardsmen, 1850-1910.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 153-70. Chicago: University of Chicago Press, 1994.

Steckel, Richard. “The Health and Mortality of Women and Children.” Journal of Economic History 48 (1988): 333-45.

Steegmann, A. Theodore Jr. “18th Century British Military Stature: Growth Cessation, Selective Recruiting, Secular Trends, Nutrition at Birth, Cold and Occupation.” Human Biology 57 (1985): 77-95.

Articles with a European Focus

Baten, Jörg. “Economic Development and the Distribution of Nutritional Resources in Bavaria, 1797-1839.” Journal of Income Distribution 9 (2000): 89-106.

Baten, Jörg. “Climate, Grain production, and Nutritional Status in Southern Germany during the XVIIIth Century.” Journal of European Economic History 30 (2001): 9-47.

Baten, Jörg, and John Murray. “Heights of Men and Women in Nineteenth-Century Bavaria: Economic, Nutritional, and Disease Influences.” Explorations in Economic History 37 (2000): 351-69.

Komlos, John. “Stature and Nutrition in the Habsburg Monarchy: The Standard of Living and Economic Development in the Eighteenth Century.” American Historical Review 90 (1985): 1149-61.

Komlos, John. “The Nutritional Status of French Students.” Journal of Interdisciplinary History 24 (1994): 493-508.

Komlos, John. “The Secular Trend in the Biological Standard of Living in the United Kingdom, 1730-1860.” Economic History Review 46 (1993): 115-44.

Nicholas, Stephen and Deborah Oxley. “The Living Standards of Women during the Industrial Revolution, 1795-1820.” Economic History Review 46 (1993): 723-49.

Nicholas, Stephen and Richard Steckel. “Heights and Living Standards of English Workers during the Early Years of Industrialization, 1770-1815.” Journal of Economic History 51 (1991): 937-57.

Oxley, Deborah. “Living Standards of Women in Prefamine Ireland.” Social Science History 28 (2004): 271-95.

Riggs, Paul. “The Standard of Living in Scotland, 1800-1850.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 60-75. Chicago: University of Chicago Press, 1994.

Sandberg, Lars G. “Soldier, Soldier, What Made You Grow So Tall? A Study of Height, Health and Nutrition in Sweden, 1720-1881.” Economy and History 23 (1980): 91-105.

Steckel, Richard H. “New Light on the ‘Dark Ages’: The Remarkably Tall Stature of Northern European Men during the Medieval Era.” Social Science History 28 (2004): 211-30.

Citation: Cuff, Timothy. “Historical Anthropometrics”. EH.Net Encyclopedia, edited by Robert Whaples. August 29, 2004. URL http://eh.net/encyclopedia/historical-anthropometrics/

African Americans in the Twentieth Century

Thomas N. Maloney, University of Utah

The nineteenth century was a time of radical transformation in the political and legal status of African Americans. Blacks were freed from slavery and began to enjoy greater rights as citizens (though full recognition of their rights remained a long way off). Despite these dramatic developments, many economic and demographic characteristics of African Americans at the end of the nineteenth century were not that different from what they had been in the mid-1800s. Tables 1 and 2 present characteristics of black and white Americans in 1900, as recorded in the Census for that year. (The 1900 Census did not record information on years of schooling or on income, so these important variables are left out of these tables, though they will be examined below.) According to the Census, ninety percent of African Americans still lived in the Southern US in 1900 — roughly the same percentage as lived in the South in 1870. Three-quarters of black households were located in rural places. Only about one-fifth of African American household heads owned their own homes (less than half the percentage among whites). About half of black men and about thirty-five percent of black women who reported an occupation to the Census said that they worked as a farmer or a farm laborer, as opposed to about one-third of white men and about eight percent of white women. Outside of farm work, African American men and women were greatly concentrated in unskilled labor and service jobs. Most black children had not attended school in the year before the Census, and white children were much more likely to have attended. So the members of a typical African American family at the start of the twentieth century lived and worked on a farm in the South and did not own their home. Children in these families were unlikely to be in school even at very young ages.

By 1990 (the most recent Census for which such statistics are available at the time of this writing), the economic conditions of African Americans had changed dramatically (see Tables 1 and 2). They had become much less concentrated in the South, in rural places, and in farming jobs and had entered better blue-collar jobs and the white-collar sector. They were nearly twice as likely to own their own homes at the end of the century as in 1900, and their rates of school attendance at all ages had risen sharply. Even after this century of change, though, African Americans were still relatively disadvantaged in terms of education, labor market success, and home ownership.

Table 1: Characteristics of Households in 1900 and 1990

1900 1990
Black White Black White
A. Region of Residence
South 90.1% 23.5% 53.0% 32.9%
Northeast 3.6% 31.8% 18.9% 20.9%
Midwest 5.8% 38.5% 18.9% 25.3%
West 0.5% 6.2% 9.2% 21.0%
B. Share Rural
75.8% 56.1% 11.9% 25.7%
C. Share of Homes Owner-Occupied
22.1% 49.2% 43.4% 67.3%

Based on household heads in Integrated Public Use Microdata Series Census samples for 1900 and 1990.

Table 2: Characteristics of Individuals in 1900 and 1990

1900 1990
Male Female Male Female
Black White Black White Black White Black White
A. Occupational Distribution
Professional/Technical 1.3% 3.8% 1.6% 10.7% 9.9% 17.2% 16.6% 21.9%
Proprietor/Manager/Official 0.8 6.9 0.2 2.6 6.5 14.7 5.4 10.0
Clerical 0.2 4.0 0.2 5.6 10.7 7.2 29.7 31.9
Sales 0.3 4.2 0.2 4.1 2.9 6.7 4.1 7.3
Craft 4.2 15.9 0 3.1 17.4 20.7 2.3 2.1
Operative 7.3 13.4 1.8 24.5 20.7 14.9 12.4 8.0
Laborer 25.5 14.0 6.5 1.5 12.2 7.2 2.0 1.5
Private Service 2.2 0.4 33.0 33.2 0.1 0 2.0 0.8
Other Service 4.8 2.4 20.6 6.6 18.5 9.0 25.3 15.8
Farmer 30.8 23.9 6.7 6.1 0.2 1.4 0.1 0.4
Farm Laborer 22.7 11.0 29.4 2.0 1.0 1.0 0.4 0.5
B. Percent Attending School by Age
Ages 6 to 13 37.8% 72.2% 41.9% 71.9% 94.5% 95.3% 94.2% 95.5%
Ages 14 to 17 26.7 47.9 36.2 51.5 91.1 93.4 92.6 93.5
Ages 18 to 21 6.8 10.4 5.9 8.6 47.7 54.3 52.9 57.1

Based on Integrated Public Use Microdata Series Census samples for 1900 and 1990. Occupational distributions based on individuals aged 18 to 64 with recorded occupation. School attendance in 1900 refers to attendance at any time in the previous year. School attendance in 1990 refers to attendance since February 1 of that year.

These changes in the lives of African Americans did not occur continuously and steadily throughout the twentieth century. Rather, we can divide the century into three distinct eras: (1) the years from 1900 to 1915, prior to large-scale movement out of the South; (2) the years from 1916 to 1964, marked by migration and urbanization, but prior to the most important government efforts to reduce racial inequality; and (3) the years since 1965, characterized by government antidiscrimination efforts but also by economic shifts which have had a great impact on racial inequality and African American economic status.

1900-1915: Continuation of Nineteenth-Century Patterns

As was the case in the 1800s, African American economic life in the early 1900s centered on Southern cotton agriculture. African Americans grew cotton under a variety of contracts and institutional arrangements. Some were laborers hired for a short period for specific tasks. Many were tenant farmers, renting a piece of land and some of their tools and supplies, and paying the rent at the end of the growing season with a portion of their harvest. Records from Southern farms indicate that white and black farm laborers were paid similar wages, and that white and black tenant farmers worked under similar contracts for similar rental rates. Whites in general, however, were much more likely to own land. A similar pattern is found in Southern manufacturing in these years. Among the fairly small number of individuals employed in manufacturing in the South, white and black workers were often paid comparable wages if they worked at the same job for the same company. However, blacks were much less likely to hold better-paying skilled jobs, and they were more likely to work for lower-paying companies.

While the concentration of African Americans in cotton agriculture persisted, Southern black life changed in other ways in the early 1900s. Limitations on the legal rights of African Americans grew more severe in the South in this era. The 1896 Supreme Court decision in the case of Plessy v. Ferguson provided a legal basis for greater explicit segregation in American society. This decision allowed for the provision of separate facilities and services to blacks and whites as long as the facilities and services were equal. Through the early 1900s, many new laws, known as Jim Crow laws, were passed in Southern states creating legally segregated schools, transportation systems, and lodging. The requirement of equality was not generally enforced, however. Perhaps the most important and best-known example of separate and unequal facilities in the South was the system of public education. Through the first decades of the twentieth century, resources were funneled to white schools, raising teacher salaries and per-pupil funding while reducing class size. Black schools experienced no real improvements of this type. The result was a sharp decline in the relative quality of schooling available to African-American children.

1916-1964: Migration and Urbanization

The mid-1910s witnessed the first large-scale movement of African Americans out of the South. The share of African Americans living in the South fell by about four percentage points between 1910 and 1920 (with nearly all of this movement after 1915) and another six points between 1920 and 1930 (see Table 3). What caused this tremendous relocation of African Americans? The worsening political and social conditions in the South, noted above, certainly played a role. But the specific timing of the migration appears to be connected to economic factors. Northern employers in many industries faced strong demand for their products and so had a great need for labor. Their traditional source of cheap labor, European immigrants, dried up in the late 1910s as the coming of World War I interrupted international migration. After the end of the war, new laws limiting immigration to the US would keep the flow of European labor at a low level. Northern employers thus needed a new source of cheap labor, and they turned to Southern blacks. In some cases, employers would send recruiters to the South to find workers and to pay their way North. In addition to this pull from the North, economic events in the South served to push out many African Americans. Destruction of the cotton crop by the boll weevil, an insect that feeds on cotton plants, and poor weather in some places during these years made new opportunities in the North even more attractive.

Table 3: Share of African Americans Residing in the South

Year Share Living in South
1890 90%
1900 90%
1910 89%
1920 85%
1930 79%
1940 77%
1950 68%
1960 60%
1970 53%
1980 53%
1990 53%

Sources: 1890 to 1960: Historical Statistics of the United States, volume 1, pp. 22-23; 1970: Statistical Abstract of the United States, 1973, p. 27; 1980: Statistical Abstract of the United States, 1985, p. 31; 1990: Statistical Abstract of the United States, 1996, p. 31.

Pay was certainly better, and opportunities were wider, in the North. Nonetheless, the region was not entirely welcoming to these migrants. As the black population in the North grew in the 1910s and 1920s, residential segregation grew more pronounced, as did school segregation. In some cases, racial tensions boiled over into deadly violence. The late 1910s were scarred by severe race riots in a number of cities, including East St. Louis (1917) and Chicago (1919).

Access to Jobs in the North

Within the context of this broader turmoil, black migrants did gain entry to new jobs in Northern manufacturing. As in Southern manufacturing, pay differences between blacks and whites working the same job at the same plant were generally small. However, black workers had access to a limited set of jobs and remained heavily concentrated in unskilled laborer positions. Black workers gained admittance to only a limited set of firms, as well. For instance, in the auto industry, the Ford Motor Company hired a tremendous number of black workers, while other auto makers in Detroit typically excluded these workers. Because their alternatives were limited, black workers could be worked very intensely and could also be used in particularly unpleasant and dangerous settings, such as the killing and cutting areas of meat packing plants, foundry departments in auto plants, and blast furnaces in steel plants.

Unions

Through the 1910s and 1920s, relations between black workers and Northern labor unions were often antagonistic. Many unions in the North had explicit rules barring membership by black workers. When faced with a strike (or the threat of a strike), employers often hired in black workers, knowing that these workers were unlikely to become members of the union or to be sympathetic to its goals. Indeed, there is evidence that black workers were used as strike breakers in a great number of labor disputes in the North in the 1910s and 1920s. Beginning in the mid-1930s, African Americans gained greater inclusion in the union movement. By that point, it was clear that black workers were entrenched in manufacturing, and that any broad-based organizing effort would have to include them.

Conditions around 1940

As is apparent in Table 3, black migration slowed in the 1930s, due to the onset of the Great Depression and the resulting high level of unemployment in the North in the 1930s. Beginning in about 1940, preparations for war again created tight labor markets in Northern cities, though, and, as in the late 1910s, African Americans journeyed north to take advantage of new opportunities. In some ways, moving to the North in the 1940s may have appeared less risky than it had during the World War I era. By 1940, there were large black communities in a number of Northern cities. Newspapers produced by these communities circulated in the South, providing information about housing, jobs, and social conditions. Many Southern African Americans now had friends and relatives in the North to help with the transition.

In other ways, though, labor market conditions were less auspicious for black workers in 1940 than they had been during the World War I years. Unemployment remained high in 1940, with about fourteen percent of white workers either unemployed or participating in government work relief programs. Employers hired these unemployed whites before turning to African American labor. Even as labor markets tightened, black workers gained little access to war-related employment. The President issued orders in 1941 that companies doing war-related work had to hire in a non-discriminatory way, and the Fair Employment Practice Committee was created to monitor the hiring practices of these companies. Initially, few resources were devoted to this effort, but in 1943 the government began to enforce fair employment policies more aggressively. These efforts appear to have aided black employment, at least for the duration of the war.

Gains during the 1940s and 1950s

In 1940, the Census Bureau began to collect data on individual incomes, so we can track changes in black income levels and in black/white income ratios in more detail from this date forward. Table 4 provides annual earnings figures for black and white men and women from 1939 (recorded in the 1940 Census) to 1989 (recorded in the 1990 Census). The big gains of the 1940s, both in level of earnings and in the black/white income ratio, are very obvious. Often, we focus on the role of education in producing higher earnings, but the gap between average schooling levels for blacks and whites did not change much in the 1940s (particularly for men), so schooling levels could not have contributed too much to the relative income gains for blacks in the 1940s (see Table 5). Rather, much of the improvement in the black/white pay ratio in this decade simply reflects ongoing migration: blacks were leaving the South, a low-wage region, and entering the North, a high-wage region. Some of the improvement reflects access to new jobs and industries for black workers, due to the tight labor markets and antidiscrimination efforts of the war years.

Table 4: Mean Annual Earnings of Wage and Salary Workers, Aged 20 and Over

Male Female
Year Black White Ratio Black White Ratio
1939 $537.45 $1234.41 .44 $331.32 $771.69 .43
1949 1761.06 2984.96 .59 992.35 1781.96 .56
1959 2848.67 5157.65 .55 1412.16 2371.80 .59
1969 5341.64 8442.37 .63 3205.12 3786.45 .85
1979 11404.46 16703.67 .68 7810.66 7893.76 .99
1989 19417.03 28894.69 .67 15319.29 16135.65 .95

Source: Integrated Public Use Microdata Series Census samples for 1940, 1950, 1960, 1970, 1980, and 1990. Includes only those with non-zero earnings who were not in school. All figures are in current (nominal) dollars.

Table 5: Years of School Attended for Individuals 20 and Over

Male Female
Year Black White Difference Black White Difference
1940 5.9 9.1 3.2 6.9 10.5 3.6
1950 6.8 9.8 3.0 7.8 10.8 3.0
1960 7.9 10.5 2.6 8.8 11.0 2.2
1970 9.4 11.4 2.0 10.3 11.7 1.4
1980 11.2 12.5 1.3 11.8 12.4 0.6

Source: Integrated Public Use Microdata Series Census samples for 1940, 1950, 1960, 1970, and 1980. Based on highest grade attended by wage and salary workers aged 20 and over who had non-zero earnings in the previous year and who were not in school at the time of the census. Comparable figures are not available in the 1990 Census.

Black workers’ relative incomes were also increased by some general changes in labor demand and supply and in labor market policy in the 1940s. During the war, demand for labor was particularly strong in the blue-collar manufacturing sector. Workers were needed to build tanks, jeeps, and planes, and these jobs did not require a great deal of formal education or skill. In addition, the minimum wage was raised in 1945, and wartime regulations allowed greater pay increases for low-paid workers than for highly-paid workers. After the war, the supply of college-educated workers increased dramatically. The GI Bill, passed in 1944, provided large subsidies to help pay the expenses of World War II veterans who wanted to attend college. This policy helped a generation of men further their education and get a college degree. So strong labor demand, government policies that raised wages at the bottom, and a rising supply of well-educated workers meant that less-educated, less-skilled workers received particularly large wage increases in the 1940s. Because African Americans were concentrated among the less-educated, low-earning workers, these general economic forces were especially helpful to African Americans and served to raise their pay relative to that of whites.

The effect of these broader forces on racial inequality helps to explain the contrast between the 1940s and 1950s evident in Table 4. The black-white pay ratio may have actually fallen a bit for men in the 1950s, and it rose much more slowly in the 1950s than in the 1940s for women. Some of this slowdown in progress reflects weaker labor markets in general, which reduced black access to new jobs. In addition, the general narrowing of the wage distribution that occurred in the 1940s stopped in the 1950s. Less-educated, lower-paid workers were no longer getting particularly large pay increases. As a result, blacks did not gain ground on white workers. It is striking that pay gains for black workers slowed in the 1950s despite a more rapid decline in the black-white schooling gap during these years (Table 5).

Unemployment

On the whole, migration and entry to new industries played a large role in promoting black relative pay increases through the years from World War I to the late 1950s. However, these changes also had some negative effects on black labor market outcomes. As black workers left Southern agriculture, their relative rate of unemployment rose. For the nation as a whole, black and white unemployment rates were about equal as late as 1930. This equality was to a great extent the result of lower rates of unemployment for everyone in the rural South relative to the urban North. Farm owners and sharecroppers tended not to lose their work entirely during weak markets, whereas manufacturing employees might be laid off or fired during downturns. Still, while unemployment was greater for everyone in the urban North, it was disproportionately greater for black workers. Their unemployment rates in Northern cities were much higher than white unemployment rates in the same cities. One result of black migration, then, was a dramatic increase in the ratio of black unemployment to white unemployment. The black/white unemployment ratio rose from about 1 in 1930 (indicating equal unemployment rates for blacks and whites) to about 2 by 1960. The ratio remained at this high level through the end of the twentieth century.

1965-1999: Civil Rights and New Challenges

In the 1960s, black workers again began to experience more rapid increases in relative pay levels (see Table 4). These years also marked a new era in government involvement in the labor market, particularly with regard to racial inequality and discrimination. One of the most far-reaching changes in government policy regarding race actually occurred a bit earlier, in the 1954 Supreme Court decision in the case of Brown v. the Board of Education of Topeka, Kansas. In that case, the Supreme Court ruled that racial segregation of schools was unconstitutional. However, substantial desegregation of Southern schools (and some Northern schools) would not take place until the late 1960s and early 1970s.

School desegregation, therefore, was probably not a primary force in generating the relative pay gains of the 1960s and 1970s. Other anti-discrimination policies enacted in the mid-1960s did play a large role, however. The Civil Rights Act of 1964 outlawed discrimination in a broad set of social arenas. Title VII of this law banned discrimination in hiring, firing, pay, promotion, and working conditions and created the Equal Employment Opportunity Commission to investigate complaints of workplace discrimination. A second policy, Executive Order 11246 (issued by President Johnson in 1965), set up more stringent anti-discrimination rules for businesses working on government contracts. There has been much debate regarding the importance of these policies in promoting better jobs and wages for African Americans. There is now increasing agreement that these policies had positive effects on labor market outcomes for black workers at least through the mid-1970s. Several pieces of evidence point to this conclusion. First, the timing is right. Many indicators of employment and wage gains show marked improvement beginning in 1965, soon after the implementation of these policies. Second, job and wage gains for black workers in the 1960s were, for the first time, concentrated in the South. Enforcement of anti-discrimination policy was targeted on the South in this era. It is also worth noting that rates of black migration out of the South dropped substantially after 1965, perhaps reflecting a sense of greater opportunity there due to these policies. Finally, these gains for black workers occurred simultaneously in many industries and many places, under a variety of labor market conditions. Whatever generated these improvements had to come into effect broadly at one point in time. Federal antidiscrimination policy fits this description.

Return to Stagnation in Relative Income

The years from 1979 to 1989 saw the return of stagnation in black relative incomes. Part of this stagnation may reflect the reversal of the shifts in wage distribution that occurred during the 1940s. In the late 1970s and especially in the 1980s, the US wage distribution grew more unequal. Individuals with less education, particularly those with no college education, saw their pay decline relative to the better-educated. Workers in blue-collar manufacturing jobs were particularly hard hit. The concentration of black workers, especially black men, in these categories meant that their pay suffered relative to that of whites. Another possible factor in the stagnation of black relative pay in the 1980s was weakened enforcement of antidiscrimination policies at this time.

While black relative incomes stagnated on average, black residents of urban centers suffered particular hardships in the 1970s and 1980s. The loss of blue-collar manufacturing jobs was most severe in these areas. For a variety of reasons, including the introduction of new technologies that required larger plants, many firms relocated their production facilities outside of central cities, to suburbs and even more peripheral areas. Central cities increasingly became information-processing and financial centers. Jobs in these industries generally required a college degree or even more education. Despite decades of rising educational levels, African Americans were still barely half as likely as whites to have completed four years of college or more: in 1990, 11.3% of blacks over the age of 25 had four years of college or more, versus 22% of whites. As a result of these developments, many blacks in urban centers found themselves surrounded by jobs for which they were poorly qualified, and at some distance from the types of jobs for which they were qualified, the jobs their parents had moved to the city for in the first place. Their ability to relocate near these blue-collar jobs seems to have been limited both by ongoing discrimination in the housing market and by a lack of resources. Those African Americans with the resources to exit the central city often did so, leaving behind communities marked by extremely high rates of poverty and unemployment.

Over the fifty years from 1939 to 1989, through these episodes of gain and stagnation, the ratio of black men’s average annual earnings to white men’s average annual earnings rose about 23 points, from .44 to .67. The timing of improvement in the black female/white female income ratio was similar. However, black women gained much more ground overall: the black-white income ratio for women rose about 50 points over these fifty years and stood at .95 in 1989 (down from .99 in 1979). The education gap between black women and white women declined more than the education gap between black and white men, which contributed to the faster pace of improvement in black women’s relative earnings. Furthermore, black female workers were more likely to be employed full-time than were white female workers, which raised their annual income. The reverse was true among men: white male workers were somewhat more likely to be employed full time than were black male workers.
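
The arithmetic behind these ratios is simple division on the figures in Table 4. A minimal sketch in Python, using the nominal earnings reported above (the dictionary layout and variable names are mine, for illustration only):

    # Nominal mean annual earnings from Table 4, 1939 and 1989.
    earnings = {
        1939: {"black_men": 537.45, "white_men": 1234.41,
               "black_women": 331.32, "white_women": 771.69},
        1989: {"black_men": 19417.03, "white_men": 28894.69,
               "black_women": 15319.29, "white_women": 16135.65},
    }

    for year, e in sorted(earnings.items()):
        men_ratio = e["black_men"] / e["white_men"]
        women_ratio = e["black_women"] / e["white_women"]
        print(year, round(men_ratio, 2), round(women_ratio, 2))

    # Output: 1939 -> .44 (men), .43 (women); 1989 -> .67 (men), .95 (women).
    # The fifty-year gains are thus about 23 points for men and roughly 50 for women.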

Comparable data on annual incomes from the 2000 Census are not available at the time of this writing. Evidence from other labor market surveys suggests that the tight labor markets of the late 1990s may have brought renewed relative pay gains for black workers. Black workers also experienced sharp declines in unemployment during these years, though black unemployment remained about twice as great as white unemployment.

Beyond the Labor Market: Persistent Gaps in Wealth and Health

When we look beyond these basic measures of labor market success, we find larger and more persistent gaps between African Americans and white Americans. Wealth differences between blacks and whites continue to be very large. In the mid-1990s, black households held only about one-quarter the amount of wealth that white households held, on average. If we leave out equity in one’s home and personal possessions and focus on more strictly financial, income-producing assets, black households held only about ten to fifteen percent as much wealth as white households. Big differences in wealth holding remain even if we compare black and white households with similar incomes.

Much of this wealth gap reflects the ongoing effects of the historical patterns described above. When freed from slavery, African Americans held no wealth, and their lower incomes prevented them from accumulating wealth at the rate whites did. African Americans found it particularly difficult to buy homes, traditionally a household’s most important asset, due to discrimination in real estate markets. Government housing policies in the 1930s and 1940s may have also reduced their rate of home-buying. While the federal government made low-interest loans and loan insurance available through the Home Owners’ Loan Corporation and the Federal Housing Administration, these programs generally could not be used to acquire homes in black or mixed neighborhoods, usually the only neighborhoods in which blacks could buy, because these were considered to be areas of high risk for loan default. Because wealth is passed on from parents to children, the wealth differences of the mid-twentieth century continue to have an important impact today.

Differences in life expectancy have also proven to be remarkably stubborn. Certainly, black and white mortality patterns are more similar today than they once were. In 1929, the first year for which national figures are available, white life expectancy at birth was 58.6 years and black life expectancy was 46.7 years (for men and women combined). By 2000, white life expectancy had risen to 77.4 years and black life expectancy was 71.8 years. Thus, the black-white gap had fallen from about twelve years to less than six. However, almost all of this reduction in the gap was completed by the early 1960s. In 1961, the black-white gap was 6.5 years. The past forty years have seen very little change in the gap, though life expectancy has risen for both groups.

Some of this remaining difference in life expectancy can be traced to income differences between blacks and whites. Black children face a particularly high risk of accidental death in the home, often due to dangerous conditions in low-quality housing. African Americans of all ages face a high risk of homicide, which is related in part to residence in poor neighborhoods. Among older people, African Americans face high risk of death due to heart disease, and the incidence of heart disease is correlated with income. Still, black-white mortality differences, especially those related to disease, are complex and are not yet fully understood.

Infant mortality is a particularly large and particularly troubling form of health difference between blacks and whites. In 2000 the white infant mortality rate (5.7 per 1000 live births) was less than half the rate for African Americans (14.0 per 1000). Again, some of this mortality difference is related to the effect of lower incomes on the nutrition, medical care, and living conditions available to African American mothers and newborns. However, the full set of relevant factors is the subject of ongoing research.

Summary and Conclusions

It is undeniable that the economic fortunes of African Americans changed dramatically during the twentieth century. African Americans moved from tremendous concentration in Southern agriculture to much greater diversity in residence and occupation. Over the period in which income can be measured, there are large increases in black incomes in both relative and absolute terms. Schooling differentials between blacks and whites fell sharply, as well. When one looks beyond the starting and ending points, though, more complex realities present themselves. The progress that we observe grew out of periods of tremendous social upheaval, particularly during the world wars. It was shaped in part by conflict between black workers and white workers, and it coincided with growing residential segregation. It was not continuous and gradual. Rather, it was punctuated by periods of rapid gain and periods of stagnation. The rapid gains are attributable to actions on the part of black workers (especially migration), broad economic forces (especially tight labor markets and narrowing of the general wage distribution), and specific antidiscrimination policy initiatives (such as the Fair Employment Practice Committee in the 1940s and Title VII and contract compliance policy in the 1960s). Finally, we should note that this century of progress ended with considerable gaps remaining between African Americans and white Americans in terms of income, unemployment, wealth, and life expectancy.

Sources

Butler, Richard J., James J. Heckman, and Brook Payner. “The Impact of the Economy and the State on the Economic Status of Blacks: A Study of South Carolina.” In Markets in History: Economic Studies of the Past, edited by David W. Galenson, 52-96. New York: Cambridge University Press, 1989.

Collins, William J. “Race, Roosevelt, and Wartime Production: Fair Employment in World War II Labor Markets.” American Economic Review 91, no. 1 (2001): 272-86.

Conley, Dalton. Being Black, Living in the Red: Race, Wealth, and Social Policy in America. Berkeley, CA: University of California Press, 1999.

Donohue, John H. III, and James Heckman. “Continuous vs. Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks.” Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Goldin, Claudia, and Robert A. Margo. “The Great Compression: The Wage Structure in the United States at Mid-Century.” Quarterly Journal of Economics 107, no. 1 (1992): 1-34.

Halcoussis, Dennis and Gary Anderson. “The Political Economy of Legal Segregation: Jim Crow and Racial Employment Patterns.” Economics and Politics 8, no. 1 (1996): 1-15.

Herbst, Alma. The Negro in the Slaughtering and Meat Packing Industry in Chicago. New York: Houghton Mifflin, 1932.

Higgs, Robert. Competition and Coercion: Blacks in the American Economy 1865-1914. New York: Cambridge University Press, 1977.

Jaynes, Gerald David and Robin M. Williams, Jr., editors. A Common Destiny: Blacks and American Society. Washington, DC: National Academy Press, 1989.

Johnson, Daniel M. and Rex R. Campbell. Black Migration in America: A Social Demographic History. Durham, NC: Duke University Press, 1981.

Juhn, Chinhui, Kevin M. Murphy, and Brooks Pierce. “Accounting for the Slowdown in Black-White Wage Convergence.” In Workers and Their Wages: Changing Patterns in the United States, edited by Marvin H. Kosters, 107-43. Washington, DC: AEI Press, 1991.

Kaminski, Robert, and Andrea Adams. Educational Attainment in the US: March 1991 and 1990 (Current Population Reports P20-462). Washington, DC: US Census Bureau, May 1992.

Kasarda, John D. “Urban Industrial Transition and the Underclass.” In The Ghetto Underclass: Social Science Perspectives, edited by William J. Wilson, 43-64. Newbury Park, CA: Russell Sage, 1993.

Kennedy, Louise V. The Negro Peasant Turns Cityward: The Effects of Recent Migrations to Northern Centers. New York: Columbia University Press, 1930.

Leonard, Jonathan S. “The Impact of Affirmative Action Regulation and Equal Employment Law on Black Employment.” Journal of Economic Perspectives 4, no. 4 (1990): 47-64.

Maloney, Thomas N. “Wage Compression and Wage Inequality between Black and White Males in the United States, 1940-1960.” Journal of Economic History 54, no. 2 (1994): 358-81.

Maloney, Thomas N. “Racial Segregation, Working Conditions, and Workers’ Health: Evidence from the A.M. Byers Company, 1916-1930.” Explorations in Economic History 35, no. 3 (1998): 272-295.

Maloney, Thomas N., and Warren C. Whatley. “Making the Effort: The Contours of Racial Discrimination in Detroit’s Labor Markets, 1920-1940.” Journal of Economic History 55, no. 3 (1995): 465-93.

Margo, Robert A. Race and Schooling in the South, 1880-1950. Chicago: University of Chicago Press, 1990.

Margo, Robert A. “Explaining Black-White Wage Convergence, 1940-1950.” Industrial and Labor Relations Review 48, no. 3 (1995): 470-81.

Marshall, Ray F. The Negro and Organized Labor. New York: John Wiley and Sons, 1965.

Minino, Arialdi M., and Betty L. Smith. “Deaths: Preliminary Data for 2000.” National Vital Statistics Reports 49, no. 12 (2001).

Oliver, Melvin L., and Thomas M. Shapiro. “Race and Wealth.” Review of Black Political Economy 17, no. 4 (1989): 5-25.

Ruggles, Steven, and Matthew Sobek. Integrated Public Use Microdata Series: Version 2.0. Minneapolis: Social Historical Research Laboratory, University of Minnesota, 1997.

Sugrue, Thomas J. The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit. Princeton, NJ: Princeton University Press, 1996.

Sundstrom, William A. “Last Hired, First Fired? Unemployment and Urban Black Workers During the Great Depression.” Journal of Economic History 52, no. 2 (1992): 416-29.

United States Bureau of the Census. Statistical Abstract of the United States 1973 (94th Edition). Washington, DC: Department of Commerce, Bureau of the Census, 1973.

United States Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970. Washington, DC: Department of Commerce, Bureau of the Census, 1975.

United States Bureau of the Census. Statistical Abstract of the United States 1985 (105th Edition). Washington, DC: Department of Commerce, Bureau of the Census, 1985.

United States Bureau of the Census. Statistical Abstract of the United States 1996 (116th Edition). Washington, DC: Department of Commerce, Bureau of the Census, 1996.

Vedder, Richard K. and Lowell Gallaway. “Racial Differences in Unemployment in the United States, 1890-1980.” Journal of Economic History 52, no. 3 (1992): 696-702.

Whatley, Warren C. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17, no. 4 (1993): 525-58.

Wilson, William J. The Truly Disadvantaged: The Inner City, the Underclass, and Public Policy. Chicago, IL: University of Chicago Press, 1987.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Citation: Maloney, Thomas. “African Americans in the Twentieth Century”. EH.Net Encyclopedia, edited by Robert Whaples. January 14, 2002. URL http://eh.net/encyclopedia/african-americans-in-the-twentieth-century/

Manpower in Economic Growth: The American Record since 1800

Author(s): Lebergott, Stanley
Reviewer(s): Margo, Robert A.

Classic Reviews in Economic History

Stanley Lebergott, Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill, 1964. xii + 561 pp.

Review Essay by Robert A. Margo, Department of Economics, Boston University.

Manpower after Forty Years

During the first half of the twentieth century classical musicians routinely incorporated their personalities into their performances. One recognizes immediately Schnabel in Beethoven, Fischer in Bach, Cortot in Chopin, or Segovia in just about anything written for guitar. As the century progressed, performance practice evolved to where the “text” — the music — became paramount. The ideal was to reveal the composer’s intent rather than putting one’s own stamp on the notes — the performer as conduit per se rather than co-composer.

Personal style played a major role in the early years of the cliometrics revolution. Hand a cliometrician an unpublished essay by Robert Fogel or Stanley Engerman, and I am quite sure she could identify the author after reading the first couple of paragraphs (if not the first couple of sentences). No one can possibly mistake a book by Doug North for a book by Peter Temin or an essay by Paul David for one by Lance Davis or Jeffrey Williamson. To some extent this is because personal style mattered at the time in economics generally — think Milton Friedman or Robert Solow. But mostly it mattered, I think, because these cliometricians were on a mission. Men and women on a mission put their personalities up front, because they are trying to shake up the status quo.

So it is with Stanley Lebergott. Indeed, of all the personalities who figured in the transformation of economic history from a sub-field of economics (I am tempted to write “intellectual backwater”) that eschewed advances in economic theory and econometrics to one that embraced them (I am tempted to write “for better and for worse”), Lebergott’s style was perhaps the most personal. In re-reading Lebergott’s most famous book — his Manpower in Economic Growth: The American Record since 1800 (1964) — one sees that style front and center on nearly every page, as well as the conflicting emotions as its author tried, not always successfully, to marry the anecdotal and archival snippets beloved by historians with the methods of economics. Manpower was (and is) substantively important for two reasons. First, prior to Manpower, the “economic history of labor” meant unions and labor legislation. By contrast, Lebergott made the labor market — the demand and supply of labor — his central focus and in doing so elevated markets and market forces to a central tendency in the writing of economic history. Second, Lebergott produced absolutely fundamental data — estimates of the labor force, industrial composition, unemployment, real wages, self-employment, and the like — that economic historians have relied on (or embellished) ever since.

These two accomplishments aside, I emphasize style not because, in Manpower’s case, it is light years from the average article that I accept for publication in Explorations in Economic History. Economic history, like all economics, is vastly more technical than it was in the early 1960s. Burrowing into the style of Manpower reveals an author transfixed with what he perceived to be the grandness of the American experiment, the transformation of a second-rate colony into the greatest economy the world had yet seen. The core of Manpower would always be its 33 appendix tables and 252 (!) pages of accompanying explanatory text lovingly produced and so relentlessly documented as to drive any reader to distraction (or tears). So much the line in the sand, daring — indeed, taunting — the reader to do better. Lebergott knew that, in principle, one could do better, because he did not have ready access to all the relevant archival materials. I would conjecture, however, that he would always be surprised if anyone did, in fact, do better. Tom Weiss, himself one of the great compilers of American economic statistics, spent several years redoing Lebergott’s labor force estimates using census micro data rather than the published volumes that Lebergott relied on (Weiss 1986). In commenting on Weiss’s work, Lebergott (1986) characterized the differences between his original figures and the revisions as “very small beer” and then took Weiss to task for failing (in Lebergott’s view) to fully justify the revisions. “One awaits with interest,” he concluded, “further work by the National Bureau of Economic Research project of which this is a part.” When Georgia Villaflor and I (Margo and Villaflor 1987) produced a series of real wage estimates for the antebellum period drawing on archival sources that Lebergott did not use, I received a polite letter congratulating me but requesting more details and admonishing me to think harder about certain estimates that Lebergott felt did not mesh fully with his priors. There are thousands of numbers in those 33 appendix tables and one’s sense is that each number received the undivided attention of its creator for many, many, many hours.

But numbers do not a narrative make. Chapter One, “The Matrix,” has little in common with the archetypal introduction that gives the reader a roadmap and a flavor of the findings. It begins rather with an 1802 quote from “The Reverend Stanley Griswold” about the frontier that lay before the good minister. “This good land, which stretches around us to such a vast extent … large like the munificence of heaven … [s]uch a noble present never before was given to any people.” (Reviewer’s note: any people? Which people?) The first sentence goes on to describe an incongruous scene from Kentucky in 1832, “a petit bon homme” and his wife and their “little pile of trunks” sitting in a restaurant in the middle of (literally) nowhere. We then learn of a “great theme” of American history, that which motivated those who wrested the land from the “wilderness” — a belief in an open society, of which there were three elements. First, “hope” — an unabashed belief that things will always get better, and were better in America than in Europe. Second, “ignorance” — Americans were always willing to try something new, no matter how crazy. Third, America had a huge amount of space for people to spread out in. OK, the reader says, but where’s the economics? Ca. page 13 Lebergott emphasizes that the three elements made Americans unusually restless people, willing to move all the time. Ordinarily, Lebergott opines, it is the smaller (geographically-speaking) countries that have higher labor productivity because, ordinarily, people do not like to move. But Americans liked to move, he claims, and they did so on the slightest provocation. Excessive optimism, misinformation, and folly are core attributes of the American spirit and key factors in the American success story. In the end, the errors didn’t matter anyway (“small beer” indeed) because the land was so rich. More people moved to California in 1850 than could be rationally justified by the expected returns to gold mining but, as a result, California entered the aggregate production function sooner than otherwise. Labor mobility per se was a Good Thing, and America had it in abundance.

Chapter Two asks where all the workers would come from. Lebergott notes that certain labor supplies were highly predictable — slaves, for example. But once the slave trade was abolished the supply of slave labor grew only at the natural rate of increase. If the riches of America were to be tapped, free labor would have to be found — all the more difficult if the required number of workers to be assembled in any given spot was very large.

Another element of the Lebergott style is a dry wit, as evidenced in his exchange with Weiss. In a section on “[t]he Labor Force: Definition” we are told that “[t]he baby has contributed more to the gaiety of nations than have all the nightclub comics in history. We include the comic in the labor force … as we include [his] wages in the national income but set no value on the endearing talents provided by the baby.” In discussing the then-fashionable notion that the aggregate labor force participation rate (like other Great Ratios) was “invariant to economic conditions” Lebergott notes that small changes can nevertheless have great import. “The United States Cavalry,” he observes, “was sent to the State of Utah because of the difference between 1.0 wives per husband and a slightly greater number.” The remainder of the chapter considers segments of the labor force whose labor was, indeed, “responsive to economic conditions” — European immigrants, internal migrants, (some) women and children as well as the impact of social and political factors on labor supply; it demonstrates the extraordinary flexibility of the American labor force and its responsiveness to incentives. While this conclusion would not surprise anyone today it was, I think, quite revolutionary at the time. It is as good an example as any I know of the power of historical thinking to debunk conventional wisdom derived from today’s numbers.

By now the reader is accustomed to Lebergott’s modus operandi — the opening paragraph that sometimes seems to be beside the point but really isn’t; quotations in the text from travelogues, diaries, plays, literature and what-not; obscure (to say the least) references in the footnotes; all interspersed with economic reasoning that has more than a tinge of what would be called today “behavioral” economics. In Chapter Three Lebergott talks about the “process” of labor mobility, which is really one extended probing into the relationship between mobility of various sorts and wage differentials. We get to see some univariate regression lines, superimposed in scatter-plots of decade-by-decade changes in the labor force at, say, the state level, against initial wage rates. Generally, labor flows were directed at states with higher initial wage rates, although Lebergott is quick to assert that “[m]igrants suboptimized” because the cross-state pattern was far less apparent at the level of regions. Next, Lebergott takes on the notion that economic development is an inexorable process of labor shifting out of agriculture. The American case, Lebergott claimed, challenges this notion. American workers shifted out of agriculture when the economic incentives were right; that is, when the value of the marginal product of labor was higher outside of agriculture.

The remainder of Chapter 3 is divided into two brief sections, both of which contain some of the most interesting writing in the book. In “Social Mobility and the Division of Labor,” Lebergott examines the relationship between occupational specialization and growth. In the nineteenth century most workers possessed a myriad of skills, farmers especially. They were jacks of all trades, masters of none. Lebergott speculates that this was a good thing because the master of none was more inclined to try something new, rather than assume he was, well, the master and therefore knew everything. If some fraction of novel techniques were successful, this could (under strong assumptions) lead to a higher rate of technical progress. “Origins of the Factory System” considers the problem posed earlier in the book of assembling large numbers of workers at a given location. Rather than pay higher wages, manufacturers turned to an under-utilized source of labor, women and children. Some years later, the ideas presented in this section would develop in full bloom in a celebrated article by Claudia Goldin and Kenneth Sokoloff (Goldin and Sokoloff 1982) on the role of female and child labor in early industrialization.

At 89 pages, Chapter Four, “Some Consequences,” is the longest chapter in the book. The first few pages, highly influential, are given to the formation of a national labor market, revealed by changes over time in the coefficient of variation of wages across locations. We are then given an extended tour of the history of American real wages, back and forth between the relevant tables in the appendix, quotations from contemporaries and other anecdotal evidence. The “Determinants of Real Wage Trends” comes next. The first, productivity, is no surprise. The second, “Slavery,” isn’t really either, but here Lebergott’s contrarian instincts, I think, get the better of him. Lebergott would have the reader believe that, first, free and slave labor were close to perfect substitutes; and, second, slave rental rates contained a premium above what the slave would have commanded in a free labor market. Consequently, when slavery ended, wages fell and there was downward pressure on real wage growth for a time. No question that wages fell in the South after the Civil War but Lebergott’s analysis is incomplete at best. Slave labor was highly productive before the Civil War because of the gang system, and when the gang system ended, the demand for labor fell in the South. Because labor supplies were not perfectly elastic, wages fell too. “Immigration,” the third purported influence, had negative short run effects on wages but positive long run effects via productivity growth.
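
The integration measure mentioned at the start of that tour, the coefficient of variation of wages across locations (standard deviation divided by the mean), is easy to compute; a falling coefficient signals a more unified national labor market. A minimal sketch in Python, with invented wage cross-sections rather than Lebergott’s data:

    import statistics

    def coefficient_of_variation(wages):
        """Dispersion of local wages relative to their mean."""
        return statistics.pstdev(wages) / statistics.mean(wages)

    # Hypothetical daily wages (dollars) in five locations at two dates (illustrative only).
    wages_early = [0.80, 1.00, 1.30, 1.60, 0.90]
    wages_late = [1.40, 1.50, 1.55, 1.65, 1.45]

    print(coefficient_of_variation(wages_early))  # larger: segmented local markets
    print(coefficient_of_variation(wages_late))   # smaller: a more national labor market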

What follows next is a 25-page section that years later produced two high-profile controversies in macroeconomics. This is the (celebrated) section where Lebergott presents his long-term estimates of unemployment. In thinking today about his work, we would do well to remember that, at the time he prepared his estimates, the United States had only a relatively brief experience with the direct and regular measurement of unemployment, courtesy of the 1940 Census and the subsequent Current Population Survey (CPS). (By “direct” I mean answers to questions about a worker’s time allocation during a specific period of time — if you did not have a job during the survey week, were you looking for one?)

Like all the estimates in the book, Lebergott’s unemployment figures were the product of detailed, painstaking work that, inevitably, required strong assumptions. The fundamental problem was that, if one wanted annual estimates of unemployment, there was no way to obtain these directly from survey evidence prior to the CPS. For some benchmark dates one could produce tolerable direct estimates from the federal census, but the federal census was useless if one wanted to generate an estimate, say, for 1893 or, for that matter, 1933.

Lebergott’s solution was to rely on an identity. By definition, the labor force is the sum of employed and unemployed workers. One might not know the number of unemployed workers directly, but if one could estimate the labor force and employment between benchmark dates, unemployment could be obtained by subtraction.
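
The mechanics of the identity are straightforward even though the underlying measurement work was not. A minimal sketch, with made-up benchmark figures rather than Lebergott’s own, and simple linear interpolation standing in for his far more elaborate year-to-year indicators:

    # Hypothetical benchmark totals, in thousands (illustrative only).
    labor_force_benchmarks = {1900: 29000, 1910: 37000}
    employment_benchmarks = {1900: 27500, 1910: 35300}

    def interpolate(benchmarks, year):
        """Linear interpolation between two benchmark (census) years."""
        (y0, v0), (y1, v1) = sorted(benchmarks.items())
        return v0 + (v1 - v0) * (year - y0) / (y1 - y0)

    for year in range(1900, 1911):
        lf = interpolate(labor_force_benchmarks, year)
        emp = interpolate(employment_benchmarks, year)
        unemployed = lf - emp  # the identity: unemployment = labor force - employment
        print(year, round(unemployed), f"{unemployed / lf:.1%}")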

The first high profile controversy involved Lebergott’s estimates for the 1930s, which included in the count of unemployed workers persons on work relief. After 1933 there were many such workers, and so, by historical standards, unemployment looks, of course, rather high. This generated a lot of theoretical work for macroeconomists who thought they had to explain how unemployment rates could remain above 10 percent while real wages were rising (after 1933).

Michael Darby (1976) suggested that this effort was misplaced because Lebergott “should” have included the persons on work relief in the count of employed workers. Darby showed that doing so made the recovery after 1933 look much more normal. I’ve written a few papers on this issue, and my view is somewhere in-between Darby and Lebergott (Margo 1991; Finegan and Margo 1994; see also Kesselman and Savin 1978). Ideally, in constructing labor force statistics we should be consistent over time, so if persons on work relief were “employed” in the 1930s we should consider adding, say, “workfare” recipients to the labor force (or, possibly, prisoners making license plates) today, but this ideal may not be achievable in practice. The real issue with New Deal work relief is not the resolution of a crusty debate between competing macroeconomic theories but whether the program affected individual behavior. Here I think the answer is a resounding yes — unemployed individuals in the 1930s did respond to incentives built into New Deal policies. Wives were far more likely to be “added workers” if their unemployed spouses had no work whatsoever than if the spouse held a work relief job, so much so that, in the aggregate, the added worker effect disappeared entirely in the late 1930s, because so many unemployed men were on work relief.

The second high-profile debate involved Christina Romer’s important work on the long-term properties of the American business cycle. Prior to her work it was (and in some quarters still is) a “stylized fact” that the business cycle today is less volatile than it was in the past. Lebergott’s original unemployment series combined with standard post-war series were often used to buttress claims that the macroeconomy become much more stable over time. Statistical measures of volatility estimated from the combined series clearly suggest this, whether volatility is measured by the average “distance” (in percentage points) between peaks and troughs or standard deviations.

Romer (1986) argued that, to a large degree, this apparent decline in volatility was a figment of the way the original data were constructed. In particular, in constructing his annual series, Lebergott assumed (among other things) that deviations in employment followed one-for-one deviations in output. Romer invoked Okun’s law, arguing that the true relationship was more like 1:3. Constructing post-war series by replicating (as closely as possible) Lebergott’s procedures produced a new series that was not less volatile than the pre-war series, thereby contradicting the stylized fact that the macroeconomy became more stable over time. This was, needless to say, a controversial conclusion, with many subsequently weighing in. Now that the dust has settled, my own view — a view I think that many share, although I could be wrong — is that there is definitely something to Romer’s argument; at the very least, she demonstrated (as she claimed in her original article) that before one draws conclusions from historical time series, one should be very familiar with how the series are constructed. Chapter Four ends with another of Lebergott’s meditations on the alleged constancy of aggregate parameters — in this case, factor shares.
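
Romer’s construction point lends itself to a toy calculation. A minimal sketch in Python (the output deviations are invented, and the assumed employment response is a stand-in for the procedures described above): under a one-for-one assumption the implied unemployment series comes out roughly three times as volatile as under an Okun-style response of one-third.

    import statistics

    # Invented percentage deviations of output from trend (illustrative only).
    output_gaps = [2.0, -3.5, 1.0, -6.0, 4.0, -1.5, 0.5, -2.5]

    def implied_unemployment(gaps, response, base_rate=5.0):
        """Unemployment rates implied by output gaps under an assumed employment response."""
        return [base_rate - response * g for g in gaps]

    one_for_one = implied_unemployment(output_gaps, response=1.0)       # Lebergott-style assumption
    okun_style = implied_unemployment(output_gaps, response=1.0 / 3.0)  # roughly 1:3, as Romer argued

    print(statistics.pstdev(one_for_one))  # about three times as large ...
    print(statistics.pstdev(okun_style))   # ... as this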

Chapter Five (“Some Inferences”) concludes the narrative portion of the book. It repeats the book’s earlier mantra that “Yankee ingenuity” and initiative, especially that embodied in immigrants, were central to American success as opposed, say, to “factor endowments.” It ruminates on how highly mobile labor influenced the choice of technique, in ways familiar to the first generation of cliometricians, especially those who found H.J. Habakkuk a source of (repeated) inspiration. It notes how “thickening markets” made finding continuous work easier over time, reducing the wage premium associated with unemployment risk. Today’s economic historians, infatuated with “institutions” v. “geography,” would probably disagree with the emphases in the chapter, but I think there is much to admire in Lebergott’s “inferences.”

Some economic historians make their mark as much through their graduate students as their writings. Lebergott spent his academic career in a liberal arts college and did not, therefore, directly produce graduate students like a William Parker, a Robert Fogel, or (more recently) a Joel Mokyr. In certain ways he was an outsider to economic history, an economist with a vast and deep appreciation for history in all of its flavors, who saw the past for what it can say about the present, not as an end in itself, as a more “traditional” historian would. Compared with other classic works of cliometrics such as Fogel’s Railroads and American Economic Growth or North and Thomas’s The Rise of the Western World, Manpower‘s quirkiness can be frustrating, making it more suitable for dabbling than for a sustained read. By today’s standards the book falls short in its treatment of racial and ethnic differences (gender is more balanced), although this would hardly distinguish it from most other work in economics and economic history at the time. Yet Lebergott’s influence on economic history has been profound. There are few activities that economic historians can engage in of greater consequence than reconstructing the hard numbers. In this line of work Lebergott had few peers. Manpower put the labor force — people — at the center of economic history, not the bloodless “agents” of economic models but real people. As if to underscore this, the style asserts, like a triple forte (fff) in music: a real person, not a (bloodless) “social scientist,” wrote this book, one in deep and abiding awe of the economic accomplishments of his forebears.

References:

Darby, Michael. 1976. “Three and a Half Million US Employees Have Been Mislaid: Or, An Explanation of Unemployment, 1934-1941,” Journal of Political Economy 84 (February): 1-16.

Finegan, T. Aldrich and Robert A. Margo. 1994. “Work Relief and the Labor Force Participation of Married Women in 1940,” Journal of Economic History 54 (March): 64-84.

Goldin, Claudia and Kenneth Sokoloff. 1982. “Women, Children, and Industrialization in the Early Republic: Evidence from the Manufacturing Censuses,” Journal of Economic History 42 (December): 741-774.

Kesselman, Jonathan R. and N. E. Savin. 1978. “Three and a Half Million Workers Were Never Lost,” Economic Inquiry 16 (April): 186-191.

Lebergott, Stanley. 1964. Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill.

Lebergott, Stanley. 1986. “Comment,” in Stanley Engerman and Robert Gallman, eds., Long Term Factors in American Economic Growth, pp. 671-673. Chicago: University of Chicago Press.

Margo, Robert A. 1991. “The Microeconomics of Depression Unemployment,” Journal of Economic History 51 (June): 333-341.

Margo, Robert A. and Georgia Villaflor. 1987. “The Growth of Wages in Antebellum America: New Evidence,” Journal of Economic History 47 (December): 873-895.

Romer, Christina. 1986. “Spurious Volatility in Historical Unemployment Data,” Journal of Political Economy 94 (February): 1-37.

Weiss, Thomas. 1986. “Revised Estimates of the United States Workforce, 1800-1860,” in Stanley Engerman and Robert Gallman, eds., Long Term Factors in American Economic Growth, pp. 641-671. Chicago: University of Chicago Press.

Robert A. Margo is Professor of Economics and African-American Studies, Boston University, and Research Associate, National Bureau of Economic Research. He is also the editor of Explorations in Economic History.

Subject(s): Labor and Employment History
Geographic Area(s): North America
Time Period(s): 20th Century: WWII and post-WWII