EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

Public Sector Pensions in the United States

Lee A. Craig, North Carolina State University

Introduction

Although employer-provided retirement plans are a relatively recent phenomenon in the private sector, dating from the late nineteenth century, public sector plans go back much further in history. From the Roman Empire to the rise of the early-modern nation state, rulers and legislatures have provided pensions for the workers who administered public programs. Military pensions, in particular, have a long history, and they have often been used as a key element to attract, retain, and motivate military personnel. In the United States, pensions for disabled and retired military personnel predate the signing of the U.S. Constitution.

Like military pensions, pensions for loyal civil servants date back centuries. Prior to the nineteenth century, however, these pensions were typically handed out on a case-by-case basis; except for the military, there were few if any retirement plans or systems with well-defined rules for qualification, contributions, funding, and so forth. Most European countries maintained some type of formal pension system for their public sector workers by the late nineteenth century. Although a few U.S. municipalities offered plans prior to 1900, most public sector workers were not offered pensions until the first decades of the twentieth century. Teachers, firefighters, and police officers were typically the first non-military workers to receive a retirement plan as part of their compensation.

By 1930, pension coverage in the public sector was relatively widespread in the United States, with all federal workers being covered by a pension and an increasing share of state and local employees included in pension plans. In contrast, pension coverage in the private sector during the first three decades of the twentieth century remained very low, perhaps as low as 10 to 12 percent of the labor force (Clark, Craig, and Wilson 2003). Even today, pension coverage is much higher in the public sector than it is in the private sector. Over 90 percent of public sector workers are covered by an employer-provided pension plan, whereas only about half of the private sector work force is covered (Employee Benefit Research Institute 1997).

It should be noted that although today the term “pension” generally refers to cash payments received after the end of one’s working years, typically in the form of an annuity, historically the term covered a much wider range of retiree benefits, including survivors’ annuities and disability payments. In the United States, for example, the initial army and navy pension systems were primarily disability plans. However, disability was often liberally defined and included superannuation, that is, the inability to perform regular duties due to infirmities associated with old age. In fact, every disability plan created for U.S. war veterans eventually became an old-age pension plan, and the history of these plans often reflected broader economic and social trends.

Early Military Pensions

Ancient Rome

Military pensions date from antiquity. Almost from its founding, the Roman Republic offered pensions to its successful military personnel; however, these payments, which often took the form of land or special appropriations, were generally ad hoc and typically based on the machinations of influential political cliques. As a result, on more than one occasion, a pension served as little more than a bribe to incite soldiers to serve as the personal troops of the politicians who secured the pension. No small amount of the turmoil accompanying the Republic’s decline can be attributed to this flaw in Roman public finance.

After establishing the Empire, Augustus, who knew a thing or two about the politics and economics of military issues, created a formal pension plan (13 BC): Veteran legionnaires were to receive a pension upon the completion of sixteen years in a legion and four years in the military reserves. This was a true retirement plan designed to reward and mollify veterans returning from Rome’s frontier campaigns. The original Augustan pension suffered from the fact that it was paid from general revenues (and Augustus’ own generous contributions), and in 5 AD (6 AD according to some sources), Augustus established a special fund (aerarium militare) from which retiring soldiers were paid. Although the length of service was also increased from sixteen years on active duty to twenty (and five years in the reserves), the pension system was explicitly funded through a five percent tax on inheritances and a one percent tax on all transactions conducted through auctions — essentially a sales tax. Retiring legionnaires were to receive 3,000 denarii; centurions received considerably larger stipends (Crook 1996). In the first century AD, a lump-sum payment of 3,000 denarii would have represented a substantial amount of money — at least by working class standards. A single denarius equaled roughly a day’s wage for a common laborer; so at an eight percent discount rate (Homer and Sylla 1991), the pension would have yielded an annuity of roughly 66 to 75 percent of a laborer’s annual earnings. Curiously, the basic parameters of the Augustan pension system look much like those of modern public sector pension plans. Although the state pension system perished with Rome, the key features — twenty to twenty-five years of service to qualify and a “replacement rate” of 66 to 75 percent — would reemerge more than a thousand years later to become benchmarks for modern public sector plans.
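The replacement-rate figure above can be checked with a back-of-the-envelope calculation. In the sketch below, only the 3,000-denarii payment and the eight percent discount rate come from the sources cited; the 320-to-360 working days per year used to bracket a laborer’s annual earnings are an assumption.

```python
# Back-of-the-envelope check of the Augustan pension's replacement rate.
# Assumption (not in the sources): a common laborer worked roughly 320
# to 360 days per year at one denarius per day.
lump_sum = 3_000       # denarii paid to a retiring legionnaire
discount_rate = 0.08   # per Homer and Sylla (1991)

annuity = lump_sum * discount_rate  # perpetuity income: 240 denarii per year

for working_days in (360, 320):
    replacement = annuity / working_days
    print(f"{working_days} working days: {replacement:.0%} of annual earnings")
```

At 360 working days the pension replaces about two-thirds of a laborer’s earnings; at 320 days, three-quarters — the 66 to 75 percent range quoted in the text.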

Early-modern Europe

The Roman pension system collapsed, or perhaps more accurately withered away, with Rome itself, and for nearly a thousand years military service throughout Western Civilization was based on personal allegiance within a feudal hierarchy. During the Middle Ages, there were no military pensions strictly comparable to the Roman system, but with the establishment of the nation state came the reemergence of standing armies led by professional soldiers. Like the legions of Imperial Rome, these armies owed their allegiance to a state rather than to a person. The establishment of standardized systems of military pensions followed very shortly thereafter, beginning as early as the sixteenth century in England. During its 1592-93 session, Parliament established “reliefe for Souldiours … [who] adventured their lives and lost their limbs or disabled their bodies” in the service of the Crown (quoted in Clark, Craig, and Wilson 2003, p. 29). Annual pensions were not to exceed ten pounds for “private soldiers” or twenty pounds for a “lieutenant.” Although one must be cautious in the use of income figures and exchange rates from that era, an annuity of ten pounds would have roughly equaled fifty gold dollars (at subsequent exchange rates), which was the equivalent of per capita income a century or so later, making the pension generous by contemporary standards.

These pensions were nominally disability payments, not retirement pensions, though governments often awarded the latter on a case-by-case basis, and by the eighteenth century all of the other early-modern Great Powers — France, Austria, Spain, and Prussia — maintained some type of military pensions for their officer castes. These public pensions were not universally popular. Indeed, they were often viewed as little more than spoils. Samuel Johnson famously described a public pension as “generally understood to mean pay given to a state-hireling for treason to his country” (quoted in Clark, Craig, and Wilson 2003, 29). By the early nineteenth century, Britain, France, Prussia, and Spain all had formal retirement plans for their military personnel. The benchmark for these plans was the British “half-pay” system in which retired, disabled or otherwise unemployed officers received roughly fifty percent of their base pay. This was fairly lucrative compared to the annuities received by their continental counterparts.

Military Pensions in the United States

Prior to the American Revolution, Britain’s American colonies provided pensions to disabled men who were injured defending the colonists and their property from the French, the Spanish, and the natives. During the Revolutionary War the colonies extended this coverage to the members of their militias. Several colonies maintained navies, and they also offered pensions to their naval personnel. Independent of the actions of the colonial legislatures, the Continental Congress established pensions for its army (1776) and naval forces (1775). U.S. military pensions have been continuously provided, in one form or another, ever since.

Revolutionary War Era

Although initially these were all strictly disability plans, in order to keep the troops in the field during the crucial months leading up to the Battle of Yorktown (1781), Congress authorized the payment of a life annuity, equal to one-half base pay, to all officers remaining in the service for the duration of the Revolution. It was not long before Congress and the officers in question realized that the national government’s cash-flow situation and the present value of its future revenues were insufficient to meet this promise. Ultimately, the leaders of the disgruntled officers met at Newburgh, New York, and pressed their demands on Congress, and in the spring of 1783, Congress converted the life annuities to a fixed-term payment equal to full pay for five years. Even these more limited obligations were not fully paid to qualifying veterans, and only the direct intervention of George Washington defused a potential coup (Ferguson 1961; Middlekauff 1982). The Treaty of Paris was signed in September of 1783, and the Continental Army was furloughed shortly thereafter. The officers’ pension claims were subsequently met to a degree by special interest-bearing “commutation certificates” — bonds, essentially. It took another eight years before the Constitution and Alexander Hamilton’s financial reforms placed the new federal government in a position to honor these obligations by the issuance of the new (consolidated) federal debt. However, because of the country’s precarious financial situation, between the Revolution and the consolidation of the debt, many embittered officers sold their “commutation” bonds in the secondary market at a steep discount.

In addition to a “regular” army pension plan, every war from the Revolution through the Indian Wars of the late nineteenth century saw the creation of a pension plan for the veterans of that particular war. Although every one of those plans was initially a disability plan, each was eventually converted into an old-age pension plan — though this conversion often took a long time. The Revolutionary War plan became a general retirement plan in 1832 — 49 years after the Treaty of Paris ended the war. At that time every surviving veteran of the Revolutionary War received a pension equal to 100 percent of his base pay at the end of the war. Similarly, it was 56 years after the War of 1812 before survivors of that war were given retirement pensions.

Severance Pay

As for a retirement plan for the “regular” army, there was none until the Civil War; however, soldiers who were discharged after 1800 were given three months’ pay as severance. Officers were initially offered the same severance package as enlisted personnel, but in 1802, officers began receiving one month’s pay for each year of service over three years. Hence an officer with twelve years of service earning, say, $40 a month could, theoretically, convert his severance into an annuity, which at a six percent rate of interest would pay $2.40 a month, or less than $30 a year. This was substantially less than a prime farmhand could expect to earn and a pittance compared to that of, say, a British officer. Prior to the onset of the War of 1812, Congress supplemented these disability and severance packages with a type of retirement pension. Any soldier who enlisted for five years and who was honorably discharged would receive, in addition to his three months’ severance, 160 acres of land from the so-called military reserve. If he was killed in action or died in the service, his widow or heir(s) would receive the same benefit. The reservation price of public land at that time was $2.00 per acre ($1.64 for cash). So the combined package of land and severance would have been worth roughly $350, which, annuitized at six percent, would have yielded less than $2.00 a month in perpetuity. This was an ungenerous settlement by almost any standard. Of course, in a nation of small farmers, 160 acres might have represented a good start for a young, cash-poor farmhand just out of the army.
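The annuity arithmetic in this paragraph is easy to reproduce. The sketch below uses only figures given above, except that the roughly $10 monthly pay used to value the enlisted soldier’s three months’ severance is an assumption, not a figure from the source.

```python
# Reproducing the severance-annuity arithmetic from the text.
# Hypothetical officer: $40 a month and twelve years of service, which
# the text values at twelve months' severance pay.
monthly_pay = 40.0
severance = 12 * monthly_pay          # $480 lump sum
rate = 0.06                           # six percent rate of interest
annuity_per_month = severance * rate / 12
print(f"Officer: ${annuity_per_month:.2f} per month")   # $2.40 a month

# Enlisted package before the War of 1812: 160 acres at the $2.00
# reservation price, plus three months' severance valued at roughly $30
# (assumes about $10/month enlisted pay -- an assumption, not in the text).
package = 160 * 2.00 + 30.0           # roughly $350
print(f"Enlisted: ${package * rate / 12:.2f} per month")  # under $2.00
```

Both results match the text: $2.40 a month (under $30 a year) for the officer, and less than $2.00 a month for the enlisted package.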

The Army Develops a Retirement Plan

The Civil War resulted in a fundamental change in this system. Seeking the power to cull the active list of officers, the Lincoln administration persuaded Congress to pass the first general army retirement law. All officers could apply for retirement after 40 years of service, and a formal retirement board could retire any officer (after 40 years of service) who was deemed incapable of field service. A limit was placed on the number of officers who could be retired in this manner. Congress amended the law several times over the next few decades, with the key changes coming in 1870 and 1882. Taken together, these acts established 30 years as the minimum service requirement, 75 percent of base pay as the standard pension, and age 64 as the mandatory retirement age. This was the basic army pension plan until 1920, when Congress established the “up-or-out” policy in which an officer who was not deemed to be on track for promotion was retired. Upon retirement, he was to receive a benefit equal to 2.5 percent multiplied by years of service, not to exceed 75 percent of his base pay at the time of retirement. Although the maximum was reduced to 60 percent in 1924, it was subsequently increased back to 75 percent, and the service requirement was reduced to 20 years. In its essentials, this remains the basic plan for military personnel to this day (Hustead and Hustead 2001).

Except for the disability plans that were eventually converted to old-age pensions, prior to 1885 the army retirement plan was only available to commissioned officers; however, in that year Congress created the first systematic retirement plan for enlisted personnel in the U.S. Army. Like the officers’ plan, it permitted retirement upon the completion of 30 years’ service at 75 percent of base pay. With the subsequent reduction in the minimum service requirement to 20 years, the enlisted plan merged with that for officers.

Naval Pensions

Until after World War I, the army and the navy maintained separate pension plans for their officers. The Continental Navy created a pension plan for its officers and seamen in 1775, even before an army plan was established. In the following year the navy plan was merged with the first army pension plan, and it too was eventually converted to a retirement plan for surviving veterans in 1832. The first disability pension plan for “regular” navy personnel was created in 1799. Officers’ benefits were not to exceed half-pay, while those for seamen and marines were not to exceed $5.00 a month, which was roughly 33 percent of an unskilled seaman’s base pay or 25 percent of that of a hired laborer in the private sector.

Except for the eventual conversion of the war pensions to retirement plans, there was no formal retirement plan for naval personnel until 1855. In that year Congress created a review board composed of five officers from each of the following ranks: captain, commander, and lieutenant. The board was to identify superannuated officers or those generally found to be unfit for service, and at the discretion of the Secretary of the Navy, the officers were to be placed on the reserve list at half-pay subject to the approval of the President. Before the plan had much impact the Civil War intervened, and in 1861 Congress established the essential features of the navy retirement plan, which were to remain in effect throughout the rest of the century. Like the army plan, retirement could occur in one of two ways: either a retirement board could find the officer incapable of continuing on active duty, or, after 40 years of service, an officer could apply for retirement. In either case, officers on the retired list remained subject to recall; they were entitled to wear their uniforms; they were subject to the Articles of War and courts-martial; and they received 75 percent of their base pay. However, just as with the army, certain constraints on the length of the retired list limited the effectiveness of the act.

In 1899, largely at the urging of then Assistant Secretary of the Navy Theodore Roosevelt, the navy adopted a rather Byzantine scheme for identifying and forcibly retiring officers deemed unfit to continue on active duty. Retirement (or “plucking”) boards were responsible for identifying those to be retired. Officers could avoid the ignominy of forced retirement by volunteering to retire, and there was a ceiling on the number who could be retired by the boards. In addition, all officers retired under this plan were to receive 75 percent of the sea pay of the next rank above that which they held at the time of retirement. (This last feature was amended in 1912, and officers simply received three-fourths of the pay of the rank in which they retired.) During the expansion of the navy leading up to America’s participation in World War I, the plan was further amended, and in 1915 the president was authorized, with the advice and consent of the Senate, to reinstate any officer involuntarily retired under the 1899 act.

Still, the navy continued to struggle with its superannuated officers. In 1908, Congress finally granted naval officers the right to retire voluntarily at 75 percent of their active-duty pay upon the completion of 30 years of service. In 1916, navy pension rules were again altered, and this time a basic principle – “up or out” (with a pension) – was established, a principle which continues to this day. There were four basic components that differentiated the new navy pension plan from earlier ones. First, promotions to the ranks of rear admiral, captain, and commander were based on the recommendations of a promotion board. Prior to that time, promotions were based solely on seniority. Second, the officers on the active list were to be distributed among the ranks according to percentages that were not to exceed certain limits; thus, there was a limit placed on the number of officers who could be promoted to a certain rank. Third, age limits were placed on officers in each grade. Officers who attained a certain age in a certain rank were retired with pay equal to 2.5 percent multiplied by the number of years in service, with the maximum not to exceed 75 percent of their final active-duty pay. For example, a commander who reached age 50 and who had not been selected for promotion to captain would be placed on the retired list. If he had served 25 years, then he would receive 62.5 percent of his base pay upon retirement. Finally, the act also imposed the same mandatory retirement provision on naval personnel as the 1882 (amended in 1890) act imposed on army personnel, with age 64 being established as the universal age of retirement in the armed forces of the United States.
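The 1916 benefit formula, 2.5 percent per year of service capped at 75 percent of final active-duty pay, can be expressed as a one-line function; the commander example above falls out directly. The function name is of course ours, not the statute’s.

```python
def navy_pension_fraction(years_of_service: int) -> float:
    """Pension as a fraction of final active-duty pay under the 1916 act:
    2.5 percent per year of service, capped at 75 percent."""
    return min(0.025 * years_of_service, 0.75)

# The commander in the text: passed over at age 50 with 25 years of service.
print(navy_pension_fraction(25))   # 62.5 percent of base pay
print(navy_pension_fraction(40))   # the 75 percent cap binds
```

The same formula, with the service requirement reduced to 20 years, still describes the basic military plan discussed earlier.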

These plans applied to naval officers only; however, in 1867 Congress authorized the retirement of seamen and marines who had served 20 or more years and who had become infirm as a result of old age. These veterans would receive one-half their base pay for life. In addition, the act allowed any seaman or marine who had served 10 or more years and subsequently become disabled to apply to the Secretary of the Navy for a “suitable amount of relief” up to one-half base pay from the navy’s pension fund (see below). In 1899, the retirement act of 1885, which covered enlisted army personnel, was extended to enlisted navy personnel, with a few minor differences, which were eliminated in 1907. From that year, all enlisted personnel in both services were entitled to voluntarily retire at 75 percent of their pay and other allowances after 30 years of service, subsequently reduced to 20 years.

Funding U.S. Military Pensions

The history of pensions, particularly public sector pensions, cannot be easily separated from the history of pension finance. The creation of a pension plan coincides with the simultaneous creation of pension liabilities, and the parameters of the plan establish the size and the timing of those liabilities. U.S. Army pensions have always been funded on a “pay-as-you-go” basis from the general revenues of the U.S. Treasury. Thus army pensions have always been simply one more liability of the federal government. Despite the occasional accounting gimmick, the general revenues and obligations of the federal government are highly fungible, and so discussing the actuarial properties of the U.S. Army pension plan is like discussing the actuarial properties of the Department of Agriculture or the salaries of F.B.I. agents. However, until well into the twentieth century, this was not the case with navy pensions. They were long paid from a specific fund established separately from the general accounts of the treasury, and thus, their history is quite different from that of the army’s pensions.

From its inception in 1775, the navy’s pension plan for officers and seamen was financed with monies from the sale of captured prizes — enemy ships and those of other states carrying contraband. This funding mechanism meant that the flow of revenues needed to finance the navy’s pension liabilities was very erratic over time, fluctuating with the fortunes of war and peace. To manage these monies, the Continental Congress (and later the U.S. Congress) established the navy pension fund and allowed the trustees of this fund to invest the monies in a wide range of assets, including private equities. The history of the management of this pension fund illustrates many of the problems that can arise when public pension monies are used to purchase private assets. These include the loss of a substantial proportion of its assets on bad investments in private equities, the treasury’s bailout of the fund for these losses, and investment decisions that were influenced by political pressure. In addition, there is evidence of gross malfeasance on the part of the agents of the fund, including trading on their own accounts, insider trading, and outright fraud.

Excluding a brief interlude just prior to the Civil War, the navy pension fund had a colorful history, lasting nearly one hundred and fifty years. Between its establishment in 1775 and 1842, it went bankrupt no fewer than three times, being bailed out by Congress each time. By 1842, there was little opportunity to continue to replenish the fund with fresh prize monies, and Congress, temporarily as it turned out, converted the navy pensions to a pay-as-you-go system, like army pensions. With the onset of the Civil War, the Union Navy’s blockade of Confederate ports created new prize opportunities; the fund was reestablished, and navy pensions were once again paid from prize monies. The fund subsequently accumulated an enormous balance. Like the antebellum losses of the fund, its postbellum surplus became something of a political football, and after much acrimonious debate, Congress took much of the fund’s balance and turned it over to the treasury. Still, the remnants of the fund persisted into the 1930s (Clark, Craig, and Wilson 2003).

Federal Civil Service Pensions

Like military pensions, pensions for loyal civil servants date back centuries; however, pension plans are of a more recent vintage, generally dating from the nineteenth century in Europe. In the United States, the federal government did not adopt a universal pension plan for civilian employees until 1920. This is not to say that there were no federal pensions before 1920. Pensions were available for some retiring civil servants, but Congress created them on a case-by-case basis. In the year before the federal pension plan went into effect, for example, there were 1,467 special acts of Congress either granting a new pension (912) or increasing the payments on old pensions (555) (Clark, Craig, and Wilson 2003). This process was as inefficient as it was capricious. Ending this system became a key objective of Congressional reforms.

The movement to create public sector pension plans at the turn of the twentieth century reflected the broader growth of the welfare state, particularly in Europe. Many progressives envisioned the nascent European “cradle-to-grave” programs as the precursor of a better society, one with a new social covenant between the state and its people. Old-age pensions would fill the last step before the grave. Although the ultimate goal of this movement, universal old-age pensions, would not be realized until the creation of the social security system during the Great Depression, the initial objective was to have the government supply old-age security to its own workers. To support the movement in the United States, proponents of universal old-age pensions pointed out that by the early twentieth century, thirty-two countries around the world, including most of the European states and many regimes considered to be reactionary on social issues, had some type of old-age pension for their non-military public employees. If the Russians could humanely treat their superannuated civil servants, the argument went, why couldn’t the United States?

Establishing the Civil Service System

In the United States, the key to the creation of a civil service pension plan was the creation of a civil service. Prior to the late nineteenth century, the vast majority of federal employees were patronage employees — that is, they served at the pleasure of an elected or appointed official. With the tremendous growth in the number of such employees in the nineteenth century, the costs of the patronage system eventually outweighed the benefits derived from it. For example, over the century as a whole the number of post offices grew from 906 to 44,848; federal revenues grew from $3 million to over $400 million; and non-military employment went from 1,000 to 100,000. Indeed, the federal labor force nearly doubled in the 1870s alone (Johnson and Libecap 1994). The growth rates of these indicators of the size of the public sector are large even when compared to the dramatic fourteen-fold increase in U.S. population between 1800 and 1900. As a result, in 1883 Congress passed the Pendleton Act, which created the federal civil service, and which was passed largely, though not entirely, along party lines. As the party in power, the Republicans saw the conversion of federal employment from patronage to “merit” as an opportunity to gain the lifetime loyalty of an entire cohort of federal workers. In other words, by converting patronage jobs to civil service jobs, the party in power attempted to create lifetime tenure for its patronage workers. Of course, once in their civil service jobs, protected from the harshest effects of the market and the spoils system, federal workers simply did not want to retire — or put another way, many tended to retire on the job — and thus the conversion from patronage to civil service led to an abundance of superannuated federal workers. Thus began the quest for a federal pension plan.

Passage of the Federal Employees Retirement Act

A bill providing pensions for non-military employees of the federal government was introduced in every session of Congress between 1900 and 1920. Representatives of workers’ groups, the executive branch, and the United States Civil Service Commission, as well as inquiries conducted by congressional committees, all requested or recommended the adoption of retirement plans for civil-service employees. While the political dynamics among these parties were often subtle and complex, the campaigns culminated in the passage of the Federal Employees Retirement Act on May 22, 1920 (Craig 1995). The key features of the original act of 1920 included:

  • All classified civil service employees qualified for a pension after reaching age 70 and rendering at least 15 years of service. Mechanics, letter carriers, and post office clerks were eligible for a pension after reaching age 65, and railway clerks qualified at age 62.
  • The ages at which employees qualified were also mandatory retirement ages. An employee could, however, be retained for two years beyond the mandatory age if his department head and the head of the Civil Service Commission approved.
  • All eligible employees were required to contribute two and one-half percent of their salaries or wages towards the payment of pensions.
  • The pension benefit was determined by the number of years of service. Class A employees were those who had served 30 or more years. Their benefit was 60 percent of their average annual salary during the last ten years of service. The benefits were scaled down through Class F employees (at least 15 years but less than 18 years of service). They received 30 percent of their average annual salary during the last ten years of service.
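The benefit schedule in the last bullet can be sketched as a short function. Only the endpoints are stated above (Class A, 30 or more years, 60 percent; Class F, at least 15 but less than 18 years, 30 percent), so the even six-point steps between the six classes, and the three-year width of each class, are assumptions of uniform scaling rather than figures from the text.

```python
def pension_percent(years_of_service: int) -> float:
    """Benefit under the 1920 act as a percent of average annual salary
    over the last ten years of service. Classes F through A are assumed
    to be three-year bands with evenly spaced benefits (an assumption;
    only the 30 and 60 percent endpoints appear in the text)."""
    if years_of_service < 15:
        return 0.0                                   # not yet eligible
    step = min((years_of_service - 15) // 3, 5)      # 0 = Class F ... 5 = Class A
    return 30.0 + 6.0 * step

print(pension_percent(30))   # Class A: 60.0 percent
print(pension_percent(16))   # Class F: 30.0 percent
```

Under these assumptions a clerk with, say, 19 years of service would fall in the next class above F and receive 36 percent of final-decade average salary.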

Although subsequently revised, this plan remains one of the two main civil service pension plans in the United States, and it served as something of a model for many subsequent pension plans in the United States. The other, newer federal plan, established in 1983, is a hybrid. That is, it has a traditional defined benefit component, a defined contribution component, and a Social Security component (Hustead and Hustead 2001).

State and Local Pensions

Decades before the states or the federal government provided civilian workers with a pension plan, several large American cities established plans for at least some of their employees. Until the first decades of the twentieth century, however, these plans were generally limited to three groups of employees: police officers, firefighters, and teachers. New York City established the first such plan for its police officers in 1857. Like the early military plans, the New York City police pension plan was a disability plan until a retirement feature was added in 1878 (Mitchell et al. 2001). Only a few other (primarily large) cities joined New York with a plan before 1900. In contrast, municipal workers in Austria-Hungary, Belgium, France, Germany, the Netherlands, Spain, Sweden, and the United Kingdom were covered by retirement plans by 1910 (Squier 1912).

Despite the relatively late start, the subsequent growth of such plans in the United States was rapid. By 1916, 159 cities had a plan for one or more of these groups of workers, and 21 of those cities included other municipal employees in some type of pension coverage (Monthly Labor Review, 1916). In 1917, 85 percent of cities with 100,000 or more residents paid some form of police pension, as did 66 percent of those with populations between 50,000 and 100,000; and 50 percent of cities with populations between 30,000 and 50,000 had some pension liability (James 1921). These figures do not mean that all of these cities had a formal retirement plan. They only indicate that a city had at least $1 of pension liability. This liability could have been from a disability pension, a forced savings plan, or a discretionary pension. Still, by 1928, the Monthly Labor Review (April, 1928) could characterize police and fire plans as “practically universal.” At that time, all cities with populations of over 400,000 had a pension plan for either police officers or firefighters or both. Only one did not have a plan for police officers, and only one did not have a plan for firefighters. Several of those cities also had plans for their other municipal employees, and some cities maintained pension plans for their public school teachers separately from state teachers’ plans, which are reviewed below.

Eventually, some states also began to establish pension plans for state employees; initially, however, these plans were primarily limited to teachers. Massachusetts established the first retirement pension plan for general state employees in 1911. The plan required workers to pay up to 5 percent of their salaries into a trust fund, with benefits payable upon retirement. Workers were eligible to retire at age 60, and retirement was mandatory at age 70. At the time of retirement, the state purchased an annuity equal to twice the accumulated value (with interest) of the employee’s contributions. The calculation of the appropriate interest rate was, in many cases, not straightforward: sometimes market rates or yields from a portfolio of assets were employed; sometimes a rate was simply established by legislation (see below). The Massachusetts plan initially became something of a model for subsequent public-sector pensions, but it was soon replaced by what became the standard public-sector defined benefit plan, much like the federal plan described above, in which the pension annuity was based on years of service and end-of-career earnings. Curiously, the Massachusetts plan resembled in some respects what have more recently been called cash balance plans — hybrid plans that contain elements of both defined benefit and defined contribution plans.
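The mechanics of the Massachusetts formula can be illustrated with a short calculation. The sketch below is hypothetical: the salary, the 30-year career, and the use of the board's initial 3 percent rate are assumptions made for illustration, not figures from the plan's records.

```python
def accumulated_contributions(salary, rate, years, contribution_share=0.05):
    """Accumulate end-of-year deposits of contribution_share * salary
    at compound interest rate `rate` for `years` years."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + rate) + contribution_share * salary
    return balance

# Illustrative (assumed) figures: a $1,000 annual salary, the 3 percent
# rate initially set by the Massachusetts retirement board, and a
# 30-year career.
fund = accumulated_contributions(1000, 0.03, 30)   # roughly $2,379
annuity_purchase = 2 * fund   # the state's annuity purchase doubled it
```

The doubling in the last line captures the plan's key feature: the state matched, at retirement, the full accumulated value of the worker's own contributions.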

Relative to the larger municipalities, the states were, in general, quite slow to adopt pension plans for their employees. As late as 1929, only six states had anything like a civil service pension plan for their (non-teacher) employees (Millis and Montgomery 1938). The record shows that pensions for state and local civil servants are, for the most part, twentieth-century developments. However, after individual municipalities began adopting plans for their teachers in the early twentieth century, the states moved fairly aggressively in the 1910s and 1920s to create or consolidate plans for their teachers. By the late 1920s, 21 states had formal retirement plans for their public school teachers (Clark, Craig, and Wilson 2003). This summary suggests that, of all the political units in the United States, the states themselves were the slowest to create pension plans for their civil service workers, but that observation is slightly misleading: in 1930, 40 percent of all state and local employees were schoolteachers, and the 21 states that maintained a plan for their teachers included the most populous states at the time. While public sector pensions at the state and local level were far from universal by the 1920s, they did cover a substantial proportion of public sector workers, and that proportion was growing rapidly in the early decades of the twentieth century.

Funding State and Local Pensions

No discussion of public sector pension plans would be complete without addressing the way in which the various plans were funded. The term “funded pension” is often used to mean a pension plan that has a specific source of revenues dedicated to paying for the plan’s liabilities. Historically, most public sector pension plans required some contribution from the employees covered by the plan, and in a sense this contribution “funded” the plan; however, the term “funded” is more often taken to mean that the pension plan receives a stream of public funds from a specific source, such as a share of property tax revenues. In addition, the term “actuarially sound” is often used to describe a pension plan in which the present value of tangible assets roughly equals the present value of expected liabilities. Whereas one would logically expect an actuarially sound plan to be a funded plan, indeed a “fully funded” plan, a funded plan need not be actuarially sound, because the flow of funds may simply be too small to cover the plan’s liabilities.

Many early state and local plans were not funded at all, and fewer still were actuarially sound. Of course, in another sense, public sector pension plans are implicitly funded to the extent that they are backed by the coercive powers of the state: through their monopoly of taxation, financially solvent and militarily successful states can rely on their tax bases to fund their pension liabilities. Although this is exactly how most of the early state and local plans were ultimately financed, it is not what is typically meant by the term “funded plan.” Still, an important part of the history of state and local pensions revolves around exactly what happened to the funds (mostly employee contributions) that were maintained on behalf of the public sector workers.

Although the maintenance and operation of the state and local pension funds varied greatly during this early period, most plans required a contribution from workers, and this contribution was to be deposited in a so-called “annuity fund.” The assets of the fund were to be “invested” in various ways. In some cases the funds were invested “in accordance with the laws of the state governing the investment of savings bank funds.” In others the investments of the fund were to be credited “regular interest,” defined as “the rate determined by the retirement board, and shall be substantially that which is actually earned by the fund of the retirement association.” This rate varied from state to state. In Connecticut, for example, it was literally a realized rate, that is, a market rate. In Massachusetts, it was initially set at 3 percent by the retirement board, but it subsequently became a realized rate, which turned out to be roughly 4 percent in the late 1910s. In Pennsylvania, the rate was set at 4 percent by law. In addition, all three states created a “pension fund,” which contained the state’s contribution to the workers’ retirement annuity. In Connecticut and Massachusetts, this fund simply consisted of “such amounts as shall be appropriated by the general assembly from time to time.” In other words, the state’s share of the pension was financed on a pay-as-you-go basis. In Pennsylvania, however, the state actually contributed 2.8 percent of a teacher’s salary semi-annually to the state pension fund (Clark, Craig, and Wilson 2003).

By the late 1920s some states were basing their contributions to their teachers’ pension fund on actuarial calculations. The first states to adopt such plans were New Jersey, Ohio, and Vermont (Studenski 1920). What this meant in practice was that the state essentially estimated its expected future liability based on a worker’s experience, age, earnings, life expectancy, and so forth, and then deposited that amount into the pension fund. This was originally referred to as a “scientific” pension plan. These were truly funded and actuarially sound defined benefit plans.
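In spirit, the “scientific” calculation amounted to discounting an expected stream of future benefit payments back to the present and funding that amount today. The sketch below is a stylized illustration only: it ignores salary growth, turnover, and proper mortality tables, and uses a fixed expected payout period as a crude stand-in for life expectancy.

```python
def present_value_of_pension(annual_benefit, years_to_retirement,
                             expected_payout_years, discount_rate):
    """Discount a stream of annual benefit payments, beginning the year
    after retirement and lasting `expected_payout_years` (a crude
    stand-in for life expectancy at retirement), back to the present."""
    pv = 0.0
    first = years_to_retirement + 1
    for t in range(first, first + expected_payout_years):
        pv += annual_benefit / (1 + discount_rate) ** t
    return pv
```

A state following this approach would deposit the computed present value into the pension fund as the liability accrued, which is what made such plans both funded and actuarially sound.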

As noted, several of the early plans paid an annuity based on the performance of the pension fund. The return on the fund’s portfolio is important because it would ultimately determine the soundness of the funding scheme and, in some cases, the actual annuity the worker would receive. Even the funded, defined benefit plans based the worker’s and the employer’s contributions on expected earnings on the invested funds. How did these early state and local pension funds manage the assets they held? Several state plans restricted the funds to holding only those assets that could be held by state-chartered mutual savings banks. Typically, these banks could hold federal, state, or local government debt; in most states they could also hold debt issued by private corporations and, occasionally, private equities. In the first half of the twentieth century, 19 states chartered mutual savings banks. They were overwhelmingly in the Northeast, Midwest, and Far West — the same regions in which state and local pension plans were most prevalent. In most cases, however, the corporate securities were limited to those on a so-called “legal list,” which was supposed to contain only the safest corporate investments. Admission to the legal list was based on a compilation of corporate assets, earnings, dividends, prior default records, and so forth. The objective was to provide a list that consisted of the bluest of blue chip corporate securities; in the early decades of the twentieth century, these lists were dominated by railroad and public-utility issues (Hickman 1958). States, such as Massachusetts, that did not restrict investments to those held by mutual savings banks placed similar limits on state pension funds. Massachusetts limited investments to those that could be made in state-established “sinking funds,” and Ohio explicitly limited its pension funds to U.S. debt, Ohio state debt, and the debt of any “county, village, city, or school district of the state of Ohio” (Studenski 1920).

Collectively, the objective of these restrictions was risk minimization — though the economics of that choice is not as simple as it might appear. Cities and states that invested in their own municipal bonds faced an inherent moral hazard. Specifically, public employees might be forced to contribute a proportion of their earnings to their pension funds. If the city then purchased debt at par from itself for the pension fund when that debt might for various reasons not circulate at par on the open market, then the city could be tempted to go to the pension fund rather than the market for funds. This process would tend to insulate the city from the discipline of the market, which would in turn tend to cause the city to over-invest in activities financed in this way. Thus the pension funds, actually the workers themselves, would essentially be forced to subsidize other city operations. In practice, the main beneficiaries would have been the contractors whose activities were funded by the workers’ pension funds; at the time, these would have included largely sewer, water, and road projects. The Chicago police pension fund offers an example of the problem. An audit of the fund in 1912 reported: “It is to be regretted that there are no complete statistical records showing the operation of this fund in the city of Chicago.” As a recent history of pensions noted, “It is hard to imagine that the records were simply misplaced by accident” (Clark, Craig, and Wilson 2003, 213). Thus, like the U.S. Navy pension fund, the agents of these municipal and state funds faced a moral hazard that scholars are still analyzing more than a century later.

References

Clark, Robert L., Lee A. Craig, and Jack W. Wilson. A History of Public Sector Pensions. Philadelphia: University of Pennsylvania Press, 2003.

Craig, Lee A. “The Political Economy of Public-Private Compensation Differentials: The Case of Federal Pensions.” Journal of Economic History 55 (1995): 304-320.

Crook, J. A. “Augustus: Power, Authority, Achievement.” In The Cambridge Ancient History, edited by Alan K. Bowman, Edward Champlin, and Andrew Lintott. Cambridge: Cambridge University Press, 1996.

Employee Benefit Research Institute. EBRI Databook on Employee Benefits. Washington, D. C.: EBRI, 1997.

Ferguson, E. James. Power of the Purse: A History of American Public Finance. Chapel Hill, NC: University of North Carolina Press, 1961.

Hickman, W. Braddock. Corporate Bond Quality and Investor Experience. Princeton: Princeton University Press, 1958.

Hustead, Edwin C., and Toni Hustead. “Federal Civilian and Military Retirement Systems.” In Pensions in the Public Sector, edited by Olivia S. Mitchell and Edwin C. Hustead, 66-104. Philadelphia: University of Pennsylvania Press, 2001.

James, Herman G. Local Government in the United States. New York: D. Appleton & Company, 1921.

Johnson, Ronald N., and Gary D. Libecap. The Federal Civil Service System and the Problem of Bureaucracy. Chicago: University of Chicago Press, 1994.

Middlekauff, Robert. The Glorious Cause: The American Revolution, 1763-1789. New York: Oxford University Press, 1982.

Millis, Harry A., and Royal E. Montgomery. Labor’s Risk and Social Insurance. New York: McGraw-Hill, 1938.

Mitchell, Olivia S., David McCarthy, Stanley C. Wisniewski, and Paul Zorn. “Developments in State and Local Pension Plans.” In Pensions in the Public Sector, edited by Olivia S. Mitchell and Edwin C. Hustead. Philadelphia: University of Pennsylvania Press, 2001.

Monthly Labor Review, various issues.

Squier, Lee Welling. Old Age Dependency in the United States. New York: Macmillan, 1912.

Studenski, Paul. Teachers’ Pension Systems in the United States: A Critical and Descriptive Study. New York: D. Appleton and Company, 1920.

Citation: Craig, Lee. “Public Sector Pensions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2003. URL http://eh.net/encyclopedia/public-sector-pensions-in-the-united-states/

Path Dependence

Douglas Puffert, University of Warwick

Path dependence is the dependence of economic outcomes on the path of previous outcomes, rather than simply on current conditions. In a path dependent process, “history matters” — it has an enduring influence. Choices made on the basis of transitory conditions can persist long after those conditions change. Thus, explanations of the outcomes of path-dependent processes require looking at history, rather than simply at current conditions of technology, preferences, and other factors that determine outcomes.

Path-dependent features of the economy range from small-scale technical standards to large-scale institutions and patterns of economic development. Several of the most prominent path-dependent features of the economy are technical standards, such as the “QWERTY” standard typewriter (and computer) keyboard and the “standard gauge” of railway track — i.e., the width between the rails. The case of QWERTY has been particularly controversial, and it is discussed at some length below. The case of track gauge is useful for introducing several typical features of path-dependent processes and their outcomes.

Standard Railway Gauges and the Questions They Suggest

Four feet 8-1/2 inches (1.435 meters) is the standard gauge for railways throughout North America, in much of Europe, and, in all, on over half of the world’s railway routes. Indeed, it has been the most common gauge throughout the history of modern railways, since the late 1820s. Should we conclude, as economists often do for popular products or practices, that this standard gauge has proven itself technically and economically optimal? Has it been chosen because of its superior performance or lower costs? If so, has it proven superior for every new generation of railway technology and for all changes in traffic conditions? What of the other gauges, broader or narrower, that are used as local standards in some parts of the world — are these gauges generally used because different technology or different traffic conditions in those regions favor them?

The answer to all these questions is no. The consensus of engineering opinion has usually favored gauges broader than 4’8.5″, and in the late nineteenth century an important minority of engineers favored narrower gauges. Nevertheless, the gauge of 4’8.5″ has always had greater use in practice because of the history of its use. Indeed, even the earliest modern railways adopted the gauge as a result of history. The “father of railways,” British engineer George Stephenson, had experience using the gauge on an older system of primitive coal tramways serving a small group of mines near Newcastle, England. Rather than determining optimal gauge anew for a new generation of railways, he simply continued his prior practice. Thus the gauge first adopted more than two hundred years ago for horse-drawn coal carts is the gauge now used for powerful locomotives, massive tonnages of freight shipments, and passenger trains traveling at speeds as great as 300 kilometers per hour (186 mph).

We will examine the case of railway track gauge in more detail below, along with other instances of path dependence. We first take an analytical look at what conditions may give rise to path dependence — or prevent it from arising, as some critics of the importance of path dependence have argued.

What Conditions Give Rise to Path Dependence?

Durability of Capital Equipment

The most trivial — and uninteresting — form of path dependence is based simply on the durability of capital equipment. Obsolete, inferior equipment may remain in use because its fixed cost is already “sunk” or paid for, while its variable costs are lower than the total costs of replacing it with a new generation of equipment. The duration of this sort of path dependence is limited by the service life of the obsolete equipment.

Technical Interrelatedness

In railways, none of the original gauge-specific capital equipment from the early nineteenth century remains in use today. Why, then, has Stephenson’s standard gauge persisted? Part of the reason is the technical interrelatedness of railway track and the wheel sets of rolling stock. When either track or rolling stock wears out, it must be replaced with equipment of the same gauge, so that the wheels will still fit the track and the track will still fit the wheels. Railways almost never replace all their track and rolling stock at the same time. Thus a gauge readily persists beyond the life of any piece of equipment that uses it.

Increasing Returns

A further reason for the persistence, and indeed the spread, of the Stephenson gauge is increasing returns to the extent of use. Different railway companies or administrations benefit from using a common gauge, because this saves costs and improves both service quality and profits on through-shipments or passenger trips that pass over each other’s track. New railways have therefore nearly always adopted the gauge of established connecting lines, even when engineers have favored different gauges. Once built, a railway is reluctant to change its gauge unless neighboring lines do so as well, which adds coordination costs to the physical costs of any conversion.

In early articles on path dependence, Paul David (1985, 1987) listed these same three conditions for path dependence: first, the technical interrelatedness of system components; second, increasing returns to scale in the use of a common technique; and, third, “quasi-irreversibility of investment,” for example in the durability of capital equipment (or of human capital). The third condition gives rise to switching costs, while the first two conditions make gradual change impractical and rapid change costly, due to the transactions costs required to coordinate the actions of different agents. Thus together, these three conditions may lend persistence or stability to a particular path of outcomes, “locking in” a particular feature of the economy, such as a standard railway track gauge.

David’s early work on path dependence represents, in part, the culmination of an earlier economic literature on technical interrelatedness (Veblen 1915; Frankel 1955; Kindleberger 1964; David 1975). By contrast, the other co-developer of the concept of path dependence, W. Brian Arthur, based his ideas on an analogy between increasing returns in the economy, particularly when expressed in the form of positive externalities, and conditions that give rise to positive feedbacks in the natural sciences.

Dynamic Increasing Returns to Adoption

In a series of theoretical papers starting in the early 1980s, Arthur (1989, 1990, 1994) emphasized the role of “increasing returns to adoption,” especially dynamic increasing returns that develop over time. These increasing returns might arise on the supply side of a market, as a result of learning effects that lower the cost or improve the quality of a product as its cumulative production increases. Alternatively, increasing returns might arise on the demand side of a market, as a result of positive “network” externalities, which raise the value of a product or technique for each user as the total number of users increases (Katz and Shapiro 1985, 1994). In the context of railways, for example, a railway finds a particular track gauge more valuable if a greater number of connecting railways use that gauge. (Note that a track gauge is not a “product” but rather a “technology,” as Arthur puts it, or a “technique,” as I prefer to call it.)

In Arthur’s (1989) basic analytical framework, “small events,” which he treated as random, lead to early fluctuations in the market shares of competing techniques. These fluctuations are magnified by positive feedbacks, because techniques with larger market shares tend to be more valuable to new adopters. As a result, one technique grows in market share until it is “locked in” as a de facto standard. In a simple version of Arthur’s model (Table 1), different consumers or firms initially favor different products or techniques. At first, market share for each technique fluctuates randomly, depending on how many early adopters happen to prefer each technique. Eventually, however, one of the techniques will gain enough of a lead in market share that it will offer higher payoffs to everyone — including to the consumers or firms that have a preference for the minority technique. For example, if the total number of adoptions for technique A reaches 80, while the number of adoptions of B is less than 60, then technique A offers higher payoffs for everyone, and it is locked in as the de facto standard.

Table 1. Adoption Payoffs in Arthur’s Basic Model

Number of previous adoptions     0    10    20    30    40    50    60    70    80    90
“R-type agents” (who prefer technique A):
  Technique A                   10    11    12    13    14    15    16    17    18    19
  Technique B                    8     9    10    11    12    13    14    15    16    17
“S-type agents” (who prefer technique B):
  Technique A                    8     9    10    11    12    13    14    15    16    17
  Technique B                   10    11    12    13    14    15    16    17    18    19

Source: Adapted from Arthur (1989).
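Arthur's adoption dynamics can be sketched as a short simulation. The payoff function below reproduces the structure of Table 1 (base payoff 10 for an agent's preferred technique, 8 for the other, plus 0.1 per previous adoption of whichever technique is chosen); the agent count and random seed are arbitrary illustrative choices, not part of Arthur's model.

```python
import random

def simulate_arthur(n_agents=500, seed=0):
    """Sketch of Arthur's (1989) basic model using the Table 1 payoff
    structure: an agent receives a base payoff of 10 from its preferred
    technique and 8 from the other, plus 0.1 per previous adoption of
    whichever technique it chooses."""
    rng = random.Random(seed)
    adoptions = {"A": 0, "B": 0}
    for _ in range(n_agents):
        preferred = rng.choice("AB")  # R-types prefer A, S-types prefer B

        def payoff(tech):
            base = 10 if tech == preferred else 8
            return base + 0.1 * adoptions[tech]

        # Each agent myopically picks the technique with the higher
        # payoff given the adoptions made so far.
        choice = "A" if payoff("A") >= payoff("B") else "B"
        adoptions[choice] += 1
    return adoptions
```

Because the base-payoff gap is 2 and each prior adoption adds 0.1, a lead of more than 20 adoptions locks in the leading technique: from that point even agents who prefer the minority technique choose the leader. This is the threshold the text describes, where technique A is locked in once its adoptions reach 80 while B's remain below 60.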

Which of the competing techniques becomes the de facto standard is unpredictable on the basis of systematic conditions. Rather, later outcomes depend on the specific early history of the process. If early “small” events and choices are governed in part by non-systematic factors — even “historical accidents” — then these factors may have large effects on later outcomes. This is in contrast to the predictions of standard economic models, where decreasing returns and negative feedbacks diminish the impact of non-systematic factors. To cite another illustration from the history of railways, George Stephenson’s personal background was a non-systematic or “accidental” factor that, due to positive feedbacks, had a large influence on the entire subsequent history of track gauge.

Efficiency, Foresight, Remedies, and the Controversy over Path Dependence

Arthur’s (1989) basic model of a path-dependent process considered a case in which the selection of one outcome (or one path of outcomes) rather than another has no consequences for general economic efficiency — different economic agents favor different techniques, but no technique is best for all. Arthur also, however, used a variation of his modeling approach to argue that an inefficient outcome is possible. He considered a case where one technique offers higher payoffs than another for larger numbers of cumulative adoptions (technique B in Table 2), while for smaller numbers the other technique offers higher payoffs (technique A). Arthur argued that, given his model’s assumptions, each new adopter, arriving in turn, will prefer technique A and adopt only it, resulting later in lower total payoffs than would have resulted if each adopter had chosen technique B. Arthur’s assumptions were, first, that each agent’s payoff depends only on the number of previous adoptions and, second, that the competing techniques are “unsponsored,” that is, not owned and promoted by suppliers.

Table 2. Adoption Payoffs in Arthur’s Alternative Model

Number of previous adoptions     0    10    20    30    40    50    60    70    80    90
All agents:
  Technique A                   10    11    12    13    14    15    16    17    18    19
  Technique B                    4     7    10    13    16    19    22    25    28    31

Source: Arthur (1989), table 2.
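The inefficiency in this variation can be verified with a few lines of arithmetic. The sketch below encodes the Table 2 payoffs as linear functions of prior adoptions (A: 10 plus 0.1 per prior A-adoption; B: 4 plus 0.3 per prior B-adoption) and compares the myopic adoption path with an all-B path; the 100-adopter horizon is an arbitrary illustrative choice.

```python
def myopic_path(n_adopters):
    """Each adopter in turn compares the current Table 2 payoffs
    (A: 10 + 0.1 per prior A-adoption; B: 4 + 0.3 per prior B-adoption)
    and picks whichever is higher at that moment."""
    n_a = n_b = 0
    total_payoff = 0.0
    for _ in range(n_adopters):
        pay_a = 10 + 0.1 * n_a
        pay_b = 4 + 0.3 * n_b
        if pay_a >= pay_b:
            n_a += 1
            total_payoff += pay_a
        else:
            n_b += 1
            total_payoff += pay_b
    return n_a, n_b, total_payoff

def all_b_total(n_adopters):
    """Cumulative payoff if every adopter had chosen technique B instead."""
    return sum(4 + 0.3 * i for i in range(n_adopters))

n_a, n_b, myopic_total = myopic_path(100)
# Every adopter picks A: myopic_total is about 1495, while
# all_b_total(100) is about 1885 -- the path never taken pays more.
```

Each adopter sees A offering at least 10 while B offers only 4 plus returns from B-adoptions that never materialize, so B's eventual superiority (its cumulative payoff overtakes A's after roughly 60 adoptions) is never realized.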

Liebowitz and Margolis’s Critique of Arthur’s Model

Arthur’s discussion of efficiency provided the starting point for a theoretical critique of path dependence offered by Stan Liebowitz and Stephen E. Margolis (1995). Liebowitz and Margolis argued that two conditions, when present, prevent path-dependent processes from resulting in inefficient outcomes: first, foresight into the effects of choices and, second, opportunities to coordinate people’s choices, using direct communication, market interactions, and active product promotion. Using Arthur’s payoff table (Table 2), Liebowitz and Margolis argued that the purposeful, rational behavior of forward-looking, profit-seeking economic agents can override the effects of events in the past. In particular, if agents can foresee that some potential outcomes will be more efficient than others, then they have incentives to avoid the suboptimal ones. Agents who already own — or else find ways to appropriate — products or techniques that offer superior outcomes can often earn substantial profits by steering the process to favor those products or techniques. For the situation in Table 2, for example, the supplier of product or technique B could draw early adopters to that technique by temporarily setting a price below cost, making a profit by raising price above cost later.

Thus, in Liebowitz and Margolis’s analysis, the sort of inefficient or inferior outcomes that can arise in Arthur’s model are often not true equilibrium outcomes that market processes would lead to in the real world. Rather, they argued, purposeful behavior is likely to remedy any inferior outcome — except where the costs of a remedy, including transactions costs, are greater than the potential benefits. In that case, they argued, an apparently “inferior” outcome is actually the most efficient one available, once all costs are taken into account. “Remediable” inefficiency, they argued in contrast, is highly unlikely to persist.

Liebowitz and Margolis’s analysis gave rise to a substantial controversy over the meaning and implications of path dependence. In their view, the major claims of the economists who promote the concept of path dependence have amounted to assertions of remediable inefficiency, and they coined the term “third-degree” path dependence to refer to such cases. They contrasted this category both with “first-degree” path dependence, which has no implications for efficiency, and with “second-degree” path dependence, where transactions costs and/or the impossibility of foresight lead to outcomes that offer lower payoffs than some hypothetical — but unattainable — alternative. In Liebowitz and Margolis’s view, only “third-degree” path dependence offers scope for optimizing behavior, and thus only this type stands in conflict with what they call “the neoclassical model of relentlessly rational behavior leading to efficient, and therefore predictable, outcomes” (1995). Only this category of path dependence, they argued, would constitute market failure; they cast strong doubt on the likelihood of its occurrence and asserted that no empirical examples have been demonstrated.

Responses to Liebowitz and Margolis’s Critique

Proponents of the importance of path dependence have responded, in large part, by asserting that the interesting features of path dependence have little to do with the question of remediability. David (1997, 2000) argued that the concept of third-degree path dependence proves incoherent upon close examination and that Liebowitz and Margolis had misconstrued the issues at stake. The present author asserted that one can usefully incorporate several of Liebowitz and Margolis’s ideas on foresight and forward-looking behavior into the theory of path dependence while still affirming the claims made by proponents (Puffert 2000, 2002, 2003).

Imperfect Foresight and Inefficiency

One point that I have emphasized is that the cases of path dependence cited by proponents typically involve imperfect foresight, and sometimes other features, that make remediation impossible. Indeed, proponents of the importance of path dependence partly recognized this point prior to the work of Liebowitz and Margolis. Nobel Prize-winner Kenneth Arrow argued in his foreword to Arthur’s collected articles that Arthur’s modeling approach applies specifically to cases where foresight is imperfect, or “expectations are based on limited information” (Arthur 1994). Thus, economic agents cannot foresee future payoffs, and they cannot know how best to direct the process to the outcomes they would prefer. In terms of the payoffs in Table 2, technique A might become locked-in because adopters as well as suppliers initially think, mistakenly, that technique A will continue to offer the higher payoffs. Similarly, David (1987) had argued still earlier that path dependence is sometimes of interest precisely because lock-in might happen too quickly, before the payoffs of different paths are known. Lock-in, as David and Arthur use the term, applies to a stable equilibrium — i.e., to an outcome that, if inefficient, is not remediable. (Liebowitz and Margolis introduce a different definition of lock-in.)

Imperfect foresight is, of course, a common condition — and especially common for new, unproven products (or techniques) in untested markets. Part of the difference between path-dependent and “path-independent” processes is that foresight doesn’t matter for path-independent processes. No matter what the path of events, path-independent processes still end up at unique outcomes that are predictable on the basis of fundamental conditions. Generally, these predictable outcomes are those that are most efficient and that offer the highest payoffs. By contrast, path-dependent processes have multiple potential outcomes, and the outcome selected is not necessarily the one offering the highest payoffs. This contrast to the results of standard economic analysis is part of what makes path dependence interesting.

Winners, Losers and Path Dependence

Path dependence is also interesting, however, when the issue at stake is not the overall efficiency (i.e., Pareto efficiency) of the outcome, but rather the distribution of rewards between “winners” and “losers” — for example, between firms competing to establish their products or techniques as a de facto standard, resulting in profits or economic rents to the winner only. This is something that finds no place in Liebowitz and Margolis’s taxonomy of “degrees.” In keeping with Liebowitz and Margolis’s analysis, competing firms certainly exercise forward-looking behavior in efforts to determine the outcome, but imperfect information and imperfect control over circumstances still make the outcome path dependent, as some of the case studies below illustrate.

Lack of Agreement on What the Debate Is About

Finally, market failure per se has never been the primary concern of proponents of the importance of path dependence. Even when proponents have highlighted inefficiency as one possible consequence of path dependence, this inefficiency is often the result of imperfect foresight rather than of market failure. Market failure is, however, the primary concern of Liebowitz and Margolis. This difference in perspective is one reason that the arguments of proponents and opponents have often failed to meet head on, as we shall consider in several case studies.

These contrasting analytical arguments can best be assessed through empirical cases. The case of the QWERTY keyboard is considered first, because it has generated the most controversy and it illustrates opposing arguments. Three further cases are particularly useful for the lessons they offer. Britain’s “coal wagon problem” offers a strong example of inefficiency. The worldwide history of railway track gauge, now considered at greater length, illustrates the roles of foresight (or lack thereof) and transitory circumstances, as well as the role of purposeful behavior to remedy outcomes. The case of competition in videocassette recorders illustrates how path dependence is compatible with purposeful behavior, and it shows how proponents and critics of the importance of path dependence can offer different interpretations of the same events.

The Debate over QWERTY

The most influential empirical case has been that of the “QWERTY” standard typewriter and computer keyboard, named for the first letters appearing on the top row of keys. The concept of path dependence first gained widespread attention through David’s (1985, 1986) interpretation of the emergence and persistence of the QWERTY standard. The critique of path dependence began with the alternative interpretation offered by Liebowitz and Margolis (1990).

David (1986) noted that the QWERTY keyboard was designed, in part, to reduce mechanical jamming on an early typewriter design that quickly went out of use, while other early keyboards were designed more with the intention of facilitating fast, efficient typing. In David’s account, QWERTY’s triumph over its initial rivals resulted largely from the happenstance that typing schools and manuals offered instruction in eight-finger “touch” typing first for QWERTY. The availability of trained typists encouraged office managers to buy QWERTY machines, which in turn gave further encouragement to budding typists to learn QWERTY. These positive feedbacks increased QWERTY’s market share until it was established as the de facto standard keyboard.

Furthermore, according to David, similar positive feedbacks have kept typewriter users “locked in” to QWERTY, so that new, superior keyboards could gain no more than a small foothold in the market. In particular the Dvorak Simplified Keyboard, introduced during the 1930s, has been locked out of the market despite experiments showing its superior ergonomic efficiency. David concluded that our choice of a keyboard even today is governed by history, not by what would be ergonomically and economically optimal apart from history.

Liebowitz and Margolis (1990) directed much of their counterargument to the alleged superiority of the Dvorak keyboard. They showed, indeed, that claims David cited for the dramatic superiority of the Dvorak keyboard were based on dubious experiments. The experiments that Liebowitz and Margolis prefer support the conclusion that it could never be profitable to retrain typists from QWERTY to the Dvorak keyboard. Moreover, Liebowitz and Margolis cited ergonomic studies that conclude that the Dvorak keyboard offers at most only a two to six percent efficiency advantage over QWERTY.

Liebowitz and Margolis did not address David’s proposed mechanism for the original triumph of QWERTY. Instead, they argued against the claims of some popular accounts that QWERTY owes its success largely to the demonstration effect of winning a single early typing contest. Liebowitz and Margolis showed that other, well-known typing contests were won by non-QWERTY typists, and so they cast doubt on the impact of a single historical accident. This, however, did not address the argument that David made about that one typing contest. David’s argument was that the contest’s modest impact consisted largely in vindicating the effectiveness of eight-finger touch-typing, which was being taught at the time only for QWERTY.

Although Liebowitz and Margolis never addressed David’s claims about the role of third-party typing instruction, they did argue that suppliers had opportunities to offer training in conjunction with selling typewriters to new offices, so that non-QWERTY keyboards would not have been disadvantaged. They did not, however, present evidence that suppliers actually offered such training during the early years of touch-typing, the time when QWERTY became dominant. Whether the early history of QWERTY was path dependent thus seems to depend largely on the unaddressed question of how much typing instruction was offered directly by suppliers, as Liebowitz and Margolis suggest could have happened, and how much was offered by third parties using QWERTY, as David showed did happen.

Liebowitz and Margolis showed that early typewriter manufacturers competed vigorously in the features of their machines. They inferred, therefore, that the reason that typewriter suppliers increasingly supported and promoted QWERTY must have been that it offered a competitive advantage as the most effective system available. This reasoning is plausible, but it was not supported by direct evidence. The alternative, path-dependent explanation would be that QWERTY’s competitive advantage in winning new customers consisted largely in its lead in trained typists and market share. That is, positive feedbacks would have affected the decisions of customers and, thus, also suppliers. David presented some evidence for this, although, in light of the issues raised by Liebowitz and Margolis, this evidence might now appear less than conclusive.

Liebowitz and Margolis highlighted the following lines from David’s article: “… competition in the absence of perfect futures markets drove the industry prematurely into de facto standardization on the wrong system — and that is where decentralized decision-making subsequently has sufficed to hold it” (emphasis original in David’s article). In Liebowitz and Margolis’s view, the focus here on decentralized decision-making constitutes a claim for market failure and third-degree path dependence, and they treat this as the central claim of David’s article. In the view of the present author, this interpretation is mistaken. David’s claim here plays only a minor role in his argument — indeed it is less than one sentence. Moreover, it is not clear that David’s comment about decentralized decision-making amounts to anything more than a reference to the high transactions costs that would be entailed in organizing a coordinated movement to an alternative outcome — a point that Liebowitz and Margolis themselves have argued in other (non-QWERTY) contexts. (A coordinated change would be necessary because few typists would wish to learn a non-QWERTY system unless they could be sure of conveniently finding a compatible keyboard wherever they go.) David may have wished to suggest that centralized decision-making (by government?) would have greatly reduced these transactions costs, but David made no explicit claim that such a remedy would be feasible. If David had wished to make market failure or remediable inefficiency the central focus of his claims for path dependence, then he surely could and would have done so in a more explicit and forceful manner.

Part of what remains of the case of QWERTY is modest support for David’s central claim that history has mattered, leaving us with a standard keyboard that is less efficient than alternatives available today — not as inefficient as the claims David cited, but still somewhat so. Donald Norman, one of the world’s leading authorities on ergonomics, estimates on the basis of several recent studies that QWERTY is about 10 percent less efficient than the Dvorak keyboard and other alternatives (Norman, 1990, and recent personal correspondence).

For Liebowitz and Margolis, it was most important to show that the costs of switching to an alternative keyboard would outweigh any benefits, so that there is no market failure in remaining with the QWERTY standard. This claim appears to stand. David had made no explicit claim for market failure, but Liebowitz and Margolis — as well, indeed, as some supporters of David’s account — took that as the main issue at stake in David’s argument.

Britain’s “Silly Little Bobtailed” Coal Wagons

A strong example of inefficiency in path dependence is offered by the small coal wagons that persisted in British railway traffic until the mid-twentieth century. Already in 1915, economist Thorstein Veblen cited these “silly little bobtailed carriages” as an example of how industrial modernization may be inhibited by “the restraining dead hand of … past achievement,” that is, the historical legacy of interrelated physical infrastructure: “the terminal facilities, tracks, shunting facilities, and all the ways and means of handling freight on this oldest and most complete of railway systems” (Veblen, 1915, pp. 125-8). Veblen’s analysis was the starting point for the literature on technical and institutional interrelatedness that formed the background to David’s early views on path dependence.

In recent years Van Vleck (1997, 1999) has defended the efficiency of Britain’s small coal wagons, arguing that they offered “a crude just-in-time approach to inventory” for coal users while economizing on the substantial costs of road haulage that would have been necessary for small deliveries if railway coal wagons were larger. More recently, however, Scott (1999, 2001) presented evidence that few coal users benefited from small deliveries. Rather, he showed, the wagons’ small size, widely dispersed ownership and control, antiquated braking and lubrication systems, and generally poor physical condition made them quite inefficient indeed. Replacing these cars and associated infrastructure with modern, larger wagons owned and controlled by the railways would have offered savings in railway operating costs of about 56 percent and a social rate of return of about 24 percent. Nevertheless, the small wagons were not replaced until both railways and collieries were nationalized after World War II. The reason, according to Scott, lay partly in the regulatory system that allocated certain rights to collieries and other car owners at the expense of the railways, and partly in the massive coordination problem that arose because railways would not have realized much savings in costs until a large proportion of antiquated cars were replaced. Together, these factors lowered the railways’ realizable private rate of return below profitable levels. (Van Vleck’s smaller estimates for potential efficiency gains from scrapping the small wagons were largely the result of assuming that there would be no change in the regulatory system or in the ownership and control of wagons. Scott argued that such changes added greatly to the potential cost savings.)

Scott noted that the persistence of small wagons was path dependent, because both the technology embodied in the small wagons and the institutions that supported fragmented ownership long outlasted the earlier, transitory conditions to which they were a rational response. Ownership of wagons by the collieries had been advantageous to railways as well as collieries in the mid-nineteenth century, and government regulation had assigned rights in a way designed to protect the interests of wagon owners from opportunistic behavior by the railways. By the early twentieth century, these regulatory institutions imposed a heavy burden on the railways, because they required either conveyance even of antiquated wagons for set rates or else payment of high levels of compensation to the wagon owners. The requirement for compensation helped to raise the railways’ private costs of scrapping the small wagons above the social costs of doing so.

The case shows the relevance of Paul David’s approach to path dependence, with its discussion of technical (and institutional) interrelatedness and quasi-irreversible investment, above and beyond Brian Arthur’s more narrow focus on increasing returns.

The case also supports Liebowitz and Margolis’s insight that an inferior path-dependent outcome can only persist where transactions costs (and other costs) prevent remediation, but it undercuts those authors’ skepticism toward the possibility of market failure. The high transactions costs that would have been entailed in scrapping Britain’s small wagons indeed outweighed the potential gains, but these costs were high only due to the institutions of property rights that supported fragmented ownership. When these institutions were later changed, a remedy to Britain’s coal-wagon problem followed quickly. Thus, the failure to scrap the small wagons earlier can be ascribed to institutional and market failure.

The case thus appears to satisfy Liebowitz and Margolis’s criterion for “third-degree” path dependence. This is not completely clear, however. Whether Britain’s coal-wagon problem qualifies for that status depends on whether the benefits of solving the problem would have been worth the cost of implementing the necessary institutional changes, a question that Scott did not address. Liebowitz and Margolis argue that an inferior outcome cannot be considered a result of market failure, or even meaningfully inefficient, unless this criterion of remediability is satisfied.

In the present author’s view, Liebowitz and Margolis’s criterion has some usefulness in the context of considering government policy toward inferior outcomes, which is Liebowitz and Margolis’s chief concern, but the criterion is much less useful for a more general analysis of these outcomes. If Britain’s coal-wagon problem does not qualify for “third-degree” status, then this suggests that Liebowitz and Margolis’s dismissive approach toward cases that they relegate to “second-degree” status is misplaced. The case seems to show that path dependence can have substantial effects on the economy, that the outcomes of path-dependent processes can vary substantially from the predictions of standard economic models, that these outcomes can exhibit substantial inefficiency of a sort discussed by proponents of path dependence, and that all this can happen despite the exercise of foresight and forward-looking behavior.

Railway Track Gauges

The case of railway track gauge illustrates how “accidental” or “contingent” events and transitory circumstances can affect choice of technique and economic efficiency over a period now approaching two centuries (Puffert 2000, 2002). The gauge now used on over half the world’s railways, 4 feet 8.5 inches (4’8.5″, 1435 mm), comes from the primitive mining tramway where George Stephenson gained his early experience. Stephenson transferred this gauge to the Liverpool and Manchester Railway, opened in 1830, which served as the model of best practice for many of the earliest modern railways in Britain, continental Europe, and North America. Many railway engineers today view this gauge as narrower than optimal. Yet, although they would choose a broader gauge today if the choice were open, they do not view potential gains in operating efficiency as worth the costs of conversion.

A much greater source of inefficiency has been the emergence of diversity in gauge. Six gauges came into widespread use in North America by the 1870s, and Britain’s extensive Great Western Railway system maintained a variant gauge for over half a century until 1892. Even today, Australia and Argentina each have three different regional-standard gauges, while India, Chile, and several other countries each make extensive use of two gauges. Breaks of gauge also persist at the border of France and Spain and most external borders of the former Russian and Soviet empires. This diversity adds costs and impairs service in interregional and international traffic. Where diversity has been resolved, conversion costs have sometimes been substantial.

This diversity arose as a result of several contributing factors: limited foresight, the search for an improved railway technology, transitory circumstances, and contingent events or “historical accidents.” Many early railway builders sought simply to serve local or regional transportation needs, and they did not foresee the later importance of railways in interregional traffic. Beginning in the late 1830s, locomotive builders found their ability to construct more powerful, easily maintained engines constrained by the Stephenson gauge, while some civil engineers thought that a broader gauge would offer improved capacity, speed, and passenger comfort. This led to a wave of adoption of broad gauges for new regions in Europe, the Americas, South Asia, and Australia. Changes in locomotive design soon eliminated much of the advantage of broad gauges, and by the 1860s it became possible to take advantage of the ability of narrow gauges to make sharper curves, following the contours of rugged landscape and reducing the need for costly bridges, embankments, cuttings, and tunnels. This, together with the beliefs of some engineers and promoters that narrow gauges would offer savings in operating costs, led to a wave of introductions of narrow gauges to new regions.

At every point in time there was some variation in engineering opinion and practice, so that which gauge was introduced to each new region often depended on the contingent circumstances of who decided the gauge. To cite only the most fateful example, Stephenson’s rivals for the contract to build the Liverpool and Manchester Railway proposed to adopt the gauge of 5’6″ (1676 mm). If that team had been employed, or if Stephenson had gained his earlier experience on almost any other mining tramway, then the ensuing worldwide history of railway gauge would have been different — perhaps far different.

After the introduction of particular gauges to new regions, later railways nearly always adopted the gauge of established connecting lines, reinforcing early contingent choices with positive feedbacks. As different local common-gauge regions expanded, regions that happened to have the same gauge merged into one another, but breaks of gauge emerged between regions of differing gauge. The extent of diversity that emerged at the national and continental levels, and thus the relative efficiency of the outcome, thus depended on earlier contingent events.

Once these patterns of diversity had been established by a path-dependent process, they were partly rationalized by the sort of forward-looking, profit-seeking behavior proposed by Liebowitz and Margolis. In North America, for example, a continental standard emerged quickly after demand for interregional transport grew, and standardization was facilitated both by the formation of interregional railway systems and by cooperation among independent railways. Elsewhere as well, much of the most inefficient diversity was resolved relatively quickly. Nonetheless, a costly diversity has persisted in places where variant-gauge regions had grown large and costly to convert before the value of conversion became apparent. Spain’s variant gauge has become more costly in recent years as the country’s economy has been integrated into that of the European Union, but estimated costs of (U.S.) $5 billion have precluded conversion. India and Australia have only recently made substantial progress toward the resolution of their century-old diversity.

Wherever gauge diversity has been resolved, it is one of the earliest gauges that has emerged as the standard. In no significant part of the world has current practice in gauge broken free of its early history. The inefficiency that has resulted, relative to what other sequences of events might have produced, was not the result of market failure. Rather, it resulted primarily from the natural inability of railway builders to foresee how railway networks and traffic patterns would develop and how technology would evolve.

The case also illustrates the usefulness of Arthur’s (1989) modeling approach for cases of unsponsored techniques and limited foresight (Puffert 2000, 2002). These were essentially the conditions Arthur assumed in proposing his model.
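The logic of Arthur’s model can be made concrete with a short simulation. The sketch below is a minimal illustration, not Arthur’s exact specification: two types of agents arrive in random order, each with a natural preference for one of two techniques, while every prior adoption raises a technique’s payoff to all later agents (unbounded increasing returns). The parameter values, function name, and lock-in threshold are the present editor’s assumptions, chosen only to make the dynamic visible.

```python
import random

def arthur_competition(r=0.1, preference=1.0, max_steps=1_000_000, seed=0):
    """Sequential adoption of techniques A and B under increasing returns.

    R-type agents naturally prefer A and S-type agents prefer B (a payoff
    bonus of `preference`), but each prior adopter of a technique raises
    its payoff to every later agent by `r`.  Once one technique's adopter
    lead exceeds preference / r, both agent types choose it: lock-in.
    """
    rng = random.Random(seed)
    lockin_lead = preference / r          # lead at which holdouts switch
    n_a = n_b = 0
    for step in range(max_steps):
        agent_is_r = rng.random() < 0.5   # agent types arrive at random
        payoff_a = (preference if agent_is_r else 0.0) + r * n_a
        payoff_b = (0.0 if agent_is_r else preference) + r * n_b
        if payoff_a >= payoff_b:
            n_a += 1
        else:
            n_b += 1
        if abs(n_a - n_b) > lockin_lead:  # the lead is now self-reinforcing
            winner = "A" if n_a > n_b else "B"
            return {"winner": winner, "steps": step + 1,
                    "n_a": n_a, "n_b": n_b}
    return {"winner": None, "steps": max_steps, "n_a": n_a, "n_b": n_b}

# Different random orders of arrival (different seeds) can produce
# different winners: the outcome depends on the path of early adoptions.
outcome = arthur_competition(seed=1)
```

Before lock-in, each agent simply follows its own preference, so the difference in adoptions drifts like a symmetric random walk; which technique first builds a decisive lead is a matter of the contingent order of arrivals — the kind of “historical accident” Arthur emphasized.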

Videocassette Recording Systems

Markets for technical systems exhibiting network externalities (where users benefit from using the same system as other users) tend to give rise to de facto standards — one system used by all. Foreseeing this, suppliers sometimes join to offer a common system standard from the outset, precluding any possibility for path-dependent competition. Examples include first-generation compact discs (CDs and CD-ROMs) and second-generation DVDs.

In the case of consumer videocassette recorders (VCRs), however, Sony with its Betamax system and JVC with its VHS system were unable to agree on a common set of technical specifications. This gave rise to a celebrated battle between the systems lasting from the mid-1970s to the mid-1980s. Arthur (1990) used this competition as the basis for a thought experiment to illustrate path dependence. He explained the triumph of VHS as the result of positive feedbacks in the video film rental market, as video rental stores stocked more film titles for the system with the larger user base, while new adopters chose the system for which they could rent more videos. He also suggested tentatively that, if the common perception that Betamax offered a superior picture quality is true, then “the market’s choice” was not the best possible outcome.

In a closer look at the case, Cusumano et al. (1992) showed that Arthur’s suggested positive-feedback mechanism was real, and that this mechanism explains why Sony eventually withdrew Betamax from the market rather than continuing to offer it as an alternative system. However, they also showed that the video rental market emerged only at a late stage in the competition, after VHS already had a strong lead in market share. Thus, Arthur’s mechanism does not explain how the initial symmetry in competitors’ positions was broken.

Cusumano et al. argued, nonetheless, that the earlier competition already had a path-dependent market-share dynamic. They presented evidence that suppliers and distributors of VCRs increasingly chose to support VHS rather than Betamax because they saw other market participants doing so, leading them to believe that VHS would win the competition and emerge as a de facto standard. The authors did not make clear, however, why market participants believed that a single system would become so dominant. (In a private communication, coauthor Richard Rosenbloom said that this was largely because they foresaw the later emergence of a market for prerecorded videos.)

The authors argued that three early differences in promoters’ strategies gave VHS its initial lead. First, Sony proceeded without major co-sponsors for its Betamax system, while JVC shared VHS with several major competitors. Second, the VHS consortium quickly installed a large manufacturing capacity. Third, Sony opted for a more compact videocassette, while JVC chose instead a longer playing time for VHS. In the event, a longer playing time proved more important to many consumers and distributors, at least during early years of the competition when Sony cassettes could not accommodate a full (U.S.) football game.

This interpretation shows how purposeful, forward-looking behavior interacted with positive feedbacks in producing the final outcome. The different strategies, made under conditions of limited foresight, were contingent decisions that set competition among the firms on one path rather than another (Puffert 2003). Furthermore, the early inability of Sony cassettes to accommodate a football game was a transitory circumstance that may have affected outcomes long afterward.

Liebowitz and Margolis’s (1995) initial interpretation of the case responded only to Arthur’s brief discussion. They argued that the playing-time advantage for VHS was the crucial factor in the competition, so that VHS won because its features most closely matched consumer demand — and not due to path dependence. Although their discussion covers part of the same ground as that of Cusumano et al., Liebowitz and Margolis did not respond to the earlier article’s argument that the purposeful behavior of suppliers interacted with positive feedbacks. Rather, they treated this purposeful behavior as the antithesis of the mechanistic, non-purposeful evolution of market share that they see as the ultimate basis of path dependence.

Liebowitz and Margolis also presented substantial evidence that Betamax was not, in fact, a superior system for the consumer market. The primary concern of their argument was to refute a suggested case of path-dependent lock-in to an inferior technique, and in this they succeeded. It is arguable that they overstated their case, however, in asserting that what they refuted amounted to a claim for “third-degree” path dependence. Arthur had not argued that the selection of VHS, if inferior to Betamax, would have been remediable.

Recently, Liebowitz (2002) did respond to Cusumano et al. He argued, in part, that the larger VHS tape size offered a permanent rather than transitory advantage, as this size facilitated higher tape speeds and thus better picture quality for any given total playing time.

A Brief Discussion of Further Cases

Pest Control

Cowan and Gunby (1996) showed that there is path dependence in farmers’ choices between systems of chemical pest control and integrated pest management (IPM). IPM relies in part on predatory insects to devour harmful ones, and the drift of chemical pesticides from neighboring fields often makes the use of IPM impossible. Predatory insects also drift among fields, further raising farmers’ incentives to use the same techniques as neighbors. To be practical, IPM must be used on the whole set of farms that are in proximity to each other. Where this set is large, the transactions costs of persuading all farmers to forego chemical methods often prevent adoption. In addition to these localized positive feedbacks, local learning effects also make the choice between systems path dependent. The path-dependent local lock-in of each technique has sometimes been upset by such developments as invasions by new pests and the emergence of resistance to pesticides.

Nuclear Power Reactors

Cowan (1990) argued that transitory circumstances led to the establishment of the dominant “light-water” design for civilian nuclear power reactors. This design, adapted from power plants for nuclear submarines, was rushed into use during the Cold War because the political value of demonstrating peaceful uses for nuclear technology overrode the value of finding the most efficient technique. Thereafter, according to Cowan, learning effects arising from engineering experience with the light-water design continued to make it the rational choice for new reactors. He argued, however, that there are fundamental scientific and engineering reasons to believe that an equivalent degree of development might have made alternative designs superior.

Information Technology

Although Shapiro and Varian (1998) did not emphasize the term path dependence, they pointed to a broad range of research documenting positive feedbacks that affect competition in contemporary information technology. Like Morris and Ferguson (1993), they showed how competing firms recognize and seek to take advantage of these positive feedbacks. Strictly speaking, not all of these cases are path dependent, because in some cases firms have been able to control the direction and outcome of the allocation processes. In other cases, however, the allocation process has had its own path-dependent dynamic, affected both by the attempts of rival firms to promote their products and by factors that are unforeseen or out of their control.

Among the cases that Shapiro and Varian discuss are some involving Microsoft. In addition, some proponents of the importance of path dependence have argued that positive feedbacks favor Microsoft’s competitive position in ways that hinder competitors from developing and introducing innovative products (see, for example, Reback et al., 1995). Liebowitz and Margolis (2000), by contrast, offered evidence of cases where superior computer software products have had no trouble winning markets. Liebowitz and Margolis also argued that the lack of demonstrated empirical examples of “third-degree” path dependence creates a strong presumption against the existence of an inferior outcome that government antitrust measures could remedy.

Path Dependence at Larger Levels

Geography and Trade

The examples thus far all treat path dependence in the selection of alternative products or techniques. Krugman (1991, 1994) and Arthur (1994) have also pointed to a role for contingent events and positive feedbacks in economic geography, including in the establishment of Silicon Valley and other concentrations of economic activity. Some of these locations, they showed, are the result not of systematic advantages but rather of accidental origins reinforced by “agglomeration” economies that lead new firms to locate in the vicinity of similar established firms. Krugman (1994) also discussed how these same effects produce path dependence in patterns of international trade. Geographic patterns of economic activity, some of which arise as a result of contingent historical events, determine the patterns of comparative advantage that in turn determine patterns of trade.

Institutional Development

Path dependence also arises in the development of institutions — a term that economists use to refer to the “rules of the game” for an economy. Eichengreen (1996) showed, for example, that the emergence of international monetary systems, such as the classical gold standard of the late nineteenth century, was path dependent. This path dependence has been based on the benefits to different countries of adopting a common monetary system. Eichengreen noted that these benefits take the form of network externalities. Puffert (2003) has argued that path dependence in institutions is likely to be similar to path dependence in technology, as both are based on the value of adopting a common practice — some technique or rule — that becomes costly to change.

Thus path dependence can affect not only individual features of the economy but also larger patterns of economic activity and development. Indeed, some teachers of economic history interpret major regional and national patterns of industrialization and growth as partly the result of contingent events reinforced by positive feedbacks — that is, as path dependent. Some suggest, as well, that the institutions responsible for economic development in some parts of the world and those responsible for backwardness in others are, at least in part, path dependent. In the coming years we may expect these ideas to be included in a growing literature on path dependence.

Conclusion

Path dependence arises, ultimately, because there are increasing returns to the adoption of some technique or other practice and because there are costs in changing from an established practice to a different one. As a result, many current features of the economy are based on what appeared optimal or profit-maximizing at some point in the past, rather than on what might be preferred on the basis of current general conditions.

The theory of path dependence is not an alternative to neoclassical economics but rather a supplement to it. The theory of path dependence assumes, generally, that people optimize on the basis of their own interests and the information at their disposal, but it highlights ways that earlier choices put constraints on later ones, channeling the sequence of economic outcomes along one possible path rather than another. This theory offers reason to believe that some — or perhaps many — economic processes have multiple possible paths of outcomes, rather than a unique equilibrium (or unique path of equilibria). Thus the selection among outcomes may depend on nonsystematic or “contingent” choices or events. Empirical case studies offer examples of how such choices or events have led to the establishment, and “lock in,” of particular techniques, institutions, and other features of the economy that we observe today — although other outcomes would have been possible. Thus, the analysis of path dependence adds to what economists know on the basis of more established forms of neoclassical analysis.

It is not possible at this time to assess the overall importance of path dependence, either in determining individual features of the economy or in determining larger patterns of economic activity. Research has only partly sorted out the concrete conditions of technology, interactions among agents, foresight, and markets and other institutions that make allocation path dependent in some cases but not in others (Puffert 2003; see also David 1997, 1999, 2000 for recent refinements on theoretical conditions for path dependence).

Addendum: Technical Notes on Definitions

Path dependence, as economists use the term, corresponds closely to what mathematicians call non-ergodicity (David 2000). A non-ergodic stochastic process is one that, as it develops, undergoes a change in the limiting distribution of future states, that is, in the probabilities of different outcomes in the distant future. This is somewhat different from what mathematicians call path dependence. In mathematics, a stochastic process is called path dependent, as opposed to state dependent, if the probabilities of transition to alternative states depend not simply on the current state of the system but, additionally, on previous states.

Furthermore, the term path dependence is applied to economic processes in which small variations in early events can lead to large or discrete variations in later outcomes, but generally not to processes in which small variations in events lead only to small and continuous variations in outcomes. That is, the term is used for cases where positive feedbacks magnify the impact of early events, not for cases where negative feedbacks diminish this impact over time.

The term path dependence can also be used for cases in which the impact of early events persists without appreciably increasing or decreasing over time. The most important examples would be instances where transitory conditions have large, persistent impacts.
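The positive-feedback dynamics described in these notes are often illustrated with a Polya urn process, a stylized model of path-dependent adoption in the spirit of Arthur (1989). The sketch below is purely illustrative; the function names and parameters are assumptions of this example, not drawn from the article. Each draw of a ball adds another ball of the same color, so every draw raises the probability of drawing that color again, and early chance events steer the long-run share.

```python
import random

def polya_urn(steps, seed):
    """Polya urn: draw a ball at random, return it plus one more of the
    same color. Positive feedback means early draws shape the long-run
    share of each color ("technology A" vs. "technology B")."""
    rng = random.Random(seed)
    a, b = 1, 1  # start with one ball of each color
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)  # final share of color A

# Different early histories (seeds) settle on different long-run shares:
shares = [round(polya_urn(10_000, s), 2) for s in range(5)]
print(shares)
```

Runs with different seeds typically converge to different limiting shares, which is the sense in which the process is non-ergodic: the distribution of long-run outcomes depends on the particular path taken, not just on the rules of the process.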

References

Arthur, W. Brian. 1989. “Competing Technologies, Increasing Returns, and Lock-in by Historical Events.” Economic Journal 99: 116-31.

Arthur, W. Brian. 1990. “Positive Feedbacks in the Economy.” Scientific American 262 (February): 92-99.

Arthur, W. Brian. 1994. Increasing Returns and Path Dependence in the Economy. Ann Arbor: University of Michigan Press.

Cowan, Robin. 1990. “Nuclear Power Reactors: A Study in Technological Lock-in.” Journal of Economic History 50: 541-67.

Cowan, Robin, and Philip Gunby. 1996. “Sprayed to Death: Path Dependence, Lock-in and Pest Control Strategies.” Economic Journal 106: 521-42.

Cusumano, Michael A., Yiorgos Mylonadis, and Richard S. Rosenbloom. 1992. “Strategic Maneuvering and Mass-Market Dynamics: The Triumph of VHS over Beta.” Business History Review 66: 51-94.

David, Paul A. 1975. Technical Choice, Innovation and Economic Growth: Essays on American and British Experience in the Nineteenth Century. Cambridge: Cambridge University Press.

David, Paul A. 1985. “Clio and the Economics of QWERTY.” American Economic Review (Papers and Proceedings) 75: 332-37.

David, Paul A. 1986. “Understanding the Economics of QWERTY: The Necessity of History.” In W.N. Parker, ed., Economic History and the Modern Economist. Oxford: Oxford University Press.

David, Paul A. 1987. “Some New Standards for the Economics of Standardization in the Information Age.” In P. Dasgupta and P. Stoneman, eds., Economic Policy and Technological Performance. Cambridge, England: Cambridge University Press.

David, Paul A. 1997. “Path Dependence and the Quest for Historical Economics: One More Chorus of the Ballad of QWERTY.” University of Oxford Discussion Papers in Economic and Social History, Number 20. http://www.nuff.ox.ac.uk/economics/history/paper20/david3.pdf

David, Paul A. 1999. “At Last, a Remedy for Chronic QWERTY-Skepticism!” Working paper, All Souls College, Oxford University. http://www.eh.net/Clio/Publications/remedy.shtml

David, Paul A. 2000. “Path Dependence, Its Critics and the Quest for ‘Historical Economics’.” Working paper, All Souls College, Oxford University.
http://www-econ.stanford.edu/faculty/workp/swp00011.html

Eichengreen, Barry. 1996. Globalizing Capital: A History of the International Monetary System. Princeton: Princeton University Press.

Frankel, M. 1955. “Obsolescence and Technological Change in a Maturing Economy.” American Economic Review 45: 296-319.

Katz, Michael L., and Carl Shapiro. 1985. “Network Externalities, Competition, and Compatibility.” American Economic Review 75: 424-40.

Katz, Michael L., and Carl Shapiro. 1994. “Systems Competition and Network Effects.” Journal of Economic Perspectives 8: 93-115.

Kindleberger, Charles P. 1964. Economic Growth in France and Britain, 1851-1950. Cambridge, MA: Harvard University Press.

Krugman, Paul. 1991. “Increasing Returns and Economic Geography.” Journal of Political Economy 99: 483-99.

Krugman, Paul. 1994. Peddling Prosperity. New York: W.W. Norton.

Liebowitz, S.J. 2002. Rethinking the Network Economy. New York: AMACOM.

Liebowitz, S.J., and Stephen E. Margolis. 1990. “The Fable of the Keys.” Journal of Law and Economics 33: 1-25.

Liebowitz, S.J., and Stephen E. Margolis. 1995. “Path Dependence, Lock-In, and History.” Journal of Law, Economics, and Organization 11: 204-26. http://wwwpub.utdallas.edu/~liebowit/paths.html

Liebowitz, S.J., and Stephen E. Margolis. 2000. Winners, Losers, and Microsoft. Oakland: The Independent Institute.

Morris, Charles R., and Charles H. Ferguson. 1993. “How Architecture Wins Technology Wars.” Harvard Business Review (March-April): 86-96.

Norman, Donald A. 1990. The Design of Everyday Things. New York: Doubleday. (Originally published in 1988 as The Psychology of Everyday Things.)

Puffert, Douglas J. 2000. “The Standardization of Track Gauge on North American Railways, 1830-1890.” Journal of Economic History 60: 933-60.

Puffert, Douglas J. 2002. “Path Dependence in Spatial Networks: The Standardization of Railway Track Gauge.” Explorations in Economic History 39: 282-314.

Puffert, Douglas J. 2003 (forthcoming). “Path Dependence, Network Form, and Technological Change.” In W. Sundstrom, T. Guinnane, and W. Whatley, eds., History Matters: Essays on Economic Growth, Technology, and Demographic Change. Stanford: Stanford University Press. http://www.vwl.uni-muenchen.de/ls_komlos/nettech1.pdf

Reback, Gary, Susan Creighton, David Killam, and Neil Nathanson. 1995. “Technological, Economic and Legal Perspectives Regarding Microsoft’s Business Strategy in Light of the Proposed Acquisition of Intuit, Inc.” (“Microsoft White Paper”). White paper, law firm of Wilson, Sonsini, Goodrich & Rosati. http://www.antitrust.org/cases/microsoft/whitep.html

Scott, Peter. 1999. “The Efficiency of Britain’s ‘Silly Little Bobtailed’ Coal Wagons: A Comment on Van Vleck.” Journal of Economic History 59: 1072-80.

Scott, Peter. 2001. “Path Dependence and Britain’s ‘Coal Wagon Problem’.” Explorations in Economic History 38: 366-85.

Shapiro, Carl and Hal R. Varian. 1998. Information Rules. Cambridge, MA: Harvard Business School Press.

Van Vleck, Va Nee L. 1997. “Delivering Coal by Road and Rail in Britain: The Efficiency of the ‘Silly Little Bobtailed’ Coal Wagons.” Journal of Economic History 57: 139-160.

Van Vleck, Va Nee L. 1999. “In Defense (Again) of ‘Silly Little Bobtailed’ Coal Wagons: Reply to Peter Scott.” Journal of Economic History 59: 1081-84.

Veblen, Thorstein. 1915. Imperial Germany and the Industrial Revolution. London: Macmillan.

Citation: Puffert, Douglas. “Path Dependence”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/path-dependence/

An Economic History of Patent Institutions

B. Zorina Khan, Bowdoin College

Introduction

Such scholars as Max Weber and Douglass North have suggested that intellectual property systems had an important impact on the course of economic development. However, long-standing questions remain current today, ranging from whether patents and copyrights constitute optimal policies toward intellectual inventions, and what their philosophical rationale might be, to growing concerns of international political economy. Throughout their history, patent and copyright regimes have confronted and accommodated technological innovations that were no less significant and contentious for their time than those of the twenty-first century. An economist from the nineteenth century would have been equally familiar with debates about whether uniformity in intellectual property rights across countries harmed or benefited global welfare and whether piracy might be to the advantage of developing countries. The nineteenth and early twentieth centuries in particular witnessed considerable variation in the intellectual property policies that individual countries implemented, and this variation allows economic historians to determine the consequences of different rules and standards.

This article outlines crucial developments in the patent policies of Europe, the United States, and follower countries. The final section discusses the harmonization of international patent laws that occurred after the middle of the nineteenth century.

Europe

The British Patent System

The grant of exclusive property rights vested in patents developed from medieval guild practices in Europe. Britain in particular is noted for the establishment of a patent system which has been in continuous operation for a longer period than any other in the world. English monarchs frequently used patents to reward favorites with privileges, such as monopolies over trade that increased the retail prices of commodities. It was not until the seventeenth century that patents were associated entirely with awards to inventors, when Section 6 of the Statute of Monopolies (21 Jac. I. C. 3, 1623, implemented in 1624) repealed the practice of royal monopoly grants to all except patentees of inventions. The Statute of Monopolies allowed patent rights of fourteen years for “the sole making or working of any manner of new manufacture within this realm to the first and true inventor…” Importers of foreign discoveries were allowed to obtain domestic patent protection in their own right.

The British patent system established significant barriers in the form of prohibitively high costs that limited access to property rights in invention to a privileged few. Patent fees for England alone amounted to £100-£120 ($585), or approximately four times per capita income in 1860. The fee for a patent that also covered Scotland and Ireland could amount to as much as £350 ($1,680). Adding a co-inventor was likely to increase the costs by another £24. Patents could be extended only by a private Act of Parliament, which required political influence, and extensions could cost as much as £700. These constraints favored the elite class of those with wealth, political connections or exceptional technical qualifications, and consciously created disincentives for inventors from humble backgrounds. Patent fees provided an important source of revenues for the Crown and its employees, and created a class of administrators who had strong incentives to block proposed reforms.

In addition to the monetary costs, complicated administrative procedures that inventors had to follow implied that transactions costs were also high. Patent applications for England alone had to pass through seven offices, from the Home Secretary to the Lord Chancellor, and twice required the signature of the Sovereign. If the patent were extended to Scotland and Ireland it was necessary to negotiate another five offices in each country. The cumbersome process of patent applications (variously described as “mediaeval” and “fantastical”) afforded ample material for satire, but obviously imposed severe constraints on the ordinary inventor who wished to obtain protection for his discovery. These features testify to the much higher monetary and transactions costs, in both absolute and relative terms, of obtaining property rights to inventions in England in comparison to the United States. Such costs essentially restricted the use of the patent system to inventions of high value and to applicants who already possessed or could raise sufficient capital to apply for the patent. The complicated system also inhibited the diffusion of information and made it difficult, if not impossible, for inventors outside of London to readily conduct patent searches. Patent specifications were open to public inspection on payment of a fee, but until 1852 they were not officially printed, published or indexed. Since the patent could be filed in any of three offices in Chancery, searches of the prior art involved much time and inconvenience. Potential patentees were well advised to obtain the help of a patent agent to aid in negotiating the numerous steps and offices that were required for pursuit of the application in London.

In the second half of the eighteenth century, nation-wide lobbies of manufacturers and patentees expressed dissatisfaction with the operation of the British patent system. However, it was not until after the Crystal Palace Exhibition in 1851 that their concerns were finally addressed, in an effort to meet the burgeoning competition from the United States. In 1852 the efforts of numerous societies and of individual engineers, inventors and manufacturers over many decades were finally rewarded. Parliament approved the Patent Law Amendment Act, which authorized the first major adjustment of the system in two centuries. The new patent statutes incorporated features that drew on testimonials to the superior functioning of the American patent regime. Significant changes in the direction of the American system included lower fees and costs, and the application procedures were rationalized into a single Office of the Commissioners of Patents for Inventions, or “Great Seal Patent Office.”

The 1852 patent reform bills included calls for a U.S.-style examination system but this was amended in the House of Commons and the measure was not included in the final version. Opponents were reluctant to vest examiners with the necessary discretionary power, and pragmatic observers pointed to the shortage of a cadre of officials with the required expertise. The law established a renewal system that required the payment of fees in installments if the patentee wished to maintain the patent for the full term. Patentees initially paid £25 and later installments of £50 (after three years) and £100 (after seven years) to maintain the patent for a full term of fourteen years. Despite the relatively low number of patents granted in England, between 1852 and 1880 the patent office still made a profit of over £2 million. Provision was made for the printing and publication of the patent records. The 1852 reforms undoubtedly instituted improvements over the former opaque procedures, and the lower fees had an immediate impact. Nevertheless, the system retained many of the former features that had implied that patents were in effect viewed as privileges rather than merited rights, and only temporarily abated expressions of dissatisfaction.

One source of dissatisfaction that endured until the end of the nineteenth century was the state of the common law regarding patents. At least partially in reaction to a history of abuse of patent privileges, patents were widely viewed as monopolies that restricted community rights, and thus to be carefully monitored and narrowly construed. Second, British patents were granted “by the grace of the Crown” and therefore were subject to any restrictions that the government cared to impose. According to the statutes, as a matter of national expediency, patents were to be granted if “they be not contrary to the law, nor mischievous to the State, by raising prices of commodities at home, or to the hurt of trade, or generally inconvenient.” The Crown possessed the ability to revoke any patents that were deemed inconvenient or contrary to public policy. After 1855, the government could also appeal to a need for official secrecy to prohibit the publication of patent specifications in order to protect national security and welfare. Moreover, the state could commandeer a patentee’s invention without compensation or consent, although in some cases the patentee was paid a royalty.

Policies towards patent assignments and trade in intellectual property rights also constrained the market for inventions. Ever vigilant to protect an unsuspecting public from fraudulent financial schemes on the scale of the South Sea Bubble, lawmakers limited ownership of patent rights to five investors (later extended to twelve). Nevertheless, the law did not offer any relief to the purchaser of an invalid or worthless patent, so potential purchasers were well advised to engage in extensive searches before entering into contracts. When coupled with the lack of assurance inherent in a registration system, the purchase of a patent right involved a substantive amount of risk and high transactions costs — all indicative of a speculative instrument. It is therefore not surprising that the market for assignments and licenses seems to have been quite limited, and even in the year after the 1852 reforms only 273 assignments were recorded.

In 1883 new legislation introduced procedures that were somewhat simpler, with fewer steps. The fees fell to £4 for the initial term of four years, and the remaining £150 could be paid in annual increments. For the first time, applications could be forwarded to the Patent Office through the post office. This statute introduced opposition proceedings, which enabled interested parties to contest the proposed patent within two months of the filing of the patent specifications. Compulsory licenses were introduced in 1883 (and strengthened in 1919 as “licenses of right”) for fear that foreign inventors might injure British industry by refusing to grant other manufacturers the right to use their patent. The 1883 act provided for the employment of “examiners” but their activity was limited to ensuring that the material was patentable and properly described. Indeed, it was not until 1902 that the British system included an examination for novelty, and even then the process was not regarded as being as stringent as in other countries. Many new provisions were designed to thwart foreign competition. Until 1907 patentees who manufactured abroad were required to also make the patented product in Britain. Between 1919 and 1949 chemical products were excluded from patent protection to counter the threat posed by the superior German chemical industry. Licenses of right enabled British manufacturers to compel foreign patentees to permit the use of their patents on pharmaceuticals and food products.

In sum, changes in the British patent system were initially unforthcoming despite numerous calls for change. Ultimately, the realization that England’s early industrial and technological supremacy was threatened by the United States and other nations in Europe led to a slow process of revisions that lasted well into the twentieth century. One commentator summed up the series of developments by declaring that the British patent system at the time of writing (1967) remained essentially “a modified version of a pre-industrial economic institution.”

The French Patent System

Early French policies towards inventions and innovations in the eighteenth century were based on an extensive but somewhat arbitrary array of rewards and incentives. During this period inventors or introducers of inventions could benefit from titles, pensions that sometimes extended to spouses and offspring, loans (some interest-free), lump-sum grants, bounties or subsidies for production, exemptions from taxes, or monopoly grants in the form of exclusive privileges. This complex network of state policies towards inventors and their inventions was revised but not revoked after the outbreak of the French Revolution.

The modern French patent system was established according to the laws of 1791 (amended in 1800) and 1844. Patentees filed through a simple registration system without any need to specify what was new about their claim, and could persist in obtaining the grant even if warned that the patent was likely to be legally invalid. On each patent document the following caveat was printed: “The government, in granting a patent without prior examination, does not in any manner guarantee either the priority, merit or success of an invention.” The inventor decided whether to obtain a patent for a period of five, ten or fifteen years, and the term could only be extended through legislative action. Protection extended to all methods and manufactured articles, but excluded theoretical or scientific discoveries without practical application, financial methods, medicines, and items that could be covered by copyright.

The 1791 statute stipulated patent fees that were costly, ranging from 300 to 1500 livres, based on the declared term of the patent. The 1844 statute maintained this policy since fees were set at 500 francs ($100) for a five-year patent, 1000 francs for a ten-year patent, and 1500 francs for a fifteen-year patent, payable in annual installments. In an obvious attempt to limit international diffusion of French discoveries, until 1844 patents were voided if the inventor attempted to obtain a patent overseas on the same invention. On the other hand, the first introducer of an invention covered by a foreign patent would enjoy the same “natural rights” as the patentee of an original invention or improvement. Patentees had to put the invention into practice within two years from the initial grant, or face a tribunal which had the power to repeal the patent, unless the patentee could point to unforeseen events which had prevented his complying with the provisions of the law. The rights of patentees were also restricted if the invention related to items that were controlled by the French government, such as printing presses and firearms.

In return for the limited monopoly right, the patentee was expected to describe the invention in such terms that a workman skilled in the arts could replicate the invention and this information was expected to be made public. However, no provision was made for the publication or diffusion of these descriptions. At least until the law of April 7, 1902, specifications were only available in manuscript form in the office in which they had originally been lodged, and printed information was limited to brief titles in patent indexes. The attempt to obtain information on the prior art was also inhibited by restrictions placed on access: viewers had to state their motives; foreigners had to be assisted by French attorneys; and no extract from the manuscript could be copied until the patent had expired.

The state remained involved in the discretionary promotion of invention and innovation through policies beyond the granting of patents. In the first place, the patent statutes did not limit their offer of potential appropriation of returns only to property rights vested in patents. The inventor of a discovery of proven utility could choose between a patent or making a gift of the invention to the nation in exchange for an award from funds that were set aside for the encouragement of industry. Second, institutions such as the Société d’encouragement pour l’industrie nationale awarded a number of medals each year to stimulate new discoveries in areas they considered to be worth pursuing, and also to reward deserving inventors and manufacturers. Third, the award of assistance and pensions to inventors and their families continued well into the nineteenth century. Fourth, at times the Société purchased patent rights and released the invention into the public domain.

The basic principles of the modern French patent system were evident in the early French statutes and were retained in later revisions. Since France during the ancien régime was likely the first country to introduce systematic examinations of applications for privileges, it is somewhat ironic that commentators point to the retention of registration without prior examination as the defining feature of the “French system” until 1978. In 1910 fees remained high, although somewhat lower in real terms, at one hundred francs per year. Working requirements were still in place, and patentees were not allowed to satisfy the requirement by importing the article even if the patentee had manufactured it in another European country. However, the requirement was waived if the patentee could persuade the tribunal that the patent was not worked because of unavoidable circumstances.

Similar problems were evident in the market for patent rights. Contracts for patent assignments were filed in the office of the Prefect for the district, but since there was no central source of information it was difficult to trace the records for specific inventions. The annual fees for the entire term of the patent had to be paid in advance if the patent was assigned to a second party. Like patents themselves, assignments and licenses were issued with a caveat emptor clause. This was partially due to the nature of patent property under a registration system, and partially to the uncertainties of legal jurisprudence in this area. For both buyer and seller, the uncertainties associated with the exchange likely reduced the net expected value of trade.

The Spanish Patent System

France’s patent laws were adopted in its colonies, and they also diffused to other countries through their influence on Spain’s system following the Spanish Decree of 1811. The Spanish experience during the nineteenth century is instructive since this country experienced lower rates and levels of economic development than the early industrializers. Like its European neighbors, early Spanish rules and institutions were vested in privileges which had lasting effects that could be detected even in the later period. The per capita rate of patenting in Spain was lower than in other major European countries, and foreigners filed the majority of patented inventions. Between 1759 and 1878, roughly one half of all grants were to citizens of other countries, notably France and (to a lesser extent) Britain. Thus, the transfer of foreign technology was a major concern in the political economy of Spain.

This dependence on foreign technologies was reflected in the structure of the Spanish patent system, which permitted patents of introduction as well as patents for invention. Patents of introduction were granted to entrepreneurs who wished to produce foreign technologies that were new to Spain, with no requirement of claims to being the true inventor. Thus, the sole objective of these instruments was to enhance innovation and production in Spain. Since the owners of introduction patents could not prevent third parties from importing similar machines from abroad, they also had an incentive to maintain reasonable pricing structures. Introduction patents had a term of only five years, with a cost of 3000 reales, whereas the fees of patents for invention varied from 1000 reales for five years, 3000 reales for ten years, and 6000 reales for a term of fifteen years. Patentees were required to work the patent within one year, and about a quarter of patents granted between 1826 and 1878 were actually implemented. Since patents of introduction had a brief term, they encouraged the production of items with high expected profits and a quick payback period, after which monopoly rights expired, and the country could benefit from its diffusion.

The German Patent System

The German patent system was influenced by developments in the United States, and itself influenced legislation in Argentina, Austria, Brazil, Denmark, Finland, Holland, Norway, Poland, Russia and Sweden. The German Empire was founded in 1871, and in the first six years each state adopted its own policies. Alsace-Lorraine favored a French-style system, whereas others such as Hamburg and Bremen did not offer patent protection. However, after strong lobbying by supporters of both sides of the debate regarding the merits of patent regimes, Germany adopted a unified national Patent Act in 1877.

The 1877 statute created a centralized administration for the grant of a federal patent for original inventions. Industrial entrepreneurs succeeded in their objective of creating a “first to file” system, so patents were granted to the first applicant rather than to the “first and true inventor,” but in 1936 the National Socialists introduced a first to invent system. Applications were examined by examiners in the Patent Office who were expert in their field. During the eight weeks before the grant, patent applications were open to the public and an opposition could be filed denying the validity of the patent. German patent fees were deliberately high to eliminate protection for trivial inventions, with a renewal system that required payment of 30 marks for the first year, 50 marks for the second year, 100 marks for the third, and 50 marks annually after the third year. In 1923 the patent term was extended from fifteen years to eighteen years.

German patent policies encouraged diffusion, innovation and growth in specific industries with a view to fostering economic development. Patents could not be obtained for food products, pharmaceuticals or chemical products, although the process through which such items were produced could be protected. It has been argued that the lack of restrictions on the use of innovations and the incentives to patent around existing processes spurred productivity and diffusion in these industries. The authorities further ensured the diffusion of patent information by publishing claims and specification before they were granted. The German patent system also facilitated the use of inventions by firms, with the early application of a “work for hire” doctrine that allowed enterprises access to the rights and benefits of inventions of employees.

Although the German system resembled the American patent system in many respects, it was in other ways more stringent, resulting in patent grants that were fewer in number, but likely higher in average value. The patent examination process required that the patent should be new, nonobvious, and also capable of producing greater efficiency. As in the United States, once a patent was granted, the courts adopted an extremely liberal attitude in interpreting and enforcing existing patent rights. Penalties for willful infringement included not only fines, but also the possibility of imprisonment. The grant of a patent could be revoked after the first three years if the patent was not worked, if the owner refused to grant licenses for the use of an invention that was deemed in the public interest, or if the invention was primarily being exploited outside of Germany. However, in most cases, a compulsory license was regarded as adequate.

After 1891 a parallel and weaker version of patent protection could be obtained through a gebrauchsmuster or utility patent (sometimes called a petty patent), which was granted through a registration system. Patent protection was available for inventions that could be represented by drawings or models with only a slight degree of novelty, and for a limited term of three years (renewable once for a total life of six years). About twice as many utility patents as examined patents were granted early in the 1930s. Patent protection based on co-existing systems of registration and examination appears to have served distinct but complementary purposes. Remedies for infringement of utility patents also included fines and imprisonment.

Other European Patent Systems

Very few developed countries would now seriously consider eliminating statutory protection for inventions, but in the second half of the nineteenth century the “patent controversy” in Europe pitted advocates of patent rights against an effective abolitionist movement. For a short period, the abolitionists were strong enough to obtain support for dismantling patent systems in a number of European countries. In 1863 the Congress of German Economists declared “patents of invention are injurious to common welfare;” and the movement achieved its greatest victory in Holland, which repealed its patent legislation in 1869. The Swiss cantons did not adopt patent protection until 1888, with an extension in the scope of coverage in 1907. The abolitionists based their arguments on the benefits of free trade and competition, and viewed patents as part of an anticompetitive and protectionist strategy analogous to tariffs on imports. Instead of state-sponsored monopoly awards, they argued, inventors could be rewarded by alternative policies, such as stipends from the government, payments from private industry or associations formed for that purpose, or simply through the lead time that the first inventor acquired over competitors by virtue of his prior knowledge.

According to one authority, the Netherlands eventually reinstated its patent system in 1912 and Switzerland introduced patent laws in 1888 largely because of a keen sense of morality, national pride and international pressure to do so. The appeal to “morality” as an explanatory factor is incapable of explaining the timing and nature of changes in strategies. Nineteenth-century institutions were not exogenous, and their introduction or revision generally reflected the outcome of a self-interested balancing of costs and benefits. The Netherlands and Switzerland were initially able to benefit from their ability to free-ride on the investments that other countries had made in technological advances. As for the cost of lower incentives for discoveries by domestic inventors, the Netherlands was never vaunted as a leader in technological innovation, and this is reflected in its low per capita patenting rates both before and after the period without patent laws. It recorded a total of only 4,561 patents in the entire period from 1800 to 1869 and, even after adjusting for population, the Dutch patenting rate in 1869 was a mere 13.4 percent of the U.S. patenting rate. Moreover, between 1851 and 1865, 88.6 percent of the patents in the Netherlands had been granted to foreigners. After the patent laws were reintroduced in 1912, the major beneficiaries were again foreign inventors, who obtained 79.3 percent of the patents issued in the Netherlands. Thus, the Netherlands had little reason to adopt patent protection, except for external political pressures and the possibility that some types of foreign investment might be deterred.

The case was somewhat different for Switzerland, which was noted for being innovative, but in a narrow range of pursuits. Since the scale of output and the size of markets were quite limited, much of Swiss industry generated few incentives for invention. A number of the industries in which the Swiss excelled, such as hand-made watches, chocolates and food products, were less susceptible to the kinds of invention that warranted patent protection. For instance, despite the much larger consumer market in the United States, during the entire nineteenth century fewer than 300 U.S. patents related to chocolate composition or production. Improvements in pursuits such as watch-making could be readily protected by trade secrecy as long as the industry remained artisanal. However, with increased mechanization and worker mobility, secrecy would ultimately prove to be ineffective, and innovators would be unable to appropriate returns without more formal means of exclusion.

According to contemporary observers, the Swiss resolved to introduce patent legislation not because of a sudden newfound sense of morality, but because they feared that American manufacturers were surpassing them as a result of patented innovations in the mass production of products such as boots, shoes and watches. Indeed, before 1890, American inventors obtained more than 2,068 patents on watches, and the U.S. watch-making industry benefited from mechanization and strong economies of scale that led to rapidly falling prices of output, making it more competitive internationally. The implications are that the rates of industrial and technical progress in the United States were more rapid, and technological change was rendering artisanal methods obsolete in products with mass markets. Thus, the Swiss endogenously adopted patent laws because of falling competitiveness in their key industrial sectors.

What was the impact of the introduction of patent protection in Switzerland? Foreign inventors could obtain patents in the United States regardless of their domestic legislation, so we can approach this question tangentially by examining the patterns of patenting in the United States by Swiss residents before and after the 1888 reforms. Between 1836 and 1888, Swiss residents obtained a grand total of 585 patents in the United States. Fully a third of these patents were for watches and music boxes, and only six were for textiles or dyeing, industries in which Switzerland was regarded as competitive early on. Swiss patentees were oriented more to the international market than to the small and unprotected domestic market, where they could not hope to gain as much from their inventions. For instance, in 1872 Jean-Jacques Mullerpack of Basel collaborated with Leon Jarosson of Lille, France, to invent an improvement in dyeing black with aniline colors, which they assigned to William Morgan Brown of London, England. Another Basel inventor, Alfred Kern, assigned his 1883 patent for violet aniline dyes to the Badische Anilin and Soda Fabrik of Mannheim, Germany.

After the patent reforms, the rate of Swiss patenting in the United States immediately increased. Swiss patentees obtained an annual average of 32.8 patents in the United States in the decade before the patent law was enacted in Switzerland. After the Swiss allowed patenting, this figure increased to an average of 111 each year in the following six years, and in the period from 1895 to 1900 a total of 821 Swiss patents were filed in the United States. The decadal rate of patenting per million residents increased from 111.8 for the ten years up to the reforms, to 451 per million residents in the 1890s, 513 in the 1900s, 458 in the 1910s and 684 in the 1920s. U.S. statutes required worldwide novelty, and patents could not be granted for discoveries that had been in prior use, so the increase was not due to a backlog of trade secrets that were now patented.
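The figures above are straightforward rate computations: patents granted over a period, divided by population and scaled to a per-million basis. As a minimal sketch, the arithmetic can be expressed as follows; the patent count and population figure used in the example are illustrative assumptions for demonstration, not data from this article.

```python
# Sketch of the rate arithmetic used in the text: patents per
# million residents over a period. The counts and the population
# figure below are illustrative assumptions, not source data.

def rate_per_million(patent_count: int, population: int) -> float:
    """Return patents per million residents."""
    return patent_count / population * 1_000_000

# Hypothetical decade: 1,500 patents in a country of 3 million people.
print(rate_per_million(1_500, 3_000_000))  # 500.0
```

Comparisons across countries of different sizes, such as the decadal Swiss rates quoted above, only make sense on this kind of population-adjusted basis.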

Moreover, the introduction of Swiss patent laws also affected the direction of the inventions that Swiss residents patented in the United States. After the passage of the law, such patents covered a much broader range of inventions, including gas generators, textile machines, explosives, turbines, paints and dyes, and drawing instruments and lamps. The relative importance of watches and music boxes immediately fell from about a third before the reforms to 6.2 percent and 2.1 percent respectively in the 1890s, and even further to 3.8 percent and 0.3 percent between 1900 and 1909. Another indication that international patenting was connected to domestic Swiss invention can be discerned from the fraction of Swiss patents (filed in the U.S.) that related to process innovations. Before 1888, 21 percent of the patent specifications mentioned a process. Between 1888 and 1907, the Swiss statutes required that patents include mechanical models, which precluded the patenting of pure processes. The fraction of specifications that mentioned a process fell during the period between 1888 and 1907, but returned to 22 percent when the restriction was modified in 1907.

In short, although the Swiss experience is often cited as proof of the redundancy of patent protection, the limitations of this special case should be taken into account. The domestic market was quite small and offered minimal opportunity or inducements for inventors to take advantage of economies of scale or cost-reducing innovations. Manufacturing tended to cluster in a few industries where innovation was largely irrelevant, such as premium chocolates, or in artisanal production that was susceptible to trade secrecy, such as watches and music boxes. In other areas, notably chemicals, dyes and pharmaceuticals, Swiss industries were export-oriented, but even today their output tends to be quite specialized and high-valued rather than mass-produced. Export-oriented inventors were likely to have been more concerned about patent protection in the important overseas markets, rather than in the home market. Thus, between 1888 and 1907, although Swiss laws excluded patents for chemicals, pharmaceuticals and dyes, 20.7 percent of the Swiss patents filed in the United States were for just these types of inventions. The scanty evidence on Switzerland suggests that the introduction of patent rights was accompanied by changes in the rate and direction of inventive activity. In any event, both the Netherlands and Switzerland featured unique circumstances that seem to hold few lessons for developing countries today.

The Patent System in the United States

The United States stands out as having established one of the most successful patent systems in the world. Over six million patents have been issued since 1790, and American industrial supremacy has frequently been credited to its favorable treatment of inventors and the inducements held out for inventive activity. The first Article of the U.S. Constitution included a clause to “promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” Congress complied by passing a patent statute in April 1790. In 1836 the United States created the first modern patent institution in the world, a system whose features differed in significant respects from those of other major countries. The historical record indicates that the legislature’s creation of a uniquely American system was a deliberate and conscious process of promoting open access to the benefits of private property rights in inventions. The laws were enforced by a judiciary that was willing to grapple with difficult questions, such as the extent to which a democratic and market-oriented political economy was consistent with exclusive rights. Courts explicitly attempted to implement decisions that promoted economic growth and social welfare.

The primary feature of the “American system” is that all applications are subject to an examination for conformity with the laws and for novelty. An examination system was set in place in 1790, when a select committee consisting of the Secretary of State (Thomas Jefferson), the Attorney General and the Secretary of War scrutinized the applications. These duties proved to be too time-consuming for highly ranked officials who had other onerous responsibilities, so three years later the committee was replaced by a registration system. The validity of patents was left up to the district courts, which had the power to set in motion a process that could end in the repeal of a patent. However, by the 1830s this process was viewed as cumbersome, and the statute passed in 1836 set in place the essential structure of the current patent system. In particular, the 1836 Patent Law established the Patent Office, whose trained and technically qualified employees were authorized to examine applications. Employees of the Patent Office were not permitted to obtain patent rights. In order to constrain the ability of examiners to engage in arbitrary actions, the applicant was given the right to file a bill in equity to contest the decisions of the Patent Office, with the further right of appeal to the Supreme Court of the United States.

American patent policy likewise stands out in its insistence on affordable fees. The legislature debated the question of appropriate fees, and the first patent law in 1790 set the rate at the minimal sum of $3.70 plus copy costs. In 1793 the fees were increased to $30, and were maintained at this level until 1861. In that year, they were raised to $35, and the term of the patent was changed from fourteen years (with the possibility of an extension) to seventeen years (with no extensions). The 1869 Report of the Commissioner of Patents compared the $35 fee for a U.S. patent to the significantly higher charges in European countries such as Britain, France, Russia ($450), Belgium ($420) and Austria ($350). The Commissioner speculated that both the private and social costs of patenting were lower in a system of impartial specialized examiners than under a system where similar services were performed on a fee-per-service basis by private solicitors. He pointed out that in the U.S. the fees were not intended to exact a price for the patent privilege or to raise revenues for the state – the disclosure of information was the sole price of the patent property right – rather, they were imposed merely to cover the administrative expenses of the Office.

The basic parameters of the U.S. patent system were transparent and predictable, in itself an aid to those who wished to obtain patent rights. In addition, American legislators were concerned with ensuring that information about the stock of patented knowledge was readily available and diffused rapidly. As early as 1805 Congress stipulated that the Secretary of State should publish an annual list of patents granted the preceding year, and after 1832 also required the publication in newspapers of notices regarding expired patents. The Patent Office itself was a source of centralized information on the state of the arts. However, Congress was also concerned with the question of providing for decentralized access to patent materials. The Patent Office maintained repositories throughout the country, where inventors could forward their patent models at the expense of the Patent Office. Rural inventors could apply for patents without significant obstacles, because applications could be submitted by mail free of postage.

American laws employed the language of the English statute in granting patents to “the first and true inventor.” Nevertheless, unlike in England, the phrase was used literally, to grant patents for inventions that were original in the world, not simply within U.S. borders. American patent laws provided strong protection for citizens of the United States, but varied over time in their treatment of foreign inventors. Americans could not obtain patents for imported discoveries, and the earliest statutes, of 1793, 1800 and 1832, restricted patent property to citizens or to residents who declared that they intended to become citizens. As such, while an American could not appropriate patent rights to a foreign invention, he could freely use the idea without any need to bear licensing or similar costs that would otherwise have been due if the inventor had been able to obtain a patent in this country. In 1836, the stipulations on citizenship or residency were removed, but were replaced with discriminatory patent fees: foreigners could obtain a patent in the U.S. for a fee of three hundred dollars, or five hundred if they were British. After 1861 patent rights (with the exception of caveats) were available to all applicants on the same basis without regard to nationality.

The American patent system was based on the presumption that social welfare coincided with the individual welfare of inventors. Accordingly, legislators rejected restrictions on the rights of American inventors. However, the 1832 and 1836 laws stipulated that foreigners had to exploit their patented invention within eighteen months. These clauses seem to have been interpreted by the courts in a fairly liberal fashion, since alien patentees “need not prove that they hawked the patented improvement to obtain a market for it, or that they endeavored to sell it to any person, but that it rested upon those who sought to defeat the patent to prove that the plaintiffs neglected or refused to sell the patented invention for reasonable prices when application was made to them to purchase.” Such provisions proved to be temporary aberrations and were not included in subsequent legislation. Working requirements or compulsory licenses were regarded as unwarranted infringements of the rights of “meritorious inventors,” and incompatible with the philosophy of U.S. patent grants. Patentees were not required to pay annuities to maintain their property, there were no opposition proceedings, and once granted a patent could not be revoked unless there was proven evidence of fraud.

One of the advantages of a system that secures property rights is that it facilitates contracts and trade. Assignments provide a straightforward index of the effectiveness of the American system, since trade in inventions would hardly proliferate if patent rights were uncertain or worthless. An extensive national network of licensing and assignments developed early on, aided by legal rulings that overturned contracts for useless or fraudulent patents. In 1845 the Patent Office recorded 2,108 assignments, which can be compared to the cumulative stock of 7,188 patents that were still in force in that year. By the 1870s the number of assignments averaged over 9,000 per year, and this increased in the next decade to over 12,000 transactions recorded annually. This flourishing market for patented inventions provided an incentive for further inventive activity for inventors who were able to appropriate the returns from their efforts, and also linked patents and productivity growth.

Property rights are worth little unless they can be legally enforced in a consistent, certain, and predictable manner. A significant part of the explanation for the success of the American intellectual property system relates to the efficiency with which the laws were interpreted and implemented. United States federal courts from their inception attempted to establish a store of doctrine that fulfilled the intent of the Constitution to secure the rights of intellectual property owners. The judiciary acknowledged that inventive efforts varied with the extent to which inventors could appropriate the returns on their discoveries, and attempted to ensure that patentees were not unjustly deprived of the benefits from their inventions. Numerous reported decisions before the early courts declared that, rather than unwarranted monopolies, patent rights were “sacred” and to be regarded as the just recompense to inventive ingenuity. Early courts had to grapple with a number of difficult issues, such as the appropriate measure of damages, disputes between owners of conflicting patents, and how to protect the integrity of contracts when the law altered. Changes inevitably occurred when litigants and judiciary both adapted to a more complex inventive and economic environment. However, the system remained true to the Constitution in the belief that the defense of rights in patented invention was important in fostering industrial and economic development.

Economists such as Joseph Schumpeter have linked market concentration and innovation, and patent rights are often felt to encourage the establishment of monopoly enterprises. Thus, an important aspect of the enforcement of patents and intellectual property in general depends on competition or antitrust policies. The attitudes of the judiciary towards patent conflicts are primarily shaped by their interpretation of the monopoly aspect of the patent grant. The American judiciary in the early nineteenth century did not regard patents as monopolies, arguing that patentees added to social welfare through innovations which had never existed before, whereas monopolists secured to themselves rights that already belonged to the public. Ultimately, the judiciary came to openly recognize that the enforcement and protection of all property rights involved trade-offs between individual monopoly benefits and social welfare.

The passage of the Sherman Act in 1890 was associated with a populist emphasis on the need to protect the public from corporate monopolies, including those based on patent protection, and raised the prospect of conflicts between patent policies and the promotion of social welfare through industrial competition. Firms have rarely been charged directly with antitrust violations based on patent issues. At the same time, a number of landmark restraint of trade lawsuits have involved technological innovators. These range from cases in the early decades of the twentieth century against innovative enterprises such as John Deere & Co., American Can and International Harvester, through to the numerous cases since 1970 against IBM, Xerox, Eastman Kodak and, most recently, Intel and Microsoft. The evidence suggests that, holding other factors constant, more innovative firms and those with larger patent stocks are more likely to be charged with antitrust violations. A growing fraction of cases involve firms jointly charged with antitrust violations that are linked to patent-based market power and to concerns about “innovation markets.”

The Japanese Patent System

Japan emerged from the Meiji era as a follower nation which deliberately designed institutions to try to emulate those of the most advanced industrial countries. Accordingly, in 1886 Takahashi Korekiyo was sent on a mission to examine patent systems in Europe and the United States. The Japanese envoy was not favorably impressed with the European countries in this regard. Instead, he reported: “... we have looked about us to see what nations are the greatest, so that we could be like them; ... and we said, ‘What is it that makes the United States such a great nation?’ and we investigated and we found it was patents, and we will have patents.” The first national patent statute in Japan was passed in 1888, and copied many features of the U.S. system, including the examination procedures.

However, even in the first statute, differences existed that reflected Japanese priorities and the “wise eclecticism of Japanese legislators.” For instance, patents were not granted to foreigners; protection could not be obtained for fashion, food products, or medicines; patents that were not worked within three years could be revoked; and severe remedies were imposed for infringement, including penal servitude. After Japan became a signatory of the Paris Convention, a new law was passed in 1899 that amended existing legislation to accord with the agreements of the Convention and extended protection to foreigners. The influence of the German laws was evident in subsequent reforms in 1909 (petty or utility patents were protected) and 1921 (protection was removed from chemical products, work for hire doctrines were adopted, and an opposition procedure was introduced). The Act of 1921 also permitted the state to revoke a patent grant on payment of appropriate compensation if it was deemed in the public interest. Medicines, food and chemical products could not be patented, but protection could be obtained for processes relating to their manufacture.

The modern Japanese patent system is an interesting amalgam of features drawn from the major patent institutions in the world. Patent applications are filed, and the applicants then have seven years within which they can request an examination. Before 1996 examined patents were published prior to the actual grant, and could be opposed before the final grant; but at present, opposition can only occur in the first six months after the initial grant. Patents are also given for utility models or incremental inventions which are required to satisfy a lower standard of novelty and nonobviousness and can be more quickly commercialized. It has been claimed that the Japanese system favors the filing of a plethora of narrowly defined claims for utility models that build on the more substantive contributions of patent grants, leading to the prospect of an anti-commons through “patent flooding.” Others argue that utility models aid diffusion and innovation in the early stages of the patent term, and that the pre-grant publication of patent specifications also promotes diffusion.

Harmonization of International Patent Laws

As discussed above, in the second half of the nineteenth century the “patent controversy” pitted advocates of patent rights against an effective abolitionist movement, which for a time won support for dismantling patent systems in countries such as England and achieved its greatest victory when Holland repealed its patent legislation in 1869.

The decisive victory of the patent proponents shifted the focus of interest to the other extreme, and led to efforts to attain uniformity in intellectual property rights regimes across countries. Part of the impetus for change arose because the costs of discordant national rules became more burdensome as the volume of international trade in industrial products grew over time. Americans were also concerned about the lack of protection accorded to their exhibits at the increasingly prominent World’s Fairs. Indeed, the first international patent convention was held in Austria in 1873, at the suggestion of U.S. policy makers, who wanted to be certain that their inventors would be adequately protected at the International Exposition in Vienna that year. It also yielded an opportunity to protest the provisions in Austrian law which discriminated against foreigners, including a requirement that patents had to be worked within one year or risk invalidation. The Vienna Convention adopted several resolutions, including a recommendation, which the United States opposed, in favor of compulsory licenses if they were deemed in the public interest. However, the convention followed the U.S. lead and did not approve compulsory working requirements.

International conventions proliferated in subsequent years, and their tenor tended to reflect the opinions of the conveners. Their objective was not to reach compromise solutions that would reflect the needs and wishes of all participants, but rather to promote preconceived ideas. The overarching goal was to pursue uniform international patent laws, although there was little agreement about the finer points of these laws. It became clear that the goal of complete uniformity was not practicable, given the different objectives, ideologies and economic circumstances of participants. Nevertheless, in 1884 the International Union for the Protection of Industrial Property was signed by Belgium, Portugal, France, Guatemala, Italy, the Netherlands, San Salvador, Serbia, Spain and Switzerland. The United States became a member in 1887, and a significant number of developing countries followed suit, including Brazil, Bulgaria, Cuba, the Dominican Republic, Ceylon, Mexico, Trinidad and Tobago and Indonesia, among others.

The United States was the most prolific patenting nation in the world, many of the major American enterprises owed their success to patents and were expanding into international markets, and the U.S. patent system was recognized as the most successful. It is therefore not surprising that patent harmonization implied convergence towards the American model, despite resistance from other nations. Countries such as Germany were initially averse to extending equal protection to foreigners because they feared that their domestic industry would be overwhelmed by American patents. Ironically, because its patent laws were the most liberal towards patentees, the United States found itself in a weaker bargaining position than nations that could make concessions by changing their provisions. The U.S. pressed for the adoption of reciprocity (which would ensure that American patentees were treated as favorably abroad as in the United States), but this principle was rejected in favor of “national treatment” (American patentees were to be granted the same rights as nationals of the foreign country). This likely influenced the U.S. tendency to use bilateral trade sanctions rather than multilateral conventions to obtain reforms in international patent policies.

It was commonplace in the nineteenth century to rationalize and advocate close links between trade policies, protection, and international laws regarding intellectual property. These links were evident at the most general philosophical level, and at the most specific, especially in terms of compulsory working requirements and provisions to allow imports by the patentee. For instance, the 1880 Paris Convention considered the question of imports of the patented product by the patentee. According to the laws of France, Mexico and Tunisia, such importation would result in the repeal of the patent grant. The Convention inserted an article that explicitly ruled out forfeiture of the patent under these circumstances, which led some French commentators to argue that “the laws on industrial property… will be truly disastrous if they do not have a counterweight in tariff legislation.” The movement to create an international patent system elucidated the fact that intellectual property laws do not exist in a vacuum, but are part of a bundle of rights that are affected by other laws and policies.

Conclusion

Appropriate institutions to promote creations in the material and intellectual sphere are especially critical because ideas and information are public goods that are characterized by nonrivalry and nonexclusion. Once the initial costs are incurred, ideas can be reproduced at zero marginal cost and it may be difficult to exclude others from their use. Thus, in a competitive market, public goods may suffer from underprovision or may never be created because of a lack of incentive on the part of the original provider who bears the initial costs but may not be able to appropriate the benefits. Market failure can be ameliorated in several ways, for instance through government provision, rewards or subsidies to original creators, private patronage, and through the creation of intellectual property rights.

Patents allow the initial producers a limited period during which they are able to benefit from a right of exclusion. If creativity is a function of expected profits, these grants to inventors have the potential to increase social production possibilities at lower cost. Disclosure requirements promote diffusion, and the expiration of the temporary monopoly right ultimately adds to the public domain. Overall welfare is enhanced if the social benefits of diffusion outweigh the deadweight and social costs of temporary exclusion. This period of exclusion may be costly for society, especially if future improvements are deterred, and if rent-seeking such as redistributive litigation results in wasted resources. Much attention has also been accorded to theoretical features of the optimal system, including the breadth, longevity, and height of patent and copyright grants.

However, strongly enforced rights do not always benefit the producers and owners of intellectual property rights, especially if there is a prospect of cumulative invention where follow-on inventors build on the first discovery. Thus, more nuanced models are ambivalent about the net welfare benefits of strong exclusive rights to inventions. Indeed, network models imply that the social welfare of even producers may increase from weak enforcement if more extensive use of the product increases the value to all users. Under these circumstances, the patent owner may benefit from the positive externalities created by piracy. In the absence of royalties, producers may appropriate returns through ancillary means, such as the sale of complementary items or improved reputation. In a variant of the durable-goods monopoly problem, it has been shown that piracy can theoretically increase the demand for products by ensuring that producers can credibly commit to uniform prices over time. Also in this vein, price and/or quality discrimination of non-private goods across pirates and legitimate users can result in net welfare benefits for society and for the individual firm. If the cost of imitation increases with quality, infringement can also benefit society if it causes firms to adopt a strategy of producing higher quality commodities.

Economic theorists who are troubled by the imperfections of intellectual property grants have proposed alternative mechanisms that lead to more satisfactory mathematical solutions. Theoretical analyses have advanced our understanding in this area, but such models by their nature cannot capture many complexities. They tend to overlook such factors as the potential for greater corruption or arbitrariness in the administration of alternatives to patents. Similarly, they fail to appreciate the role of private property rights in conveying information and facilitating markets, and their value in reducing risk and uncertainty for independent inventors with few private resources. The analysis becomes even less satisfactory when producers and consumers belong to different countries. Thus, despite the flurry of academic research on the economics of intellectual property, we have not progressed far beyond Fritz Machlup’s declaration that our state of knowledge does not allow us to recommend either the introduction or the removal of such systems. Existing studies leave a wide area of ambiguity about the causes and consequences of institutional structures in general, and about their evolution across time and region.

In the realm of intellectual property, questions from four centuries ago are still current, ranging from its philosophical underpinnings, to whether patents and copyrights constitute optimal policies towards intellectual inventions, to the growing concerns of international political economy. A number of scholars are so impressed with technological advances in the twenty-first century that they argue we have reached a critical juncture where we need completely new institutions. Throughout their history, patent and copyright regimes have confronted and accommodated technological innovations that were no less significant and contentious for their time. An economist from the nineteenth century would have been equally familiar with considerations about whether uniformity in intellectual property rights across countries harmed or benefited global welfare, and whether piracy might be to the advantage of developing countries. Similarly, the link between trade and intellectual property rights that informs the TRIPS (trade-related aspects of intellectual property rights) agreement was quite standard two centuries ago.

Today the majority of patents are filed in developed countries by the residents of developed countries, most notably those of Japan and the United States. The developing countries of the twenty-first century are under significant political pressure to adopt stronger patent laws and enforcement, even though few patents are filed by residents of the developing countries. Critics of intellectual property rights point to costs, such as monopoly rents and higher barriers to entry, administrative costs, outflows of royalty payments to foreign entities, and a lack of indigenous innovation. Other studies, however, have more optimistic findings regarding the role of patents in economic and social development. They suggest that stronger protection can encourage more foreign direct investment, greater access to technology, and increased benefits from trade openness. Moreover, both economic history and modern empirical research indicate that stronger patent rights and more effective markets in invention can, by encouraging and enabling the inventiveness of ordinary citizens of developing countries, help to increase social and economic welfare.

Patent Statistics for France, Britain, the United States, and Germany, 1790-1960
(A period indicates that data are not available.)
YEAR FRANCE BRITAIN U.S. GERMANY
1790 . 68 3 .
1791 34 57 33 .
1792 29 85 11 .
1793 4 43 20 .
1794 0 55 22 .
1795 1 51 12 .
1796 8 75 44 .
1797 4 54 51 .
1798 10 77 28 .
1799 22 82 44 .
1800 16 96 41 .
1801 34 104 44 .
1802 29 107 65 .
1803 45 73 97 .
1804 44 60 84 .
1805 63 95 57 .
1806 101 99 63 .
1807 66 94 99 .
1808 61 95 158 .
1809 52 101 203 .
1810 93 108 223 .
1811 66 115 215 0
1812 96 119 238 2
1813 88 142 181 2
1814 53 96 210 1
1815 77 102 173 10
1816 115 118 206 10
1817 162 103 174 16
1818 153 132 222 18
1819 138 101 156 10
1820 151 97 155 10
1821 180 109 168 11
1822 175 113 200 8
1823 187 138 173 22
1824 217 180 228 25
1825 321 250 304 17
1826 281 131 323 67
1827 333 150 331 69
1828 388 154 368 87
1829 452 130 447 59
1830 366 180 544 57
1831 220 150 573 34
1832 287 147 474 46
1833 431 180 586 76
1834 576 207 630 66
1835 556 231 752 73
1836 582 296 702 65
1837 872 256 426 46
1838 1312 394 514 104
1839 730 411 404 125
1840 947 440 458 156
1841 925 440 490 162
1842 1594 371 488 153
1843 1397 420 493 160
1844 1863 450 478 158
1845 2666 572 473 256
1846 2750 493 566 252
1847 2937 493 495 329
1848 1191 388 583 256
1849 1953 514 984 253
1850 2272 523 883 308
1851 2462 455 752 274
1852 3279 1384 885 272
1853 4065 2187 844 287
1854 4563 1878 1755 276
1855 5398 2046 1881 287
1856 5761 1094 2302 393
1857 6110 2028 2674 414
1858 5828 1954 3455 375
1859 5439 1977 4160 384
1860 6122 2063 4357 550
1861 5941 2047 3020 551
1862 5859 2191 3214 630
1863 5890 2094 3773 633
1864 5653 2024 4630 557
1865 5472 2186 6088 609
1866 5671 2124 8863 549
1867 6098 2284 12277 714
1868 6103 2490 12526 828
1869 5906 2407 12931 616
1870 3850 2180 12137 648
1871 2782 2376 11659 458
1872 4875 2771 12180 958
1873 5074 2974 11616 1130
1874 5746 3162 12230 1245
1875 6007 3112 13291 1382
1876 6736 3435 14169 1947
1877 7101 3317 12920 1604
1878 7981 3509 12345 4200
1879 7828 3524 12165 4410
1880 7660 3741 12902 3960
1881 7813 3950 15500 4339
1882 7724 4337 18091 4131
1883 8087 3962 21162 4848
1884 8253 9983 19118 4459
1885 8696 8775 23285 4018
1886 9011 9099 21767 4008
1887 8863 9226 20403 3882
1888 8669 9309 19551 3923
1889 9287 10081 23324 4406
1890 9009 10646 25313 4680
1891 9292 10643 22312 5550
1892 9902 11164 22647 5900
1893 9860 11600 22750 6430
1894 10433 11699 19855 6280
1895 10257 12191 20856 5720
1896 11430 12473 21822 5410
1897 12550 14210 22067 5440
1898 12421 14167 20377 5570
1899 12713 14160 23278 7430
1900 12399 13710 24644 8784
1901 12103 13062 25546 10508
1902 12026 13764 27119 10610
1903 12469 15718 31029 9964
1904 12574 15089 30258 9189
1905 12953 14786 29775 9600
1906 13097 14707 31170 13430
1907 13170 16272 35859 13250
1908 13807 16284 32735 11610
1909 13466 15065 36561 11995
1910 16064 15269 35141 12100
1911 15593 17164 32856 12640
1912 15737 15814 36198 13080
1913 15967 16599 33917 13520
1914 12161 15036 39892 12350
1915 5056 11457 43118 8190
1916 3250 8424 43892 6271
1917 4100 9347 40935 7399
1918 4400 10809 38452 7340
1919 10500 12301 36797 7766
1920 18950 14191 37060 14452
1921 17700 17697 37798 15642
1922 18300 17366 38369 20715
1923 19200 17073 38616 20526
1924 19200 16839 42584 18189
1925 18000 17199 46432 15877
1926 18200 17333 44733 15500
1927 17500 17624 41717 15265
1928 22000 17695 42357 15598
1929 24000 18937 45267 20202
1930 24000 20888 45226 26737
1931 24000 21949 51761 25846
1932 21850 21150 53504 26201
1933 20000 17228 48807 21755
1934 19100 16890 44452 17011
1935 18000 17675 40663 16139
1936 16700 17819 39831 16750
1937 16750 17614 37738 14526
1938 14000 19314 38102 15068
1939 15550 17605 43118 16525
1940 10100 11453 42323 14647
1941 8150 11179 41171 14809
1942 10000 7962 38514 14648
1943 12250 7945 31101 14883
1944 11650 7712 28091 .
1945 7360 7465 25712 .
1946 11050 8971 21859 .
1947 13500 11727 20191 .
1948 13700 15558 24007 .
1949 16700 20703 35224 .
1950 17800 13509 43219 .
1951 25200 13761 44384 27767
1952 20400 21380 43717 37179
1953 43000 17882 40546 37113
1954 34000 17985 33910 19140
1955 23000 20630 30535 14760
1956 21900 19938 46918 18150
1957 23000 25205 42873 20467
1958 24950 18531 48450 19837
1959 41600 18157 52509 22556
1960 35000 26775 47286 19666

Additional Reading

Khan, B. Zorina. The Democratization of Invention: Patents and Copyrights in American Economic Development. New York: Cambridge University Press, 2005.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Technological Innovation during Early Economic Growth, 1790-1930.” NBER Working Paper No. 10966. Cambridge, MA: December 2004. (Available at www.nber.org.)

Bibliography

Besen, Stanley M., and Leo J. Raskind. “Introduction to the Law and Economics of Intellectual Property.” Journal of Economic Perspectives 5, no. 1 (1991): 3-27.

Bugbee, Bruce. The Genesis of American Patent and Copyright Law. Washington, DC: Public Affairs Press, 1967.

Coulter, Moureen. Property in Ideas: The Patent Question in Mid-Victorian England. Kirksville, MO: Thomas Jefferson Press, 1991.

Dutton, H. I. The Patent System and Inventive Activity during the Industrial Revolution, 1750-1852. Manchester, UK: Manchester University Press, 1984.

Epstein, R. “Industrial Inventions: Heroic or Systematic?” Quarterly Journal of Economics 40 (1926): 232-72.

Gallini, Nancy T. “The Economics of Patents: Lessons from Recent U.S. Patent Reform.” Journal of Economic Perspectives 16, no. 2 (2002): 131–54.

Gilbert, Richard and Carl Shapiro. “Optimal Patent Length and Breadth.” Rand Journal of Economics 21 (1990): 106-12.

Gilfillan, S. Colum. The Sociology of Invention. Cambridge, MA: Follett, 1935.

Gomme, A. A. Patents of Invention: Origin and Growth of the Patent System in Britain. London: Longmans Green, 1946.

Harding, Herbert. Patent Office Centenary. London: Her Majesty’s Stationery Office, 1953.

Hilaire-Pérez, Liliane. Inventions et Inventeurs en France et en Angleterre au XVIIIe siècle. Lille: Université de Lille, 1994.

Hilaire-Pérez, Liliane. L’invention technique au siècle des Lumières. Paris: Albin Michel, 2000.

Jeremy, David J. Transatlantic Industrial Revolution: The Diffusion of Textile Technologies between Britain and America, 1790-1830s. Cambridge, MA: MIT Press, 1981.

Khan, B. Zorina. “Property Rights and Patent Litigation in Early Nineteenth-Century America.” Journal of Economic History 55, no. 1 (1995): 58-97.

Khan, B. Zorina. “Married Women’s Property Right Laws and Female Commercial Activity.” Journal of Economic History 56, no. 2 (1996): 356-88.

Khan, B. Zorina. “Federal Antitrust Agencies and Public Policy towards Patents and Innovation.” Cornell Journal of Law and Public Policy 9 (1999): 133-69.

Khan, B. Zorina. “‘Not for Ornament’: Patenting Activity by Women Inventors.” Journal of Interdisciplinary History 33, no. 2 (2000): 159-95.

Khan, B. Zorina. “Technological Innovations and Endogenous Changes in U.S. Legal Institutions, 1790-1920.” NBER Working Paper No. 10346. Cambridge, MA: March 2004. (Available at www.nber.org.)

Khan, B. Zorina, and Kenneth L. Sokoloff. “‘Schemes of Practical Utility’: Entrepreneurship and Innovation among ‘Great Inventors’ in the United States, 1790-1865.” Journal of Economic History 53, no. 2 (1993): 289-307.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Entrepreneurship and Technological Change in Historical Perspective.” Advances in the Study of Entrepreneurship, Innovation, and Economic Growth 6 (1993): 37-66.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Two Paths to Industrial Development and Technological Change.” In Technological Revolutions in Europe, 1760-1860, edited by Maxine Berg and Kristine Bruland. London: Edward Elgar, 1997.

Khan, B. Zorina, and Kenneth L. Sokoloff. “The Early Development of Intellectual Property Institutions in the United States.” Journal of Economic Perspectives 15, no. 3 (2001): 233-46.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Innovation of Patent Systems in the Nineteenth Century: A Comparative Perspective.” Unpublished manuscript (2001).

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Democratic Invention in Nineteenth-century America.” American Economic Review Papers and Proceedings 94 (2004): 395-401.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Technological Innovation during Early Economic Growth: Evidence from the Great Inventors of the United States, 1790-1930.” In Institutions and Economic Growth, edited by Theo Eicher and Cecilia Garcia-Penalosa. Cambridge, MA: MIT Press, 2006.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “Long-Term Change in the Organization of Inventive Activity.” Science, Technology and the Economy 93 (1996): 1286-92.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “The Geography of Invention in the American Glass Industry, 1870-1925.” Journal of Economic History 60, no. 3 (2000): 700-29.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “Market Trade in Patents and the Rise of a Class of Specialized Inventors in the Nineteenth-century United States.” American Economic Review 91, no. 2 (2001): 39-44.

Landes, David S. Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present. Cambridge: Cambridge University Press, 1969.

Lerner, Josh. “Patent Protection and Innovation over 150 Years.” NBER Working Paper No. 8977. Cambridge, MA: June 2002.

Levin, Richard, A. Klevorick, R. Nelson and S. Winter. “Appropriating the Returns from Industrial Research and Development.” Brookings Papers on Economic Activity 3 (1987): 783-820.

Lo, Shih-Tse. “Strengthening Intellectual Property Rights: Evidence from the 1986 Taiwanese Patent Reforms.” Ph.D. diss., University of California at Los Angeles, 2005.

Machlup, Fritz. An Economic Review of the Patent System. Washington, DC: U.S. Government Printing Office, 1958.

Machlup, Fritz. “The Supply of Inventors and Inventions.” In The Rate and Direction of Inventive Activity, edited by R. Nelson. Princeton: Princeton University Press, 1962.

Machlup, Fritz, and Edith Penrose. “The Patent Controversy in the Nineteenth Century.” Journal of Economic History 10, no. 1 (1950): 1-29.

Macleod, Christine. Inventing the Industrial Revolution. Cambridge: Cambridge University Press, 1988.

McCloy, Shelby T. French Inventions of the Eighteenth Century. Lexington: University of Kentucky Press, 1952.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Growth. New York: Oxford University Press, 1990.

Moser, Petra. “How Do Patent Laws Influence Innovation? Evidence from Nineteenth-century World Fairs.” American Economic Review 95, no. 4 (2005): 1214-36.

O’Dell, T. H. Inventions and Official Secrecy: A History of Secret Patents in the United Kingdom. Oxford: Clarendon Press, 1994.

Penrose, Edith. The Economics of the International Patent System. Baltimore: Johns Hopkins University Press, 1951.

Sáiz González, Patricio. Invención, patentes e innovación en la España contemporánea. Madrid: OEPM, 1999.

Schmookler, Jacob. “Economic Sources of Inventive Activity.” Journal of Economic History 22 (1962): 1-20.

Schmookler, Jacob. Invention and Economic Growth. Cambridge, MA: Harvard University Press, 1966.

Schmookler, Jacob, and Zvi Griliches. “Inventing and Maximizing.” American Economic Review (1963): 725-29.

Schiff, Eric. Industrialization without National Patents: The Netherlands, 1869-1912; Switzerland, 1850-1907. Princeton: Princeton University Press, 1971.

Sokoloff, Kenneth L. “Inventive Activity in Early Industrial America: Evidence from Patent Records, 1790-1846.” Journal of Economic History 48, no. 4 (1988): 813-50.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Sokoloff, Kenneth L., and B. Zorina Khan. “The Democratization of Invention during Early Industrialization: Evidence from the United States, 1790-1846.” Journal of Economic History 50, no. 2 (1990): 363-78.

Sutthiphisal, Dhanoos. “Learning-by-Producing and the Geographic Links between Invention and Production.” Unpublished manuscript, McGill University, 2005.

Takeyama, Lisa N. “The Welfare Implications of Unauthorized Reproduction of Intellectual Property in the Presence of Demand Network Externalities.” Journal of Industrial Economics 42, no. 2 (1994): 155-66.

U.S. Patent Office. Annual Report of the Commissioner of Patents. Washington, DC: various years.

Van Dijk, T. “Patent Height and Competition in Product Improvements.” Journal of Industrial Economics 44, no. 2 (1996): 151-67.

Vojacek, Jan. A Survey of the Principal National Patent Systems. New York: Prentice-Hall, 1936.

Woodcroft, Bennet. Alphabetical Index of Patentees of Inventions [1617-1852]. New York: A. Kelley, 1854, reprinted 1969.

Woodcroft, Bennet. Titles of Patents of Invention: Chronologically Arranged from March 2, 1617 to October 1, 1852. London: Queen’s Printing Office, 1854.

Citation: Khan, B. “An Economic History of Patent Institutions”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-patent-institutions/

The Panic of 1907

Jon Moen, University of Mississippi

The Panic of 1907 was the last and most severe of the bank panics that plagued the National Banking Era of the United States. Severe panics also occurred in 1873, 1884, 1890, and 1893, and numerous smaller financial crises cropped up from time to time. Bank panics were characterized by the widespread appearance of bank runs, attempts by depositors to withdraw their deposits from the banking system simultaneously. Because banks did not (and still do not) keep a 100% reserve against deposits, it paid to be near the front of the line of depositors demanding their money when a panic blew up. What sets 1907 apart from earlier panics is that the crisis centered on the trust companies of New York City. The National Banking Era lasted from 1863 to 1914, when Congress, in part to eliminate these recurring panics, created the Federal Reserve System.

What Caused the Panic?

Why would a panic happen? One answer that is really not of much help is that all depositors suddenly became so concerned about the solvency or liquidity of their bank that they decided they would rather hold cash than deposits (Diamond and Dybvig 1983; Jacklin and Bhattacharya 1988). (Solvency refers to the relationship between assets and liabilities; an insolvent bank has liabilities greater than its assets. Liquidity refers to the ease with which assets can be converted to cash without loss of value; liquid assets are close to cash or have a market in which they can be easily and quickly sold.) Whatever the deeper psychological reasons might be, it is not hard to identify some immediate shocks to depositor confidence that sparked the Panic of 1907. One such shock occurred on October 16, 1907, when F. Augustus Heinze’s scheme to corner the stock of United Copper Company failed. Although United Copper was only a moderately important firm, the collapse of Heinze’s scheme exposed an intricate network of interlocking directorates across banks, brokerage houses, and trust companies in New York City. Contemporary observers like O.M.W. Sprague (1910) believed that the discovery of the close associations between bankers and stockbrokers seriously raised the anxiety of already nervous depositors.

During the National Banking Era the New York money market faced seasonal variations in interest rates and liquidity resulting from the transportation of crops from the interior of the United States to New York and then to Europe. The outflow of capital necessary to finance crop shipments from the Midwest to the East Coast in September or October usually left the New York City money market squeezed for cash. As a result, short-term interest rates in New York City were prone to spike upward in autumn. Seasonal increases in economic activity were not matched by an increase in the money supply because existing domestic monetary structures tended to make the money supply “inelastic.” Usually gold would flow into the United States from Europe in response to the high seasonal interest rates, increasing the monetary base of the United States and easing the liquidity squeeze somewhat.

Under more normal financial conditions, the discovery of a scheme like Heinze’s might not have sparked a panic, but conditions were not normal in the fall of 1907. The economy had been slowing, the stock market had been in decline since early 1907, and the supply of credit had been contracting, causing interest rates to rise. Tight credit markets in Europe, particularly in England, where the Bank of England had been raising its bank rate since December 1906, have been implicated in setting an especially precarious financial stage in 1907. As European interest rates rose, the normal seasonal inflows of foreign gold failed to materialize. Because there was no central bank or reliable lender of last resort during the National Banking Era, there was no dependable way to expand the money supply in the United States.

Heinze’s extensive involvement in New York banking was subsequently linked to one of his close associates, Charles W. Morse. Morse controlled three national banks directly and was a director of four others. After the failure of his attempt to corner United Copper stock, Heinze was forced to resign the presidency of Mercantile National Bank, and worried depositors began a run on the bank. Depositors began runs on several of the banks controlled by Morse as well. The New York Clearinghouse, a private organization formed by banks to centralize check clearing (a check clears when it is finally presented to the bank on which it was originally written for payment in cash or reserves), had its examiner analyze the banks’ assets. On the basis of the examination, the Clearinghouse authorities stated that they would support Mercantile and the other banks on the condition that Heinze and Morse retire from banking in New York. On Monday, October 21, Mercantile National resumed business with new management, and the runs on these national banks ceased.

The Panic at the Trust Companies

By October 21, however, nothing resembling a systemic panic had yet struck the New York banking system. Depositors at Mercantile Bank withdrew funds but redeposited them in other New York City banks. Many accounts of the Panic of 1907 cite Monday, October 21, as the beginning of the crisis among the trust companies and the true onset of the panic. Late that Monday afternoon the National Bank of Commerce announced that it would stop clearing checks for the Knickerbocker Trust Company, the third largest trust company in New York City. Vincent Carosso (1987), however, suggests that the run on Knickerbocker began Friday, October 18, when Charles Barney, the Knickerbocker president, was reported to have been involved in Heinze’s copper corner. Drawing on the private papers of J.P. Morgan, Carosso notes that the National Bank of Commerce had been extending loans to the Knickerbocker Trust to hold off depositor runs. National Bank of Commerce’s refusal to continue acting as a clearing agent for Knickerbocker was interpreted as a vote of no confidence that seriously alarmed Knickerbocker depositors.

On Monday evening, October 21, J.P. Morgan organized a meeting of trust company executives to discuss ways to halt the panic. Morgan, along with James Stillman of National City Bank and George Baker of First National Bank, had earlier organized an informal team to oversee relief efforts during the panic at the national banks (Carosso 1987). Assisting them were several young financial experts responsible for evaluating the assets of troubled institutions and indicating which ones were worthy of aid. Chief among these investigators was Benjamin Strong of Banker’s Trust Company, who would later become president of the Federal Reserve Bank of New York. Strong reported to Morgan that he was unable to evaluate Knickerbocker’s financial condition in the short time before funds would have to be committed. Unwilling to act on limited information, Morgan decided not to aid the trust; this decision kept other institutions from offering substantial aid as well. It appears that at first Morgan was uninterested in aiding the trust companies in general, as he felt they should pay for their risky behavior. It is not clear that they were riskier; perhaps Morgan just did not want to aid intermediaries competing with the banks. On October 22 Knickerbocker underwent a run for three hours before suspending operations just after noon, having paid out $8 million in cash.

Ominously, next to the front-page article describing the run on the Knickerbocker Trust in the Wednesday, October 23, edition of the New York Times was a headline describing the Trust Company of America, the second largest trust company in New York City, as the current “sore point” in the panic. By attracting attention to the Trust Company of America, the newspaper article greatly exacerbated the serious run on it. Barney, who was president of Knickerbocker, was also a member of the board of directors of Trust Company of America.

On Tuesday, October 22, withdrawals from Trust Company of America were approximately $1.5 million; on Wednesday, when the ill-timed article was published, depositors claimed another $13 million of nearly $60 million in total deposits. Withdrawals from Trust Company of America on Thursday, October 24, totaled a further $8 million to $9 million. During the span of the run, which lasted two weeks, Trust Company of America reportedly paid out $47.5 million in deposits.

Saving the Trusts

Realizing that the failure of Trust Company of America and Lincoln Trust, another trust company whose distress had been publicized, would endanger the New York money market, five leading trust company presidents formed a committee to assist trusts needing cash. Not all trusts were willing to cooperate, however, so the committee was not able to collect enough cash to provide reliable relief for a trust company facing a sudden run. They petitioned Morgan for more help.

Morgan, Baker, and Stillman knew that aid for Trust Company of America was not certain and saw that the collapse of several large trusts would be disastrous. Strong had arrived at Trust Company of America sometime after 2:00 A.M. Wednesday and had begun to appraise its assets. That afternoon he reported to Morgan that Trust Company was basically sound and deserved assistance. Morgan channeled about $3 million to Trust Company just before closing time, which allowed it to resume business the next day.

Aid began to come from several other sources. J.D. Rockefeller deposited $10 million with the Union Trust to help the trusts and announced his support for Morgan. Secretary of the Treasury George Cortelyou and the major New York financiers met on the evening of Wednesday, October 23, and discussed plans to combat the crisis. Cortelyou deposited $25 million of the Treasury’s funds in national banks the following morning. Between October 21 and October 31, the Treasury deposited a total of $37.6 million in New York national banks and provided $36 million in small bills to meet runs. By the middle of November, however, the U.S. Treasury’s working capital had dwindled to $5 million. Thus Treasury could not and did not contribute much more aid during the rest of the panic (Timberlake 1978, 1993).

The Connection to the Stock Market

Meanwhile, by Thursday, October 24, call money on the New York Stock Exchange was nearly unobtainable. Call money was money lent for the purchase of stock equity, with the stock itself serving as collateral for the loans. Call loans could be called in at any time. The opening rate for call money was 6 percent, but exchange president Ransom H. Thomas noticed a serious scarcity of money. At one point that morning a bid of 60 percent went out for call money. Yet, even at that exorbitant rate, no money was offered. The last recorded transaction of the day was at the opening rate of 6 percent. Fearing a total collapse of the stock market, Thomas called Stillman for aid. Stillman referred Thomas to Morgan, who was in control of most of the available funds. While Thomas traveled to Morgan’s office, the call money rate on the exchange reached 100 percent.

On October 25 another money pool was required. About $10 million came from the Morgan group, $2 million from First National, and $500,000 from Kuhn, Loeb, and Company. This time, however, Morgan allowed the market to determine the call money rate, which remained at nearly 50 percent most of the day. The Morgan funds came with restrictions designed to stifle speculation. First, no margin sales were allowed, only cash sales for investment. Also, the full amount of Morgan money was not released until afternoon. Throughout the stock exchange crisis, both Trust Company of America and Lincoln Trust were supported by Morgan’s efforts. The two trusts required further aid, and Morgan convinced other trust presidents to support a $25 million loan for the troubled institutions. The funds were provided on November 4 after several nights of negotiation. The panic began to ease when the trust company presidents organized by Morgan agreed to form a consortium to support trust companies facing runs.

The most severe runs on deposits in New York City were limited to the trust companies, not the state or national banks. Deposits contracted at all the trusts in New York, not just the prominent ones like Knickerbocker (Moen and Tallman 1992). This raises a question. If only the trust companies were being run by depositors, why would the banks want to help their competitors? The stock market provides a key link. Runs on deposits forced trusts to liquidate their most liquid assets, call loans on the stock market. Large-scale liquidation of call loans depressed the value of stocks because the stock serving as collateral for the call loan had to be sold quickly to pay off the loan. The sudden increase in the supply of stock would depress stock prices. Given the predominance of national banks in the call loan market, extensive liquidation of call loans by trusts threatened the assets of national banks. National banks and the clearinghouse were aware that they were economically linked to the trust companies through the call loan market. They realized that runs on the trusts could spread to the national banks through the call loan market, giving the banks a strong financial incentive to help the trusts stop the panic, even if they had no legal interest.

The New York Clearinghouse Association Steps In

While financiers were working out the crises with the trusts and the call loan market, money and reserves had become increasingly tight at banks. On October 26 the Clearinghouse issued clearinghouse loan certificates as an artificial mechanism to increase the supply of currency available to the public, a tactic it had used in earlier financial crises in 1873 and 1893 (Timberlake 1984; Gorton 1985; Tallman 1988).

Although the national banking system offered no legal mechanism to increase the supply of currency quickly, loan certificates provided an informal (if unlawful) way to free up a sizable amount of cash. In normal business, banks used currency as reserve assets and as the medium to clear accounts with each other. Clearinghouse loan certificates enabled banks to convert their noncash assets into cash during a crisis: banks would substitute loan certificates for currency in their clearings, thus releasing the currency to pay depositors who demanded cash. In effect, loan certificates were IOUs between banks, backed by eligible assets of the issuing bank. Loan certificates were not recognized as currency by the public or by depositors, and they could legally circulate only among banks, not among the public. A. Piatt Andrew (1908) noted, however, that during the 1907 Panic a number of substitutes for cash were employed in transactions.
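The arithmetic of this substitution can be sketched in a few lines. The dollar figures below are hypothetical, chosen only to illustrate the mechanism described above, not drawn from 1907 records:

```python
# Toy illustration of clearinghouse loan certificates (hypothetical figures).
# A bank ordinarily settles its daily clearings in currency; during a panic
# it may settle with loan certificates instead, freeing that currency to
# pay depositors who demand cash.

def currency_freed(clearing_obligations, certificates_authorized):
    """Currency released by substituting certificates for cash in clearings.

    A bank cannot substitute more certificates than it owes in clearings.
    """
    return min(clearing_obligations, certificates_authorized)

# Suppose a bank owes $500,000 in clearings and the clearinghouse has
# authorized $300,000 in certificates against its eligible assets:
print(currency_freed(500_000, 300_000))  # 300000
```

The certificates themselves stayed inside the interbank clearing system; only the displaced currency reached the public.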

Following the first issue of clearinghouse loan certificates on October 26 during the 1907 Panic, loans initially increased by about $11 million. During the next three weeks more than $110 million in certificates were issued in New York City. Over the entire course of the Panic, nearly $500 million in currency substitutes circulated throughout the country as a “principal means of payment,” according to Andrew (1910, 515). Sprague has criticized the clearinghouse for delaying the use of loan certificates until after the panic was well under way. He believed that issuing certificates as soon as the crisis struck the trusts would have calmed the market by allowing banks to accommodate their depositors more quickly. Aid would have gone directly to troubled banks and trusts, and the cumbersome device of money pools could have been avoided. Fewer loans would have been called in, thus reducing the tension at the stock exchange (Sprague 1910, 257-58).

The clearinghouse also restricted the convertibility of demand deposits into cash, an action that, like circulating loan certificates among the public, was illegal. The restriction, referred to as “suspension of payments,” increased the costs of doing business by making payments more difficult. Nevertheless, banks continued other business activities such as accepting deposits and clearing checks. The suspension of payments spread across the country through the system of correspondent banks. Although convertibility was widely restored by the beginning of January, in a few instances loan certificates and other substitutes for cash circulated as late as March 1908.

Why Were There Runs on Trust Companies?

There were three main types of financial intermediaries during the National Banking Era: national banks, state banks, and, later in the period, trust companies. It is not surprising that trust companies were the focal point of the panic. In New York, assets at the trust companies had grown phenomenally between 1890 and 1910, increasing 244 percent during the ten years ending in 1907, from $396.7 million to $1,394.0 million. In contrast, national bank assets had grown 97 percent, from $915.2 million to $1,800.0 million, while state-chartered bank assets had grown 82 percent, from $297.0 million to $541.0 million (Barnett 1911, 234-35). Thus the manner in which trust companies used their assets greatly affected the New York money market (Moen and Tallman 1992).

Trust companies were much less regulated than national or state banks in New York. In 1906 New York State instituted a requirement that trusts maintain reserves at 15 percent of deposits, but only 5 percent of deposits needed to be kept as currency in the vault. Before that time trusts simply kept whatever reserves they felt were necessary to conduct business. National bank notes were adequate as cash reserves for trusts, while national banks in central reserve cities like New York were required to keep a 25 percent reserve in the form of specie or legal tender (greenbacks or Treasury notes, but not national bank notes).
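The gap between the two reserve rules can be made concrete with a little arithmetic. The sketch below is a hypothetical illustration (the $100 million deposit figure is invented, not from the source), comparing the vault cash required under each rule:

```python
# Hypothetical illustration: vault cash implied by the reserve rules
# described above, for an invented $100 million in deposits.

def required_vault_cash(deposits, cash_ratio):
    """Cash that must be held against deposits at the given ratio."""
    return deposits * cash_ratio

deposits = 100.0  # millions of dollars (hypothetical)

# New York trusts after 1906: 15% total reserve, only 5% as vault currency.
trust_cash = required_vault_cash(deposits, 0.05)

# National banks in central reserve cities: 25% reserve in specie or legal tender.
national_bank_cash = required_vault_cash(deposits, 0.25)

print(f"Trust vault cash:       ${trust_cash:.1f} million")
print(f"National bank reserves: ${national_bank_cash:.1f} million")
print(f"Ratio: {national_bank_cash / trust_cash:.0f}x")
```

On these rules a national bank held five times the cash of a trust with the same deposits, which helps explain why the trusts described above were so exposed once depositors demanded currency.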

Trusts were originally rather conservative institutions, managing estates, holding securities, and taking deposits, but by 1907 trusts were performing most of the functions of banks except issuing bank notes. Many of the larger trusts specialized in underwriting security issues. Others wrote mortgages or invested directly in real estate, activities barred or limited for national banks. New York City trusts had a higher proportion of collateralized loans than did New York City national banks. Conventional banking wisdom associated collateralized loans with riskier investments and riskier borrowers. The trusts, therefore, had an asset portfolio that may have been riskier than those of other intermediaries.

National and private banks found the investment banking functions of trusts so useful that many of them gained direct or indirect control of a trust through holding companies or by placing their associates on a trust’s board of directors. In many instances a bank and its affiliated trust operated in the same building.

Trusts appear to have provided intermediary functions different from those of banks. Although the volume of deposits subject to check at trusts was similar to that at banks, trusts had many fewer checks (in number and value) written against their demand deposits than did banks. The check clearings of trusts were only about 7 percent of the volume of those at banks. Trusts were not then like commercial banks, whose deposits served as transactions balances for individual depositors and firms. National banks were part of a network of regional banks with correspondent relationships to expedite interregional transactions (James 1978, 40). Trusts were not part of the correspondent banking system, so their deposits were more local and less directly subject to the recurring seasonal strains on funds.

Conclusion

The New York Clearinghouse had detailed knowledge of the quality of bank assets in New York. A similar, formal organization of trust companies would have had current knowledge of the assets and liabilities of its member trusts. Such an organization could have assessed the situation at trust companies facing runs more readily than the ad hoc consortiums and money pools organized by Morgan. The ability of a clearinghouse to shield its members from runs on deposits was clearly demonstrated by the Chicago Clearinghouse in 1907: in Chicago the trust companies, similar in structure to those in New York, were members of the clearinghouse, were not singled out by depositors, and suffered virtually no runs. A lender of last resort covering all intermediaries in the payments system certainly adds stability to the system. J. P. Morgan and others, however, may have profited from earlier panics by lending money to otherwise desperate bankers; this is the popular view of their actions in 1907. The 1907 Panic, however, may have turned out to be far more severe than anticipated. Even if Morgan made money after the fact in 1907, the expectation of higher default risk made the prospect of lending in future panics unattractive. Perhaps the New York bankers realized this, leading them to abandon their role as de facto lenders of last resort and laying the groundwork for the establishment of the Federal Reserve System.

References:

Much of this article is based on a review article from the Federal Reserve Bank of Atlanta, although I have updated some of our economic interpretations of the panic, particularly on matters related to liquidity and solvency. The complete reference to the review article is:

Moen, Jon and Ellis Tallman. “Lessons from the Panic of 1907.” Federal Reserve Bank of Atlanta Economic Review 75 (May/June 1990): 2-13.

Other important references cited or used are:

Andrew, A. Piatt. “Substitutes for Cash in the Panic of 1907.” Quarterly Journal of Economics 23 (August 1908): 497-516.

Barnett, George E. State Banks and Trust Companies since the Passage of the National Bank Act. Washington, D.C.: U.S. Government Printing Office, 1911.

Calomiris, Charles and Gary Gorton. “The Origins of Bank Panics: Models, Facts, and Bank Regulations.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Carosso, Vincent P. The Morgans: Private International Bankers, 1854-1913. Cambridge, MA: Harvard University Press, 1987.

The Commercial and Financial Chronicle. Various issues from November 7, 1907 through January 8, 1908.

Diamond, Douglas W., and Philip H. Dybvig. “Bank Runs, Deposit Insurance, and Liquidity.” Journal of Political Economy 91 (June 1983): 401-19.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States: 1867-1960. Princeton, NJ: Princeton University Press, 1963.

Gorton, Gary. “Clearinghouses and the Origins of Central Banking in the United States.” Journal of Economic History 45 (June 1985): 277-84.

Jacklin, Charles J., and Sudipto Bhattacharya. “Distinguishing Panics and Information-Based Bank Runs: Welfare and Policy Implications.” Journal of Political Economy 96 (June 1988): 568-92.

James, John. Money and Capital Markets in Postbellum America. Princeton, NJ: Princeton University Press, 1978.

Moen, Jon, and Ellis W. Tallman. “The Bank Panic of 1907: The Role of the Trust Companies.” Journal of Economic History 52 (September 1992): 611-630.

Moen, Jon, and Ellis W. Tallman. “Clearinghouse Membership and Deposit Contraction during the Panic of 1907.” Journal of Economic History 60 (March 2000): 145-163.

Sprague, Oliver M.W. “The American Crisis of 1907.” The Economic Journal 18 (September 1908): 353-72.

Sprague, Oliver M.W. History of Crises under the National Banking System. National Monetary Commission. Washington, D.C.: U.S. Government Printing Office, 1910.

Tallman, Ellis W. “Some Unanswered Questions about Banking Panics.” Federal Reserve Bank of Atlanta Economic Review 73 (November/December 1988): 2-21.

Timberlake, Richard Henry. The Origins of Central Banking in the United States. Cambridge, MA: Harvard University Press, 1978.

Timberlake, Richard Henry. “The Central Banking Role of Clearinghouse Associations.” Journal of Money, Credit and Banking 16 (February 1984): 1-15.

Timberlake, Richard Henry. Monetary Policy in the United States. Chicago: University of Chicago Press, 1993.

Citation: Moen, Jon. “Panic of 1907.” EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-panic-of-1907/

Economic Histories of the Opium Trade

Siddharth Chandra, University of Pittsburgh

The history of opium has attracted the attention of historians for decades, in a way that the history of few other commodities has. Because much has already been written on the opium trade in various parts of the world (for a sampling, see the citations at the end of this article), this piece focuses on the history of the opium trade through the lens of the economic historian. In other words, it addresses the question “Why is opium of special interest to economic historians?” Following a brief background on the opium trade, the article discusses this question with a focus on Asia and with references to more detailed and case-specific sources.

Opium: A Brief Background

Opium is produced from the opium poppy. The primary narcotic agent in opium is morphine. The morphine-rich sap of the poppy is derived from incisions made in the bulbous portion of the flower. The harvesting of the sap is an extremely labor-intensive process. The sap is then boiled down (or gradually dried to about ten percent of its original water content) to make opium.

Opium belongs to the narcotic class of drugs, which also includes its modern derivatives such as morphine and heroin. To quote the United Nations Office on Drugs and Crime (formerly the UN Drug Control Program), “sought-after effects” include a “sense of well being by reducing tension, anxiety and depression; euphoria, in large doses warmth, contentment, relaxed detachment from emotional as well as physical distress,” and “relief from pain (analgesia).”1 “Long-term effects” include, among a host of other things, “rapid development of tolerance and physical and psychological dependence” and, in the case of “abrupt withdrawal,” “moderate to severe withdrawal syndrome which is generally comparable to a bout of influenza (with cramps, diarrhea, running nose, tremors, panic, chills and sweating, etc.).”2 Current research has begun to show how opium affects the human brain through neural pathways, and how the addict’s brain is different from that of the non-addict.

Key production centers of raw opium in the nineteenth and early twentieth centuries included China, India, the Levant (Eastern Mediterranean), and Persia. While Chinese-grown opium was used entirely for domestic consumption, raw opium from the other production centers was often exported to feed the growing worldwide demand for the drug during this period. By the early twentieth century, in some colonies, the processing of this raw opium had been taken over by the state. The Dutch, for example, invested in a state-of-the-art opium-processing factory in Batavia (now Jakarta) in the Netherlands East Indies (now Indonesia). Similarly, the British set up their own opium factories in India. Interestingly, the British colonial facilities (at Neemuch and Ghazipur in India) are still being used to produce opium, which is now legally exported to the United States and other countries for medicinal purposes; the morphine derived from this opium is used worldwide as a painkiller of last resort in patients, especially those who are terminally ill.

Opium and its derivatives have been and continue to be consumed in many forms. Historically, opium was eaten, drunk, or smoked. At present, in addition to these means of consumption, opium derivatives can also be injected, as in the case of heroin. Opium was also often mixed with other ingredients to create popular products. For example, tobacco was mixed with the drug to make madak, one of the most widely used forms of opium in parts of late-nineteenth century India.

Why Is Opium of Special Interest to Economic Historians?

What differentiates opium from other tradable commodities, such as rubber and sugar, is its highly addictive nature. Because of the physical and psychological dependence that it can create in significant numbers of its users, opium as a commodity possesses a potential for economic gain (especially for producers) and loss (especially for consumers) that surpasses that of most other commodities.

At least three broad themes dominate research on the economic history of opium. The first is the repeated use of opium in the accumulation of power and wealth, especially at the state level. The second is the clash between economic and ethical interests in determining the role of opium in society. The third is the (in)effectiveness of different regimes and drug control strategies in reducing the negative health and social consequences of widespread opium consumption, with its implications for the present-day management of the consumption of addictive substances in general and opium and its derivatives in particular.

Opium as an Instrument of State Power

While there are many examples of the use of opium as an instrument of state power, perhaps the two most well-known examples are the role of opium in trade relations, and the use of opium as a source of revenue for the state. Between 1856 and 1860, Britain fought China (in the Second Opium War) over the right to trade with China. The British victory ensured that European powers would have continued access to the Chinese market for opium. More importantly for Britain, the victory ensured that it would continue to sell the one good in China that had the potential to reduce or even eliminate its burgeoning trade deficit with China. In exchange for tea, silk, and other non- (or less-) addictive commodities, China would receive opium. A consequence of the Second Opium War was the gradual but significant increase in the prevalence of opium consumption in China. Not coincidentally, the British trade deficit with China also fell.

Across nineteenth and early twentieth century Asia, the use of opium to generate excise revenues for states, and especially for colonial powers, gradually became standard practice. Britain (in India and Malaya, for example), the Netherlands (in the Netherlands East Indies), Japan (in Taiwan), and France (in French Indochina) all used different forms of state intervention to ensure that a portion of the sizeable proceeds from the sale of opium ended up in state coffers. In some cases, the revenue accounted for well over ten percent of all state revenues. Table 1 shows the contribution of revenues from opium to the Netherlands Indies budget over the period 1914-1940. Because of the low cost of producing opium, for every Guilder of cost that the state incurred, it made close to four Guilders in profit.

Table 1
Contribution of the Opium Regie to the Government Budget in the Netherlands Indies

Year   Opium Revenue   Total Revenue   Opium % of Total   Opium Profits   Profit as % of Opium Revenue
1914 35.0 281.7 13.5 26.7 76
1915 32.6 309.7 11.2 25.2 77
1916 35.3 343.1 10.8 28.4 80
1917 38.2 360.1 11.4 30.4 80
1918 38.8 399.7 10.2 30.1 78
1919 42.5 543.1 8.2 33.2 78
1920 53.6 756.4 7.5 41.6 78
1921 53.3 791.8 7.1 42.1 79
1922 44.2 752.6 6.2 34.5 78
1923 37.6 650.4 6.1 30.1 80
1924 35.3 717.9 5.1 28.1 80
1925 36.6 753.8 5.2 28.7 78
1926 37.7 807.9 5.2 29.1 77
1927 40.6 779.1 5.7 31.4 77
1928 42.8 835.9 5.7 34.6 81
1929 40.9 848.5 5.3 32.7 80
1930 34.5 755.6 5.3 27.1 79
1931 25.3 652.0 4.6 19.0 75
1932 17.3 501.8 4.5 12.3 71
1933 12.7 460.6 3.7 8.6 68
1934 11.1 455.2 3.2 7.2* 65*
1935 9.5 466.7 2.6 6.1* 64*
1936 8.9 537.8 2.2 5.7* 64*
1937 11.5 575.4 2.5 7.7* 67*
1938 11.9 597.1 2.6 8.0* 67*
1939 11.5 663.4 1.7 8.6* 75*
1940 11.7 N.A. N.A. 8.5* 72*

Sources: For opium, the source data are Dutch East Indies Opiumregie, Verslag betreffende den Dienst der Opiumregie (Batavia: Landsdrukkerij, 1915-1933) and Dutch East Indies Opium- en Zoutregie, Verslag betreffende de Opium- en Zoutregie en de Zoutwinning (Batavia: Landsdrukkerij, 1934-1940). For total revenue, the source data are P. Creutzberg, Changing Economy in Indonesia: A Selection of Statistical Source Material from the Early Nineteenth Century up to 1940. Volume 2: Public Finance 1816-1939 (The Hague: Martinus Nijhoff, 1976), pp. 43-44. The latter source contains data only until 1939. This table was also published in Chandra (2000), p. 104.

In millions of current (i.e., not adjusted for inflation) Guilders.

*These figures are derived from the combined accounts of the Opium and Salt Regie. They were computed by subtracting from opium revenue all elements of cost which were totally or partially attributable to the opium section of the Opium and Salt Regie. The numbers, therefore, underestimate the profitability of opium.
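The profit claim in the text (“close to four Guilders in profit” per Guilder of cost) can be checked directly against Table 1, since cost is implied by revenue minus profit. A minimal sketch using three rows of the table (years chosen arbitrarily):

```python
# Checking the profit-per-Guilder-of-cost claim against Table 1.
# Figures are in millions of current Guilders, taken from the table above.

rows = {
    # year: (opium revenue, opium profits)
    1914: (35.0, 26.7),
    1916: (35.3, 28.4),
    1923: (37.6, 30.1),
}

for year, (revenue, profit) in rows.items():
    cost = revenue - profit          # cost implied by the table
    ratio = profit / cost            # Guilders of profit per Guilder of cost
    margin = 100 * profit / revenue  # should match the table's last column
    print(f"{year}: cost {cost:.1f}, profit/cost ratio {ratio:.1f}, margin {margin:.0f}%")
```

The ratios come out between roughly 3.2 and 4.1, consistent with the “close to four” characterization, and the recomputed margins match the table's final column.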

Economic vs. Ethical Interests

Because of its addictive nature and the relative insensitivity (inelasticity) of consumption to small or moderate fluctuations in its price, opium was an especially reliable source of revenue for governments in general and colonial governments in Asia in particular. The fact that it was addictive, and in many cases strongly so, however, raised ethical questions about its suitability as a target of taxation and, more broadly, as a legal commodity. Framed simplistically, the ethical questions were (i) should a state rely for revenue on the sale of a good that is demonstrably causing the physical and economic ruin of some of its subjects? and (ii) even if the state is not a direct financial beneficiary of the sale of opium, should it permit the use of such a substance by its subjects? By the early twentieth century, these ethical concerns had led to the development of a widespread anti-opium movement in Europe. This clash of ethical and economic interests led to lively debates both in Europe and in Asia, and a number of states moved to accommodate (or at least ostensibly accommodate) the ethical interests by instituting changes in opium regimes.
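The link between inelastic demand and reliable revenue can be shown with a short numerical sketch. The elasticity values below are invented for illustration only (see van Ours 1995 and Liu et al. 1999 for actual estimates for opium):

```python
# Illustrative only: how a price (or tax) increase affects total revenue
# under inelastic vs. elastic demand. Elasticity values are hypothetical.

def revenue_change(elasticity, price_change_pct):
    """Percent change in revenue when quantity falls by elasticity times
    the percent price change (a linear approximation of demand response)."""
    quantity_change_pct = elasticity * price_change_pct
    return (price_change_pct + quantity_change_pct
            + price_change_pct * quantity_change_pct / 100)

# Inelastic demand: a 10% price rise still raises revenue (about +6.7%).
gain = revenue_change(-0.3, 10.0)

# Elastic demand: the same 10% price rise lowers revenue (about -6.5%).
loss = revenue_change(-1.5, 10.0)

print(f"inelastic: {gain:+.1f}% revenue; elastic: {loss:+.1f}% revenue")
```

This is the sense in which opium was “an especially reliable source of revenue”: when demand is inelastic, raising the official price does not drive away enough consumption to offset the higher price.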

Opium Regimes

There is widespread agreement among historians that opium consumption increased worldwide (and in most cases at the country level as well) in the second half of the nineteenth century. As the ethical debate intensified in the early twentieth century, however, states voluntarily instituted changes in their opium regimes ostensibly aimed at reducing the opium problem in their colonies and at home. The Netherlands, for example, moved to take complete control of the manufacture and sale of opium across the Netherlands Indies. The Opium Regie, as the system came to be called, was modeled on the French system in French Indochina. Whether there was ever any intention to reduce the drug problem in the Netherlands Indies is questionable: in the first decade of the Regie's operation, opium sales increased substantially, netting the government enormous profits.

Most statistics do, however, show a marked decline (albeit in many cases not a steady one) in opium consumption between 1900 and 1936. These statistics have been used to argue that (i) the regime changes that were instituted were actually intended to reduce opium consumption and (ii) the regime changes were successful in combating the opium problem. In fact, the use of 1936 as the reference year is rather unfortunate. The effects of the Great Depression, which began in 1929, were still being felt as late as 1936, especially in the trade-oriented economies of Asia. Because of the precipitous drop in incomes during the Depression and the inflexibility of official opium prices in many economies, the ability of opium consumers to purchase legal opium fell drastically, contributing to a steep drop in the consumption of legal opium. Figure 1 demonstrates this phenomenon in the Netherlands East Indies. To the extent that this drop in legal opium consumption was not countered by increases in the consumption of contraband opium (which is not measured), the Great Depression deserves far more credit than it has received to date for the decline in opium consumption between 1900 and 1936.

Conclusion

The addictive nature of opium makes it a particularly interesting candidate for study by economic historians. The three broad areas of interest discussed above are of direct or indirect relevance to contemporary problems. In the area of drugs and state power, after opium and heroin were banned in the first half of the twentieth century, criminal syndicates in a number of instances took over the trade from the states that had once controlled it. Like their predecessors, they have since used it to accumulate vast fortunes and power; in some cases, they have even come to pose a credible threat to the states themselves. In the area of ethics vs. economics, debates continue to rage over government intervention in markets for intoxicating or addictive substances and activities, including hard drugs, marijuana, tobacco, alcohol, and gambling. Should these activities be legal? Should the government tax them, and if so, how heavily? Finally, in the area of regimes, which control regimes are likely to yield optimal outcomes for the management of addictive substances and activities?

The issues illuminated by the study of the economic history of opium remain hotly debated, and the consumption of opium and its derivatives continues unabated to this day. Because the drug's illegality makes accurate information about opium extremely difficult to come by under the present regime, historical data dating back to a time when opium was consumed legally and openly are of particular importance in the debates surrounding the history and management of the problem of addiction.

References

Brook, T. and B.T. Wakabayashi, editors. Opium Regimes: China, Britain, and Japan, 1839-1952. Berkeley: University of California Press, 2000.

Chandra, S. “What the Numbers Really Tell Us about the Decline of the Opium Regie.” Indonesia 70 (2000):101-23.

Chandra, S. “The Role of Government Policy in Increasing Drug Use: Java, 1875-1914.” Journal of Economic History 62 (2002): 1116-21.

Courtwright, D.T. Dark Paradise: Opium Addiction in America before 1940. Cambridge, MA: Harvard University Press, 1982.

Crothers, T.D. “Some New Studies of the Opium Disease.” Journal of the American Medical Association 18 (1892): 227-33.

Dick, H. “Oei Tiong Ham.” In The Rise and Fall of Revenue Farming: Business Elites and the Emergence of the Modern State in Southeast Asia, edited by J. Butcher and H. Dick, 272-80. New York: St. Martin’s Press, 1993.

Fauci, A.S., E. Braunwald, K.J. Isselbacher, and J.B. Martin, editors. Harrison’s Principles of Internal Medicine: Companion Handbook, fourteenth edition. New York: McGraw Hill, 1998.

Foster, A.L. “Prohibition as Superiority: Policing Opium in South-East Asia, 1898-1925.” International History Review 22 (2000): 253-73.

Hamilton, M. “Opioid FAQ.” 1994. http://leda.lycaeum.org/?ID=11312, as viewed in May 2004.

Liu, J.L., J.T. Liu, J.L. Hammitt, and S.Y. Chou. “The Price Elasticity of Opium in Taiwan, 1914-1942.” Journal of Health Economics 18 (1999): 795-810.

McCoy, A. The Politics of Heroin. New York: Lawrence Hill Books, 2003.

Moyers, B. “Moyers on Addiction. Science: The Hijacked Brain.” PBS Online and WNET/thirteen, 1998. http://www.thirteen.org/closetohome/science/. Accessed on May 4, 2004.

Reader’s Digest. Prescription and Over-the-Counter Drugs. Pleasantville, NY: Reader’s Digest, 1998.

Rush, J.R. “Social Control and Influence in Nineteenth Century Indonesia: Opium Farms and the Chinese of Java.” Indonesia 35 (1983): 53-64.

Rush, J.R. “Opium in Java: A Sinister Friend.” Journal of Asian Studies 44 (1985): 549-62.

Rush, J.R. Opium to Java: Revenue Farming and Chinese Enterprise in Colonial Indonesia, 1860-1910. Ithaca, NY: Cornell University Press, 1990.

Trocki, C.A. Opium and Empire: Chinese Society in Colonial Singapore, 1800-1910. Ithaca, NY: Cornell University Press, 1990.

United Nations Office on Drugs and Crime (UNODC). http://www.unodc.org (accessed May 4, 2004).

van Luijk, E.W., and J.C. van Ours. “The Effects of Government Policy on Opium Consumption: Java, 1875-1904.” Journal of Economic History 61 (2001): 1-18.

van Luijk, E.W., and J.C. van Ours. “The Effects of Government Policy on Drug Use Reconsidered.” Journal of Economic History 62 (2002): 1122-25.

van Ours, J.C. “The Price Elasticity of Hard Drugs: The Case of Opium in the Dutch East Indies, 1923-1938.” Journal of Political Economy 103 (1995): 261-79.


1 For a brief but informative description of opium and opiates, see the United Nations Office on Drugs and Crime at http://www.unodc.org (accessed May 4, 2004) and especially the files on opium under the link “Drug Abuse and Demand Reduction.”

2 UNODC, http://www.unodc.org/unodc/en/report_1998-10-01_1_page014.html (Accessed May 4, 2004). See also Crothers (1892) for medical accounts of the “opium disease.”

Preparation of this piece was assisted by grants from the Robert Wood Johnson Foundation Substance Abuse Policy Research Program and the National Institute on Drug Abuse (NIDA BSTART grant 1R03DA014322), National Institutes of Health.

Citation: Chandra, Siddharth. “Economic Histories of the Opium Trade.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/economic-histories-of-the-opium-trade/

The National Recovery Administration

Barbara Alexander, Charles River Associates

This article outlines the history of the National Recovery Administration, one of the most important and controversial agencies in Roosevelt’s New Deal. It discusses the agency’s “codes of fair competition” under which antitrust law exemptions could be granted in exchange for adoption of minimum wages, problems some industries encountered in their subsequent attempts to fix prices under the codes, and the macroeconomic effects of the program.

The early New Deal suspension of antitrust law under the National Recovery Administration (NRA) is surely one of the oddest episodes in American economic history. In its two-year life, the NRA oversaw the development of so-called “codes of fair competition” covering the larger part of the business landscape.1 The NRA generally is thought to have represented a political exchange whereby business gave up some of its rights over employees in exchange for permission to form cartels.2 Typically, labor is taken to have gotten the better part of the bargain, with the union movement extending its new powers after the Supreme Court struck down the NRA in 1935, while the business community faced a newly aggressive FTC by the end of the 1930s. While this characterization may be true in broad outline, close examination of the NRA reveals that matters may be somewhat more complicated than the interpretation of the program as a win for labor and a missed opportunity for business suggests.

Recent evaluations of the NRA have wended their way back to themes sounded during the early nineteen thirties, in particular, interrelationships between the so-called “trade practice” or cartelization provisions of the program and the grant of enhanced bargaining power to trade unions.3 On the microeconomic side, allowing unions to bargain for industry-wide wages may have facilitated cartelization in some industries. Meanwhile, macroeconomists have suggested that the Act and its progeny, especially labor measures such as the National Labor Relations Act, may bear more responsibility for the length and severity of the Great Depression than has been recognized heretofore.4 If this thesis holds up to closer scrutiny, the era may come to be seen as a primary example of the potential macroeconomic costs of shifts in political and economic power.

Kickoff Campaign and Blanket Codes

The NRA began operations in a burst of “ballyhoo” during the summer of 1933.5 The agency was formed upon passage of the National Industrial Recovery Act (NIRA) in mid-June. A kick-off campaign of parades and press events succeeded in getting over 2 million employers to sign a preliminary “blanket code” known as the “President’s Re-Employment Agreement” (PRA). Signatories of the PRA pledged to pay minimum wages ranging from around $12 to $15 per 40-hour week, depending on the size of town. Some 16 million workers were covered, out of a non-farm labor force of some 25 million. “Share-the-work” provisions called for limits of 35 to 40 hours per week for most employees.6

NRA Codes

Over the next year and a half, the blanket code was superseded by over 500 codes negotiated for individual industries. The NIRA provided that: “Upon the application to the President by one or more trade or industrial associations or groups, the President may approve a code or codes of fair competition for the trade or industry.”7 The carrot held out to induce participation was enticing: “any code … and any action complying with the provisions thereof . . . shall be exempt from the provisions of the antitrust laws of the United States.”8 Representatives of trade associations overran Washington, and by the time the NRA was abolished, hundreds of codes covering over three-quarters of private, non-farm employment had been approved.9 Code signatories were supposed to be allowed to use the NRA “Blue Eagle” as a symbol that “we do our part” only as long as they remained in compliance with code provisions.10

Disputes Arise

Almost 80 percent of the codes had provisions that were directed at establishment of price floors.11 The Act did not specifically authorize businesses to fix prices, and indeed it specified that “. . . codes are not designed to promote monopolies.”12 However, it is an understatement to say that there was never any consensus among firms, industries, and NRA officials as to precisely what was to be allowed as part of an acceptable code. Arguments about exactly what the NIRA allowed, and how the NRA should implement the Act, began during its drafting and continued unabated throughout its life. The arguments extended from the level of general principles to the smallest details of policy, which is unsurprising given that appropriate regulatory design depends entirely on precise regulatory objectives, and those objectives were embroiled in dispute from start to finish.

To choose just one out of many examples of such disputes: There was a debate within the NRA as to whether “code authorities” (industry governing bodies) should be allowed to use industry-wide or “representative” cost data to define a price floor based on “lowest reasonable cost.” Most economists would understand this type of rule as a device that would facilitate monopoly pricing. However, a charitable interpretation of the views of administration proponents is that they had some sort of “soft competition” in mind. That is, they wished to develop and allow the use of mechanisms that would extend to more fragmented industries a type of peaceful coexistence more commonly associated with oligopoly. Those NRA supporters of the representative-cost-based price floor imagined that a range of prices would emerge if such a floor were to be set, whereas detractors believed that “the minimum would become the maximum,” that is, the floor would simply be a cartel price, constraining competition across all firms in an industry.13
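The detractors’ fear that “the minimum would become the maximum” can be sketched with a toy example (all cost figures hypothetical): a floor computed from a representative industry-wide cost binds exactly those firms whose own costs are lower, pushing the industry toward a single, cartel-like price.

```python
# Toy illustration (hypothetical numbers) of a "lowest reasonable cost"
# price floor computed from representative industry-wide cost data.

firm_costs = [8.0, 10.0, 12.0]  # unit costs of three hypothetical firms

# Floor based on a representative (here, average) industry cost.
floor = sum(firm_costs) / len(firm_costs)

for cost in firm_costs:
    constrained_price = max(cost, floor)  # no firm may price below the floor
    binds = cost < floor
    print(f"cost {cost:4.1f}: must charge at least {constrained_price:4.1f}"
          f" ({'floor binds' if binds else 'floor slack'})")
```

The low-cost firm, which could profitably undercut its rivals, is barred from doing so; if firms then cluster at the floor, the “range of prices” the rule’s proponents imagined collapses toward the single cartel price its detractors predicted.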

Price Floors

While a rule allowing emergency price floors based on “lowest reasonable cost” was eventually approved, there was no coherent NRA program behind it.14 Indeed, the NRA and code authorities often operated at cross-purposes. At the same time that some officials of the NRA arguably took actions to promote softened competition, some in industry tried to implement measures more likely to support hard-core cartels, even when they thereby reduced the chance of soft competition should collusion fail. For example, with the partial support of the NRA, many code authorities moved to standardize products, shutting off product differentiation as an arena of potential rivalry, in spite of its role as one of the strongest mechanisms that might soften price competition.15 Of course if one is looking to run a naked price-fixing scheme, it is helpful to eliminate product differentiation as an avenue for cost-raising, profit-eroding rivalry. An industry push for standardization can thus be seen as a way of supporting hard-core cartelization, while less enthusiasm on the part of some administration officials may have reflected an understanding, however intuitive, that socially more desirable soft competition required that avenues for product differentiation be left open.

National Recovery Review Board

According to some critical observers then and later, the codes did lead to an unsurprising sort of “golden age” of cartelization. The National Recovery Review Board, led by an outraged Clarence Darrow (of Scopes “monkey trial” fame), concluded in May of 1934 that “in certain industries monopolistic practices existed.”16 While there are legitimate examples of every variety of cartelization occurring under the NRA, many contemporaneous and subsequent assessments of Darrow’s work dismiss the Board’s “analysis” as hopelessly biased. Thus, although its conclusions are interesting as a matter of political economy, it is far from clear that the Board carried out any dispassionate inventory of conditions across industries, much less a real weighing of evidence.17

Compliance Crisis

In contrast to Darrow’s perspective, other commentators focus on the “compliance crisis” that erupted within a few months of passage of the NIRA.18 Many industries were faced with “chiselers” who refused to respect code pricing rules. Firms that attempted to uphold code prices in the face of defection lost both market share and respect for the NRA.

NRA state compliance offices had recorded over 30,000 “trade practice” complaints by early 1935.19 However, the compliance program was characterized by “a marked timidity on the part of NRA enforcement officials.”20 This timidity was fatal to the program, since a firm that attempts monopoly pricing without parallel action from its competitors can easily fare worse than under the most bare-knuckled competition. NRA hesitancy came about as a result of doubts about whether a vigorous enforcement effort would withstand constitutional challenge, a not-unrelated lack of support from the Department of Justice, public antipathy for enforcement actions aimed at forcing sellers to charge higher prices, and unabating internal NRA disputes about the advisability of the price-fixing core of the trade practice program.21 Consequently, by mid-1934, firms disinclined to respect code pricing rules were ignoring them. By that point, then, contrary to the initial expectations of many code signatories, the new antitrust regime represented only permission to form voluntary cartelization agreements, not the advent of government-enforced cartels. Even there, participants had to be discreet, so as not to run afoul of the antimonopoly language of the Act.

It is still far from clear how much market power was conferred by the NRA’s loosening of antitrust constraints. Of course, modern observers of the alternating successes and failures of cartels such as OPEC will not be surprised that the NRA program led to mixed results. In the absence of government enforcement, the program simply amounted to de facto legalization of self-enforcing cartels. With respect to the ease of collusion, economic theory is clear only on the point that self-enforceability is an open question; self-interest may lead to either breakdown of agreements or success at sustaining them.

Conflicts between Large and Small Firms

Some part of the difficulties encountered by NRA cartels may have had roots in a progressive mandate to offer special protection to the “little guy.” The NIRA had specified that acceptable codes of fair competition must not “eliminate or oppress small enterprises,”22 and that “any organization availing itself of the benefits of this title shall be truly representative of the trade or industry . . . Any organization violating … shall cease to be entitled to the benefits of this title.”23 Majority rule provisions were exceedingly common in codes, and were most likely a reflection of this statutory mandate. The concern for small enterprise had strong progressive roots.24 Justice Brandeis’s well-known antipathy for large-scale enterprise and concentration of economic power reflected a widespread and long-standing debate about the legitimate goals of the American experiment.

In addition to evaluating monopolization under the codes, the Darrow board had been charged with assessing the impact of the NRA on small business. Its conclusion was that “in certain industries small enterprises were oppressed.” Again however, as with his review of monopolization, Darrow may have seen only what he was predisposed to see. A number of NRA “code histories” detail conflicts within industries in which small, higher-cost producers sought to use majority rule provisions to support pricing at levels above those desired by larger, lower-cost producers. In the absence of effective enforcement from the government, such prices were doomed to break down, triggering repeated price wars in some industries.25

By 1935, there was understandable bitterness about what many businesses viewed as the lost promise of the NRA. Undoubtedly, the bitterness was exacerbated by the fact that the NRA required higher wages while failing to deliver the tools needed for effective cartelization. However, it is not entirely clear that everyone in the business community felt that the labor provisions of the Act were undesirable.26

Labor and Employment Issues

By their nature, market economies give rise to surplus-eroding rivalry among those who would be better off collectively if they could only act in concert. NRA codes of fair competition, specifying agreements on pricing and terms of employment, arose from a perceived confluence of interests among representatives of “business,” “labor,” and “the public” in muting that rivalry. Many proponents of the NIRA held that competitive pressures on business had led to downward pressure on wages, which in turn caused low consumption, leading to greater pressure on business, and so on. Allowing workers to organize and bargain collectively, while their employers pledged to one another not to sell below cost, was identified as a way to arrest harmful deflationary forces. Knowledge that one’s rivals would also be forced to pay “code wages” had some potential for aiding cartel survival. Thus the rationale for NRA wage supports at the microeconomic level potentially dovetailed with the macroeconomic theory by which higher wages were held to support higher consumption and, in turn, higher prices.

Labor provisions of the NIRA appeared in Section 7: “. . . employees shall have the right to organize and bargain collectively through representatives of their own choosing … employers shall comply with the maximum hours of labor, minimum rates of pay, and other conditions of employment…” 27 Each “code of fair competition” had to include labor provisions acceptable to the National Recovery Administration, developed during a process of negotiations, hearings, and review. Thus, in order to obtain the shield against antitrust prosecution for their “trade practices” offered by an approved code, firms had to make significant concessions to workers.

The NRA is generally judged to have been a success for labor and a miserable failure for business. However, evaluation is complicated to the extent that labor could not have achieved gains with respect to collective bargaining rights over wages and working conditions, had those rights not been more or less willingly granted by employers operating under the belief that stabilization of labor costs would facilitate cartelization. The labor provisions may have indeed helped some industries as well as helping workers, and for firms in such industries, the NRA cannot have been judged a failure. Moreover, while some businesses may have found the Act beneficial, because labor cost stability or freedom to negotiate with rivals enhanced their ability to cooperate on price, it is not entirely obvious that workers as a class gained as much as is sometimes contended.

The NRA did help solidify new and important norms regarding child labor, maximum hours, and other conditions of employment; it will never be known if the same progress could have been made had not industry been more or less hornswoggled into giving ground, using the antitrust laws as bait. Whatever the long-term effects of the NRA on worker welfare, the short-term gains for labor associated with higher wages were questionable. While those workers who managed to stay employed throughout the nineteen thirties benefited from higher wages, to the extent that workers were also consumers, and often unemployed consumers at that, or even potential entrepreneurs, they may have been better off without the NRA.

The issue is far from settled. Ben Bernanke and Martin Parkinson examine the economic growth that occurred during the New Deal in spite of higher wages and suggest “part of the answer may be that the higher wages ‘paid for themselves’ through increased productivity of labor. Probably more important, though, is the observation that with imperfectly competitive product markets, output depends on aggregate demand as well as the real wage. Maybe Herbert Hoover and Henry Ford were right: Higher real wages may have paid for themselves in the broader sense that their positive effect on aggregate demand compensated for their tendency to raise cost.”28 However, Christina Romer establishes a close connection between NRA programs and the failure of wages and prices to adjust to high unemployment levels. In her view, “By preventing the large negative deviations of output from trend in the mid-1930s from exerting deflationary pressure, [the NRA] prevented the economy’s self-correction mechanism from working.” 29

Aftermath of Supreme Court’s Ruling in Schechter Case

The Supreme Court struck down the NRA on May 27, 1935; the case was a dispute over violations of labor provisions of the “Live Poultry Code” allegedly perpetrated by the Schechter Poultry Corporation. The Court held the code to be invalid on grounds of “attempted delegation of legislative power and the attempted regulation of intrastate transactions which affect interstate commerce only indirectly.”30 There were to be no more grand bargains between business and labor under the New Deal.

Riven by divergent agendas rooted in industry- and firm-specific technology and demand, “business” was never able to speak with even the tenuous degree of unity achieved by workers. Following the abortive attempt to get the government to enforce cartels, firms and industries went their own ways, using a variety of strategies to enhance their situations. A number of sectors did succeed in getting passage of “little NRAs” with mechanisms tailored to mute competition in their particular circumstances. These mechanisms included the Robinson-Patman Act, aimed at strengthening traditional retailers against the ability of chain stores to buy at lower prices; the Guffey Acts, in which high-cost bituminous coal operators and coal miners sought protection from the competition of lower-cost operators; and the Motor Carrier Act, in which high-cost incumbent truckers obtained protection against new entrants.31

On-going macroeconomic analysis suggests that the general public interest may have been poorly served by the experiment of the NRA. Like many macroeconomic theories, the validity of the underconsumption scenario that was put forth in support of the program depended on the strength and timing of the operation of its various mechanisms. Increasingly it appears that the NRA set off inflationary forces thought by some to be desirable at the time, but that in fact had depressing effects on demand for labor and on output. Pure monopolistic deadweight losses probably were less important than higher wage costs (although there has not been any close examination of inefficiencies that may have resulted from the NRA’s attempt to protect small higher-cost producers). The strength of any mitigating effects on aggregate demand remains to be established.

1 Leverett Lyon, P. Homan, L. Lorwin, G. Terborgh, C. Dearing, L. Marshall, The National Recovery Administration: An Analysis and Appraisal, Washington: Brookings Institution, 1935, p. 313, footnote 9.

2 See, for example, Charles Frederick Roos, NRA Economic Planning, Colorado Springs: Cowles Commission, 1935, p. 343.

3See, for example, Colin Gordon, New Deals: Business, Labor, and Politics in America, 1920-1935, New York: Cambridge University Press, 1993, especially chapter 5.

4 Christina D. Romer, “Why Did Prices Rise in the 1930s?” Journal of Economic History 59, no. 1 (1999): 167-199; Michael Weinstein, Recovery and Redistribution under the NIRA, Amsterdam: North Holland, 1980; and Harold L. Cole and Lee E. Ohanian, “New Deal Policies and the Persistence of the Great Depression,” Working Paper 597, Federal Reserve Bank of Minneapolis, February 2001. But also see Ben Bernanke and Martin Parkinson, “Unemployment, Inflation and Wages in the American Depression: Are There Lessons for Europe?” American Economic Review: Papers and Proceedings 79, no. 2 (1989): 210-214.

5 See, for example, Donald Brand, Corporatism and the Rule of Law: A Study of the National Recovery Administration, Ithaca: Cornell University Press, 1988, p. 94.

6 See, for example, Roos, op. cit., pp. 77, 92.

7 Section 3(a) of The National Industrial Recovery Act, reprinted at p. 478 of Roos, op. cit.

8 Section 5 of The National Industrial Recovery Act, reprinted at p. 483 of Roos, op. cit. Note though, that the legal status of actions taken during the NRA era was never clear; Roos points out that “…President Roosevelt signed an executive order on January 20, 1934, providing that any complainant of monopolistic practices … could press it before the Federal Trade Commission or request the assistance of the Department of Justice. And, on the same date, Donald Richberg issued a supplementary statement which said that the provisions of the anti-trust laws were still in effect and that the NRA would not tolerate monopolistic practices.” (Roos, op. cit. p. 376.)

9 Lyon, op. cit., p. 307, cited at p. 52 in Cole and Ohanian, op. cit.

10 Roos, op. cit., p. 75; and Blackwell Smith, My Imprint on the Sands of Time: The Life of a New Dealer, Vantage Press, New York, p. 109.

11 Lyon, op. cit., p. 570.

12 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

13 Roos, op. cit., at pp. 254-259. Charles Roos comments that “Leon Henderson and Blackwell Smith, in particular, became intrigued with a notion that competition could be set up within limits and that in this way wide price variations tending to demoralize an industry could be prevented.”

14 Lyon, et al., op. cit., p. 605.

15 Smith, Assistant Counsel of the NRA (per Roos, op. cit., p. 254), has the following to say about standardization: “One of the more controversial subjects, which we didn’t get into too deeply, except to draw guidelines, was standardization.” Smith goes on to discuss the obvious need to standardize rail track gauges, plumbing fittings, and the like, but concludes, “Industry on the whole wanted more standardization than we could go with.” (Blackwell Smith, op. cit., pp. 106-7.) One must not go overboard looking for coherence among the various positions espoused by NRA administrators; along these lines it is worth remembering Smith’s statement some 60 years later: “Business’s reaction to my policy [Smith was speaking generally here of his collective proposals] to some extent was hostile. They wished that the codes were not as strict as I wanted them to be. Also, there was criticism from the liberal/labor side to the effect that the codes were more in favor of business than they should have been. I said, ‘We are guided by a squealometer. We tune policy until the squeals are the same pitch from both sides.'” (Smith, op. cit., p. 108.)

16 Quoted at p. 378 of Roos, op. cit.

17 Brand, op. cit. at pp. 159-60 cites in agreement extremely critical conclusions by Roos (op. cit. at p. 409) and Arthur Schlesinger, The Age of Roosevelt: The Coming of the New Deal, Boston: Houghton Mifflin, 1959, p. 133.

18 Roos acknowledges a breakdown by spring of 1934: “By March, 1934 something was urgently needed to encourage industry to observe code provisions; business support for the NRA had decreased materially and serious compliance difficulties had arisen.” (Roos, op. cit., at p. 318.) Brand dates the start of the compliance crisis much earlier, in the fall of 1933. (Brand, op. cit., p. 103.)

19 Lyon, op. cit., p. 264.

20 Lyon, op. cit., p. 268.

21 Lyon, op. cit., pp. 268-272. See also Peter H. Irons, The New Deal Lawyers, Princeton: Princeton University Press, 1982.

22 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

23 Section 6(b) of The National Industrial Recovery Act, op. cit.

24 Brand, op. cit.

25 Barbara Alexander and Gary D. Libecap, “The Effect of Cost Heterogeneity in the Success and Failure of the New Deal’s Agricultural and Industrial Programs,” Explorations in Economic History, 37 (2000), pp. 370-400.

26 Gordon, op. cit.

27 Section 7 of the National Industrial Recovery Act, reprinted at pp. 484-5 of Roos, op. cit.

28 Bernanke and Parkinson, op. cit., p. 214.

29 Romer, op. cit., p. 197.

30 Supreme Court of the United States, Nos. 854 and 864, October term, 1934, (decision issued May 27, 1935). Reprinted in Roos, op. cit., p. 580.

31 Ellis W. Hawley, The New Deal and the Problem of Monopoly: A Study in Economic Ambivalence, Princeton: Princeton University Press, 1966, p.

Citation: Alexander, Barbara. “National Recovery Administration”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-national-recovery-administration/

The Economic History of Norway

Ola Honningdal Grytten, Norwegian School of Economics and Business Administration

Overview

Norway, with its population of 4.6 million on the northern flank of Europe, is today one of the wealthiest nations in the world, measured both as GDP per capita and in capital stock. On the United Nations Human Development Index, Norway has ranked among the top three countries for several years, and in some years first. Huge stocks of natural resources combined with a skilled labor force and the adoption of new technology made Norway a prosperous country during the nineteenth and twentieth centuries.

Table 1 shows rates of growth in the Norwegian economy from 1830 to the present using inflation-adjusted gross domestic product (GDP). This article splits the economic history of Norway into two major phases — before and after the nation gained its independence in 1814.

Table 1
Phases of Growth in the Real Gross Domestic Product of Norway, 1830-2003

(annual growth rates as percentages)

Year GDP GDP per capita
1830-1843 1.91 0.86
1843-1875 2.68 1.59
1875-1914 2.02 1.21
1914-1945 2.28 1.55
1945-1973 4.73 3.81
1973-2003 3.28 2.79
1830-2003 2.83 2.00

Source: Grytten (2004b)
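The phase rates in Table 1 are compound annual growth rates, i.e. the constant yearly rate that carries GDP from its value at the start of a phase to its value at the end. As a quick illustration (a sketch, not the source's actual computation; the endpoint values below are hypothetical, chosen only to reproduce the 1830-2003 row), the rate can be computed as:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two endpoints, in percent."""
    return ((end_value / start_value) ** (1.0 / years) - 1.0) * 100.0

# Hypothetical endpoints constructed so the 173-year rate matches the
# 1830-2003 row of Table 1 (about 2.83 percent per year).
rate = cagr(100.0, 100.0 * 1.0283 ** 173, 173)
print(round(rate, 2))  # → 2.83
```

Because the rate is a geometric average, short sub-phases with very different yearly growth (such as 1914-1945) can still show a moderate figure over the whole phase.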

Before Independence

The Norwegian economy was traditionally based on local farming communities combined with other industries, chiefly fishing, hunting, wood and timber, along with a merchant fleet engaged in domestic and international trade. Due to topography and climatic conditions, the communities in the north and the west were more dependent on fish and foreign trade than the communities in the south and east, which relied mainly on agriculture. Agricultural output, fish catches and wars drove the swings in the economy prior to independence. This is reflected in Figure 1, which reports a consumer price index for Norway from 1516 to the present.

The peaks in this figure mark the sixteenth-century Price Revolution (1530s to 1590s), the Thirty Years War (1618-1648), the Great Nordic War (1700-1721), the Napoleonic Wars (1800-1815), the only period of hyperinflation in Norway — World War I (1914-1918) — and the stagflation period, i.e. high rates of inflation combined with a slowdown in production, in the 1970s and early 1980s.

Figure 1
Consumer Price Index for Norway, 1516-2003 (1850 = 100).

Figure 1
Source: Grytten (2004a)

During the last decades of the eighteenth century the Norwegian economy bloomed in a first era of liberalism. Foreign trade in fish and timber had been important to the Norwegian economy for centuries, and now the merchant fleet was growing rapidly. Bergen, on the west coast, was the major city, with a Hanseatic office and one of the Nordic countries’ largest ports for domestic and foreign trade.

When Norway gained its independence from Denmark in 1814, after a union lasting 417 years, it was a typical egalitarian country with a high degree of self-sufficiency in agriculture, fisheries and hunting. According to the population censuses of 1801 and 1815, more than ninety percent of the population of 0.9 million lived in rural areas, mostly on small farms.

After Independence (1814)

Figure 2 shows the annual development of GDP by expenditure (in fixed 2000 prices) from 1830 to 2003. The series reveals, with few exceptions, steady growth and few large fluctuations. Economic growth as a more or less continuous process started in the 1840s, though the process slowed during the last three decades of the nineteenth century. The years 1914-1945 were more volatile than any other period in question, followed by an impressive and steady rate of growth until the mid-1970s and slower growth thereafter.

Figure 2
Gross Domestic Product for Norway by Expenditure Category
(in 2000 Norwegian Kroner)

Figure 2
Source: Grytten (2004b)

Stagnation and Institution Building, 1814-1843

The newborn state lacked its own institutions, industrial entrepreneurs and domestic capital. However, due to its huge stocks of natural resources and its geographical closeness to the sea and to the United Kingdom, the new state, linked to Sweden in a loose royal union, seized its opportunities after some decades. By 1870 it had become a relatively wealthy nation. Measured in GDP per capita, Norway was well above the European average, in the middle of the West European countries, and, in fact, well above Sweden.

During the first decades after its independence from Denmark, the new state struggled with the international recession after the Napoleonic wars, deflationary monetary policy, and protectionism from the UK.

The Central Bank of Norway was founded in 1816, and a national currency, the spesidaler, pegged to silver, was introduced. The daler depreciated heavily during the first troubled years of recession in the 1820s.

The Great Boom, 1843-1875

After the Norwegian spesidaler regained its par value against silver in 1842, Norway saw a period of significant economic growth lasting up to the mid-1870s. This impressive growth was matched by only a few other countries. The growth process was very much driven by high productivity growth in agriculture and the success of the foreign sector. The adoption of new structures and technology, along with the substitution of livestock for arable production, made labor productivity in agriculture increase by about 150 percent between 1835 and 1910. Exports of timber, fish and in particular maritime services achieved high growth rates. In fact, Norway became a major power in shipping services during this period, accounting for about seven percent of the world merchant fleet in 1875. Norwegian sailing vessels carried international goods all over the world at low prices.

The success of the Norwegian foreign sector can be explained by a number of factors. Liberalization of world trade and high international demand secured a market for Norwegian goods and services. In addition, Norway had vast stocks of fish and timber, along with maritime skills. According to recent calculations, GDP per capita grew at an annual rate of 1.6 percent from 1843 to 1876, well above the European average, while exports grew at an annual rate of 4.8 percent. The first modern large-scale manufacturing industry in Norway emerged in the 1840s, when textile plants and mechanized industry were established. A second wave of industrialization took place in the 1860s and 1870s. Following the rapid productivity growth in agriculture, the food processing and dairy industries showed high growth in this period.

During this great boom, capital was imported mainly from Britain, but also from Sweden, Denmark and Germany, the four most important Norwegian trading partners at the time. Human capital mattered as well: in 1536 the King of Denmark and Norway had chosen the Lutheran faith as the state religion, and in consequence of the Reformation reading became compulsory, so Norway acquired a generally skilled and independent labor force. The constitution of 1814 also cleared the way for liberalism and democracy, and the puritan revivals of the nineteenth century created a business environment that fostered entrepreneurship, domestic capital formation and a productive labor force. In the western and southern parts of the country these puritan movements are still strong, both in daily life and in business.

Relative Stagnation with Industrialization, 1875-1914

Norway’s economy was hit hard during the “depression” from the mid-1870s to the early 1890s. GDP stagnated, particularly during the 1880s, and prices fell until 1896. This stagnation is mirrored in the large-scale emigration from Norway to North America in the 1880s. At its peak in 1882, as many as 28,804 persons, 1.5 percent of the population, left the country. All in all, 250,000 emigrated in the period 1879-1893, equal to 60 percent of the birth surplus. Only Ireland had a higher emigration rate than Norway between 1836 and 1930, when 860,000 Norwegians left the country.

The long slowdown can largely be explained by Norway’s dependence on the international economy, and in particular on the United Kingdom, which experienced slower economic growth than the other major economies of the time. As a result of the international slowdown, Norwegian exports contracted in several years, though they expanded in others. A second reason for the slowdown was the introduction of the international gold standard. Norway adopted gold in January 1874, and due to its trade deficit and lack of gold and capital, the country experienced a huge contraction in gold reserves and in the money stock. The deflationary effect strangled the economy. Going onto the gold standard also caused the Norwegian currency, the krone, to appreciate, as gold became relatively more expensive compared to silver. A third explanation of Norway’s economic problems in the 1880s is the transformation from sailing to steam vessels. By 1875 Norway had the fourth-largest merchant fleet in the world, but due to a lack of capital and technological skills, the transformation from sail to steam was slow. Norwegian ship owners found a niche in cheap second-hand sailing vessels; their market was diminishing, however, and when the Norwegian steam fleet finally surpassed the sailing fleet in 1907, Norway was no longer a major maritime power.

A short boom occurred from the early 1890s to 1899. Then a crash in the Norwegian building industry led to a major financial crash and stagnation in GDP per capita from 1900 to 1905. Thus from the mid-1870s until 1905 Norway performed relatively poorly. Measured in GDP per capita, Norway, like Britain, experienced a significant stagnation relative to most western economies.

After 1905, when Norway gained full independence from Sweden, a heavy wave of industrialization took place. In the 1890s the fish preserving, cellulose and paper industries had started to grow rapidly. From 1905, when Norsk Hydro was established, manufacturing connected to hydroelectric power took off. It is argued, quite convincingly, that if there was an industrial breakthrough in Norway, it took place during the years 1905-1920. However, the primary sector, with its labor-intensive agriculture and increasingly capital-intensive fisheries, was still the largest sector.

Crises and Growth, 1914-1945

Officially Norway was neutral during World War I, but in economic terms the government clearly took the side of the British and their allies. Through several treaties Norway gave privileges to the Allied powers, which in turn protected the Norwegian merchant fleet. During the war’s first years Norwegian ship owners profited from the war, and the economy boomed. From 1917, when Germany began unrestricted submarine warfare against non-friendly vessels, Norway took heavy losses, and a recession replaced the boom.

Norway suspended gold redemption in August 1914, and due to inflationary monetary policy during the war and in the first couple of years afterward, demand was very high. When the war came to an end, this excess demand was met by a positive shift in supply. Thus Norway, like other Western countries, experienced a significant boom in the economy from the spring of 1919 to the early autumn of 1920. The boom was accompanied by high inflation, trade deficits, currency depreciation and an overheated economy.

The international postwar recession beginning in autumn 1920 hit Norway more severely than most other countries. In 1921 GDP per capita fell by eleven percent, a decline exceeded only by that of the United Kingdom. There are two major reasons for the devastating effect of the post-war recession. In the first place, as a small open economy, Norway was more sensitive to international recessions than most other countries, particularly because the recession hit the country’s most important trading partners, the United Kingdom and Sweden, so hard. Secondly, the combination of a strong and mostly pro-cyclical inflationary monetary policy from 1914 to 1920, followed by a hard deflationary policy, made the crisis worse (Figure 3).

Figure 3
Money Aggregates for Norway, 1910-1930

Figure 3
Source: Klovland (2004a)

In fact, Norway pursued a long, though not consistently maintained, deflationary monetary policy aimed at restoring the par value of the krone (NOK) up to May 1928. In consequence, another recession hit the economy during the middle of the 1920s, and Norway was one of the worst performers in the western world in that decade. This can best be seen in the number of bankruptcies, a huge financial crisis and mass unemployment. Bank losses amounted to seven percent of GDP in 1923. Total unemployment rose from about one percent in 1919 to more than eight percent in 1926 and 1927; in manufacturing it reached more than 18 percent in the same years.

Despite a rapid boom and success in the whaling industry and shipping services, the country never saw a convincing recovery before the Great Depression hit Europe in the late summer of 1930. The worst year for Norway was 1931, when GDP per capita fell by 8.4 percent. This, however, was due not only to the international crisis but also to a massive and violent labor conflict that year. According to the implicit GDP deflator, prices fell by more than 63 percent from 1920 to 1933.

All in all, however, the depression of the 1930s was milder and shorter in Norway than in most western countries. This was partly due to the deflationary monetary policy of the 1920s, which had forced Norwegian companies to become more efficient in order to survive. Probably more important, though, was that Norway left gold as early as September 27, 1931, only a week after the United Kingdom. The countries that left gold early, and could thereby pursue a more inflationary monetary policy, were the best performers in the 1930s; among them were Norway and its most important trading partners, the United Kingdom and Sweden.

During the recovery period, Norway in particular saw growth in manufacturing output, exports and import substitution. This can to a large extent be explained by currency depreciation. Also, when the international merchant fleet contracted during the drop in international trade, the Norwegian fleet grew rapidly, as Norwegian ship owners were pioneers in the transformation from steam to diesel engines, tramp to line freights and into a new expanding niche: oil tankers.

The primary sector was still the largest in the economy during the interwar years. Both fisheries and agriculture struggled with overproduction problems, however. These were dealt with by introducing market controls and cartels, partly controlled by the industries themselves and partly by the government.

The business cycle reached its trough in late 1932. Despite a relatively rapid recovery and significant growth in both GDP and employment, unemployment stayed high, reaching 10-11 percent on an annual basis from 1931 to 1933 (Figure 4).

Figure 4
Unemployment Rate and Public Relief Work as a Percent of the Work Force, 1919-1939
Source: Hodne and Grytten (2002)

The standard of living deteriorated in the primary sector, among those employed in domestic services, and for the underemployed and unemployed and their households. However, due to the strong deflation, which made consumer prices fall by more than 50 percent from autumn 1920 to summer 1933, employees in manufacturing, construction and crafts experienced an increase in real wages. Unemployment stayed persistently high due to huge growth in labor supply, a result of the immigration restrictions imposed by North American countries from the 1920s onwards.

Denmark and Norway both fell victim to a German surprise attack on April 9, 1940. After two months of fighting, the Allied troops in Norway surrendered on June 7, and the Norwegian royal family and government escaped to Britain.

From then until the end of the war there were two Norwegian economies: the domestic German-controlled economy and the foreign Norwegian- and Allied-controlled economy. The foreign economy was primarily established on the basis of the huge Norwegian merchant fleet, which was once again among the largest in the world, accounting for more than seven percent of total world tonnage. Ninety percent of this floating capital escaped the Germans. The ships were united into one state-controlled company, Nortraship, which earned money to finance the foreign economy. The domestic economy, however, struggled with a significant fall in production, inflationary pressure and rationing of important goods, which three million Norwegians had to share with the 400,000 Germans occupying the country.

Economic Planning and Growth, 1945-1973

After the war the challenge was to reconstruct the economy and re-establish political and economic order. The Labor Party, in office since 1935, seized the opportunity to establish a strict social democratic rule, with a growing public sector and widespread centralized economic planning. Norway at first declined the U.S. offer of financial aid after the war. However, for lack of hard currency, it accepted the Marshall Plan. By receiving 400 million dollars between 1948 and 1952, Norway was one of the largest per capita recipients.

As part of the reconstruction effort Norway joined the Bretton Woods system, GATT, the IMF and the World Bank. Norway also chose to become a member of NATO and the United Nations, and in 1960 it joined the European Free Trade Association (EFTA). In 1958 Norway made the krone convertible to the U.S. dollar, as many other western countries did with their currencies.

The years from 1950 to 1973 are often called the golden era of the Norwegian economy. GDP per capita grew at an annual rate of 3.3 percent, foreign trade grew even faster, unemployment barely existed and the inflation rate was stable. This has often been credited to the large public sector and to sound economic planning, and the Nordic model, with its huge public sector, has been called a success in this period. A closer look, however, shows that the Norwegian growth rate in the period was lower than that of most western nations; the same is true for Sweden and Denmark. The Nordic model delivered social security and evenly distributed wealth, but it did not necessarily deliver very high economic growth.

Figure 5
Public Sector as a Percent of GDP, 1900-1990
Source: Hodne and Grytten (2002)

Petroleum Economy and Neoliberalism, 1973 to the Present

After the Bretton Woods system fell apart (between August 1971 and March 1973) and the oil price shock hit in autumn 1973, most developed economies entered a period of prolonged recession and slow growth. In 1969 Phillips Petroleum had discovered petroleum resources at the Ekofisk field, which was defined as part of the Norwegian continental shelf. This enabled Norway to run a countercyclical fiscal policy during the stagflation of the 1970s. Thus, economic growth was higher and unemployment lower than in most other western countries. However, since the countercyclical policy focused on branch and company subsidies, Norwegian firms soon learned to adapt to policy makers rather than to markets. Hence, neither productivity nor business structure had the incentives to keep pace with changes in international markets.

Norway lost significant competitive power, and large-scale deindustrialization took place despite efforts to save manufacturing industry. Another reason for deindustrialization was the huge growth of the profitable petroleum sector. Persistently high oil prices from autumn 1973 to the end of 1985 pushed labor costs upward through spillover effects from high wages in the petroleum sector. High labor costs made the Norwegian foreign sector less competitive, and Norway deindustrialized at a more rapid pace than most of its largest trading partners. Thanks to the petroleum sector, however, Norway experienced high growth rates in the last three decades of the twentieth century, bringing it to the top of the world GDP per capita list at the dawn of the new millennium. Nevertheless, Norway had economic problems in both the eighties and the nineties.

In 1981 a conservative government replaced Labor, which had been in power for most of the post-war period. Norway had already joined the international wave of credit liberalization, and the new government accelerated this policy. However, alongside the liberalization, parliament still prevented market forces from setting interest rates; instead rates were set by politicians, in contradiction of the credit liberalization policy. Since the level of interest rates was an important part of the political contest for power, rates were set significantly below the market level. In consequence, a substantial credit boom emerged in the early 1980s and continued until the late spring of 1986. The resulting monetary expansion produced an artificial boom and an overheated economy. When oil prices fell dramatically from December 1985 onwards, the trade surplus suddenly turned into a huge deficit (Figure 6).

Figure 6
North Sea Oil Prices and Norway’s Trade Balance, 1975-2000
Source: Statistics Norway

The conservative-center government was forced to adopt a tighter fiscal policy, which the new Labor government continued from May 1986. Interest rates remained high as the government now tried to run a credible fixed-exchange-rate policy. In the summer of 1990 the Norwegian krone was officially pegged to the ECU. When the international wave of currency speculation reached Norway in the autumn of 1992, the central bank finally had to suspend the fixed exchange rate and later devalue.

As a consequence of these years of monetary expansion followed by contraction, most western countries experienced financial crises, and the crisis was relatively severe in Norway. Dwelling prices slid, consumers could not pay their bills, and bankruptcies and unemployment reached new heights. The state took over most of the larger commercial banks to avoid a total financial collapse.

After the suspension of the ECU peg and the subsequent devaluation, Norway enjoyed growth until 1998, thanks to optimism, an international boom and high petroleum prices. Then the Asian financial crisis rattled the Norwegian stock market, and petroleum prices fell rapidly due to internal problems among the OPEC countries. Hence, the krone depreciated, the fixed exchange rate policy had to be abandoned, and the government adopted inflation targeting. Along with the changes in monetary policy, the center coalition government was also able to maintain a tighter fiscal policy, while interest rates stayed high. As a result, Norway escaped the overheating of 1993-1997 without any devastating effects. Today the country has a strong and sound economy.

The petroleum sector is still very important in Norway, and in this respect the historical tradition of raw material dependency has had a renaissance. Unlike in many other countries rich in raw materials, however, natural resources have helped make Norway one of the most prosperous economies in the world. Important factors in Norway’s ability to turn resource abundance into economic prosperity include an educated work force, the adoption of advanced technology from other leading countries, stable and reliable institutions, and democratic rule.

References

Basberg, Bjørn L. Handelsflåten i krig: Nortraship: Konkurrent og alliert. Oslo: Grøndahl and Dreyer, 1992.

Bergh, Trond, Tore Hanisch, Even Lange and Helge Pharo. Growth and Development. Oslo: NUPI, 1979.

Brautaset, Camilla. “Norwegian Exports, 1830-1865: In Perspective of Historical National Accounts.” Ph.D. dissertation. Norwegian School of Economics and Business Administration, 2002.

Bruland, Kristine. British Technology and European Industrialization. Cambridge: Cambridge University Press, 1989.

Danielsen, Rolf, Ståle Dyrvik, Tore Grønlie, Knut Helle and Edgar Hovland. Norway: A History from the Vikings to Our Own Times. Oslo: Scandinavian University Press, 1995.

Eitrheim, Øyvind, Jan T. Klovland and Jan F. Qvigstad, editors. Historical Monetary Statistics for Norway, 1819-2003. Oslo: Norges Banks skriftserie/Occasional Papers, no. 35, 2004.

Hanisch, Tore Jørgen. “Om virkninger av paripolitikken.” Historisk tidsskrift 58, no. 3 (1979): 223-238.

Hanisch, Tore Jørgen, Espen Søilen and Gunhild Ecklund. Norsk økonomisk politikk i det 20. århundre. Verdivalg i en åpen økonomi. Kristiansand: Høyskoleforlaget, 1999.

Grytten, Ola Honningdal. “A Norwegian Consumer Price Index 1819-1913 in a Scandinavian Perspective.” European Review of Economic History 8, no.1 (2004): 61-79.

Grytten, Ola Honningdal. “A Consumer Price Index for Norway, 1516-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 47-98.

Grytten, Ola Honningdal. “The Gross Domestic Product for Norway, 1830-2003.” Norges Bank: Occasional Papers, no. 1 (2004b): 241-288.

Hodne, Fritz. An Economic History of Norway, 1815-1970. Tapir: Trondheim, 1975.

Hodne, Fritz. The Norwegian Economy, 1920-1980. London: Croom Helm and St. Martin’s, 1983.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 19. århundre. Bergen: Fagbokforlaget, 2000.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 20. århundre. Bergen: Fagbokforlaget, 2002.

Klovland, Jan Tore. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 2 (1998):

Klovland, Jan Tore. “Monetary Aggregates in Norway, 1819-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 181-240.

Klovland, Jan Tore. “Historical Exchange Rate Data, 1819-2003”. Norges Bank: Occasional Papers, no. 1 (2004b): 289-328.

Lange, Even, editor. Teknologi i virksomhet. Verkstedsindustri i Norge etter 1840. Oslo: Ad Notam Forlag, 1989.

Nordvik, Helge W. “Finanspolitikken og den offentlige sektors rolle i norsk økonomi i mellomkrigstiden”. Historisk tidsskrift 58, no. 3 (1979): 239-268.

Sejersted, Francis. Demokratisk kapitalisme. Oslo: Universitetsforlaget, 1993.

Søilen, Espen. “Fra frischianisme til keynesianisme? En studie av norsk økonomisk politikk i lys av økonomisk teori, 1945-1980.” Ph.D. dissertation. Bergen: Norwegian School of Economics and Business Administration, 1998.

Citation: Grytten, Ola. “The Economic History of Norway”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-norway/

An Economic History of New Zealand in the Nineteenth and Twentieth Centuries

John Singleton, Victoria University of Wellington, New Zealand

Living standards in New Zealand were among the highest in the world between the late nineteenth century and the 1960s. But New Zealand’s economic growth was very sluggish between 1950 and the early 1990s, and most Western European countries, as well as several in East Asia, overtook New Zealand in terms of real per capita income. By the early 2000s, New Zealand’s GDP per capita was in the bottom half of the developed world.

Table 1: Per capita GDP in New Zealand compared with the United States and Australia (in 1990 international dollars)

Year    US      Australia   New Zealand   NZ as % of US   NZ as % of Australia
1840    1588    1374        400           25              29
1900    4091    4013        4298          105             107
1950    9561    7412        8456          88              114
2000    28129   21540       16010         57              74

Source: Angus Maddison, The World Economy: Historical Statistics. Paris: OECD, 2003, pp. 85-7.
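The two percentage columns in Table 1 are straightforward ratios of the Maddison per capita figures; a short sketch reproduces them from the dollar values in the table:

```python
# Per capita GDP in 1990 international dollars (Maddison 2003),
# keyed by year: (US, Australia, New Zealand).
gdp = {
    1840: (1588, 1374, 400),
    1900: (4091, 4013, 4298),
    1950: (9561, 7412, 8456),
    2000: (28129, 21540, 16010),
}

for year, (us, aus, nz) in gdp.items():
    # New Zealand as a percent of the US and of Australia, rounded as in the table.
    print(year, round(nz / us * 100), round(nz / aus * 100))
```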

Over the second half of the twentieth century, argue Greasley and Oxley (1999), New Zealand seemed in some respects to have more in common with Latin American countries than with other advanced western nations. As well as a snail-like growth rate, New Zealand followed highly protectionist economic policies between 1938 and the 1980s. (In absolute terms, however, New Zealanders continued to be much better off than their Latin American counterparts.) Maddison (1991) put New Zealand in a middle-income group of countries, including the former Czechoslovakia, Hungary, Portugal, and Spain.

Origins and Development to 1914

When Europeans (mainly Britons) started to arrive in Aotearoa (New Zealand) in the early nineteenth century, they encountered a tribal society. Maori tribes made a living from agriculture, fishing, and hunting. Internal trade was conducted on the basis of gift exchange. Maori did not hold to the Western concept of exclusive property rights in land. The idea that land could be bought and sold was alien to them. Most early European residents were not permanent settlers. They were short-term male visitors involved in extractive activities such as sealing, whaling, and forestry. They traded with Maori for food, sexual services, and other supplies.

Growing contact between Maori and the British was difficult to manage. In 1840 the British Crown and some Maori signed the Treaty of Waitangi. The treaty, though subject to various interpretations, to some extent regularized the relationship between Maori and Europeans (or Pakeha). At roughly the same time, the first wave of settlers arrived from England to set up colonies including Wellington and Christchurch. Settlers were looking for a better life than they could obtain in overcrowded and class-ridden England. They wished to build a rural and largely self-sufficient society.

For some time, only the Crown was permitted to purchase land from Maori. This land was then either resold or leased to settlers. Many Maori felt – and many still feel – that they were forced to give up land, effectively at gunpoint, in return for a pittance. Perhaps they did not always grasp that land, once sold, was lost forever. Conflict over land led to intermittent warfare between Maori and settlers, especially in the 1860s. There was brutality on both sides, but the Europeans on the whole showed more restraint in New Zealand than in North America, Australia, or Southern Africa.

Maori actually required less land in the nineteenth century because their numbers were falling, possibly by half between the late eighteenth and late nineteenth centuries. By the 1860s, Maori were outnumbered by British settlers. The introduction of European diseases, alcohol, and guns contributed to the decline in population. Increased mobility and contact between tribes may also have spread disease. The Maori population did not begin to recover until the twentieth century.

Gold was discovered in several parts of New Zealand (including Thames and Otago) in the mid-nineteenth century, but the introduction of sheep farming in the 1850s gave a more enduring boost to the economy. Australian and New Zealand wool was in high demand in the textile mills of Yorkshire. Sheep farming necessitated the clearing of native forests and the planting of grasslands, which changed the appearance of large tracts of New Zealand. This work was expensive, and easy access to the London capital market was critical. Economic relations between New Zealand and Britain were strong, and remained so until the 1970s.

Between the mid-1870s and mid-1890s, New Zealand was adversely affected by weak export prices, and in some years there was net emigration. But wool prices recovered in the 1890s, just as new exports – meat and dairy produce – were coming to prominence. Until the advent of refrigeration in the early 1880s, New Zealand did not export meat and dairy produce. After the introduction of refrigeration, however, New Zealand foodstuffs found their way onto the dinner tables of working class families in Britain, though not those of the middle and upper classes, who could afford fresh produce.

In comparative terms, the New Zealand economy was in its heyday in the two decades before 1914. New Zealand (though not its Maori shadow, Aotearoa) was a wealthy, dynamic, and egalitarian society. The total population in 1914 was slightly above one million. Exports consisted almost entirely of land-intensive pastoral commodities. Manufactures loomed large in New Zealand’s imports. High labor costs, and the absence of scale economies in the tiny domestic market, hindered industrialization, though there was some processing of export commodities and imports.

War, Depression and Recovery, 1914-38

World War One disrupted agricultural production in Europe and created a robust demand for New Zealand’s primary exports. Encouraged by high export prices, New Zealand farmers borrowed and invested heavily between 1914 and 1920. Land changed hands at very high prices. Unfortunately, the early twenties brought the start of a prolonged slump in international commodity markets, and many farmers struggled to service and repay their debts.

The global economic downturn, beginning in 1929-30, was transmitted to New Zealand by the collapse in commodity prices on the London market. Farmers bore the brunt of the depression. At the trough, in 1931-32, net farm income was negative. Declining commodity prices increased the already onerous burden of servicing and repaying farm mortgages. Meat freezing works, woolen mills, and dairy factories were caught in the spiral of decline. Farmers had less to spend in the towns. Unemployment rose, and some of the urban jobless drifted back to the family farm. The burden of external debt, the bulk of which was in sterling, rose dramatically relative to export receipts. But a protracted balance of payments crisis was avoided, since the demand for imports fell sharply in response to the drop in incomes. The depression was not as serious in New Zealand as in many industrial countries. Prices were more flexible in the primary sector and in small business than in modern, capital-intensive industry. Nevertheless, the experience of depression profoundly affected New Zealanders’ attitudes towards the international economy for decades to come.

At first, there was no reason to expect that the downturn in 1929-30 was the prelude to the worst slump in history. As tax and customs revenue fell, the government trimmed expenditure in an attempt to balance the budget. Only in 1931 was the severity of the crisis realized. Further cuts were made in public spending. The government intervened in the labor market, securing an order for an all-round reduction in wages. It pressured and then forced the banks to reduce interest rates. The government sought to maintain confidence and restore prosperity by helping farms and other businesses to lower costs. But these policies did not lead to recovery.

Several factors contributed to the recovery that commenced in 1933-34. The New Zealand pound was devalued by 14 percent against sterling in January 1933. As most exports were sold for sterling, which was then converted into New Zealand pounds, the income of farmers was boosted at a stroke of the pen. Devaluation increased the money supply. Once economic actors, including the banks, were convinced that the devaluation was permanent, there was an increase in confidence and in lending. Other developments played their part. World commodity prices stabilized, and then began to pick up. Pastoral output and productivity continued to rise. The 1932 Ottawa Agreements on imperial trade strengthened New Zealand’s position in the British market at the expense of non-empire competitors such as Argentina, and prefigured an increase in the New Zealand tariff on non-empire manufactures. As was the case elsewhere, the recovery in New Zealand was not the product of a coherent economic strategy. When beneficial policies were adopted it was as much by accident as by design.
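The exchange rate mechanism described above is simple arithmetic: export receipts were fixed in sterling, so raising the New Zealand pound price of sterling by 14 percent raised farmers’ New Zealand pound incomes by the same proportion. A sketch with hypothetical receipts and rates (the figures below are illustrative, not historical):

```python
def nz_pound_income(sterling_receipts, nz_pounds_per_sterling):
    """Convert sterling export receipts into New Zealand pounds."""
    return sterling_receipts * nz_pounds_per_sterling

receipts = 1000.0                  # hypothetical sterling receipts
rate_before = 1.00                 # illustrative pre-devaluation rate
rate_after = rate_before * 1.14    # a 14 percent devaluation

gain = (nz_pound_income(receipts, rate_after)
        / nz_pound_income(receipts, rate_before) - 1)
print(f"{gain:.0%}")
```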

Once underway, however, New Zealand’s recovery was comparatively rapid and persisted over the second half of the thirties. A Labour government, elected towards the end of 1935, nationalized the central bank (the Reserve Bank of New Zealand). The government instructed the Reserve Bank to create advances in support of its agricultural marketing and state housing schemes. It became easier to obtain borrowed funds.

An Insulated Economy, 1938-1984

A balance of payments crisis in 1938-39 was met by the introduction of administrative restrictions on imports. Labour had not been prepared to deflate or devalue – the former would have increased unemployment, while the latter would have raised working class living costs. Although intended as a temporary expedient, the direct control of imports became a distinctive feature of New Zealand economic policy until the mid-1980s.

The doctrine of “insulationism” was expounded during the 1940s. Full employment was now the main priority. In the light of disappointing interwar experience, there were doubts about the ability of the pastoral sector to provide sufficient work for New Zealand’s growing population. There was a desire to create more industrial jobs, even though there seemed no prospect of achieving scale economies within such a small country. Uncertainty about export receipts, the need to maintain a high level of domestic demand, and the competitive weakness of the manufacturing sector, appeared to justify the retention of quantitative import controls.

After 1945, many Western countries retained controls over current account transactions for several years. When these controls were relaxed and then abolished in the fifties and early sixties, the anomalous nature of New Zealand’s position became more visible. Although successive governments intended to liberalize, in practice they achieved little, except with respect to trade with Australia.

The collapse of the Korean War commodity boom, in the early 1950s, marked an unfortunate turning point in New Zealand’s economic history. International conditions were unpropitious for the pastoral sector in the second half of the twentieth century. Despite the aspirations of GATT, the United States, Western Europe and Japan restricted agricultural imports, especially of temperate foodstuffs, subsidized their own farmers and, in the case of the Americans and the Europeans, dumped their surpluses in third markets. The British market, which remained open until 1973, when the United Kingdom was absorbed into the EEC, was too small to satisfy New Zealand. Moreover, even the British resorted to agricultural subsidies. Compared with the price of industrial goods, the price of agricultural produce tended to weaken over the long term.

Insulation was a boon to manufacturers, and New Zealand developed a highly diversified industrial structure. But competition was ineffectual, and firms were able to pass cost increases on to the consumer. Import barriers induced many British, American, and Australian multinationals to establish plants in New Zealand. The protected industrial economy did have some benefits. It created jobs – there was full employment until the 1970s – and it increased the stock of technical and managerial skills. But consumers and farmers were deprived of access to cheaper – and often better quality – imported goods. Their interests and welfare were neglected. Competing demand from protected industries also raised the costs of farm inputs, including labor power, and thus reduced the competitiveness of New Zealand’s key export sector.

By the early 1960s, policy makers had realized that New Zealand was falling behind in the race for greater prosperity. The British food market was under threat, as the Macmillan government began a lengthy campaign to enter the protectionist EEC. New Zealand began to look for other economic partners, and the most obvious candidate was Australia. In 1901, New Zealand had declined to join the new federation of Australian colonies. Thus it had been excluded from the Australian common market. After lengthy negotiations, a partial New Zealand-Australia Free Trade Agreement (NAFTA) was signed in 1965. Despite initial misgivings, many New Zealand firms found that they could compete in the Australian market, where tariffs against imports from the rest of the world remained quite high. But this had little bearing on their ability to compete with European, Asian, and North American firms. NAFTA was given renewed impetus by the Closer Economic Relations (CER) agreement of 1983.

Between 1973 and 1984, New Zealand governments were overwhelmed by a group of inter-related economic crises, including two serious supply shocks (the oil crises), rising inflation, and increasing unemployment. Robert Muldoon, the National Party (conservative) prime minister between 1975 and 1984, pursued increasingly erratic macroeconomic policies. He tightened government control over the economy in the early eighties. There were dramatic fluctuations in inflation and in economic growth. In desperation, Muldoon imposed a wage and price freeze in 1982-84. He also mounted a program of large-scale investments, including the expansion of a steel works, and the construction of chemical plants and an oil refinery. By means of these investments, he hoped to reduce the import bill and secure a durable improvement in the balance of payments. But the “Think Big” strategy failed – the projects were inadequately costed, and inherently risky. Although Muldoon’s intention had been to stabilize the economy, his policies had the opposite effect.

Economic Reform, 1984-2000

Muldoon’s policies were discredited, and in 1984 the Labour Party came to power. All other economic strategies having failed, Labour resolved to deregulate and restore the market process. (This seemed very odd at the time.) Within a week of the election, virtually all controls over interest rates had been abolished. Financial markets were deregulated, and, in March 1985, the New Zealand dollar was floated. Other changes followed, including the sale of public sector trading organizations, the reduction of tariffs and the elimination of import licensing. However, reform of the labor market was not completed until the early 1990s, by which time National (this time without Muldoon or his policies) was back in office.

Once credit was no longer rationed, there was a large increase in private sector borrowing, and a boom in asset prices. Numerous speculative investment and property companies were set up in the mid-eighties. New Zealand’s banks, which were not used to managing risk in a deregulated environment, scrambled to lend to speculators in an effort not to miss out on big profits. Many of these ventures turned sour, especially after the 1987 share market crash. Banks were forced to reduce their lending, to the detriment of sound as well as unsound borrowers.

Tight monetary policy and financial deregulation led to rising interest rates after 1984. The New Zealand dollar appreciated strongly. Farmers bore the initial brunt of high borrowing costs and a rising real exchange rate. Manufactured imports also became more competitive, and many inefficient firms were forced to close. Unemployment rose in the late eighties and early nineties. The early 1990s were marked by an international recession, which was particularly painful in New Zealand, not least because of the high hopes raised by the post-1984 reforms.

An economic recovery began towards the end of 1991. With a brief interlude in 1998, strong growth persisted for the remainder of the decade. Confidence was gradually restored to the business sector. Unemployment began to recede. After a lengthy time lag, the economic reforms seemed to be paying off for the majority of the population.

Large structural changes took place after 1984. Factors of production switched out of the protected manufacturing sector, and were drawn into services. Tourism boomed as the relative cost of international travel fell. The face of the primary sector also changed, and the wine industry began to penetrate world markets. But not all manufacturers struggled. Some firms adapted to the new environment and became more export-oriented. For instance, a small engineering company, Scott Technology, became a world leader in the provision of equipment for the manufacture of refrigerators and washing machines.

Annual inflation was reduced to low single digits by the early nineties. Price stability was locked in through the 1989 Reserve Bank Act. This legislation gave the central bank operational autonomy, while compelling it to focus on the achievement and maintenance of price stability rather than other macroeconomic objectives. The Reserve Bank of New Zealand was the first central bank in the world to adopt a regime of inflation targeting. The 1994 Fiscal Responsibility Act committed governments to sound finance and the reduction of public debt.

By 2000, New Zealand’s population was approaching four million. Overall, the reforms of the eighties and nineties were responsible for creating a more competitive economy. New Zealand’s economic decline relative to the rest of the OECD was halted, though it was not reversed. In the nineties, New Zealand enjoyed faster economic growth than either Germany or Japan, an outcome that would have been inconceivable a few years earlier. But many New Zealanders were not satisfied. In particular, they were galled that their closest neighbor, Australia, was growing even faster. Australia, however, was an inherently much wealthier country with massive mineral deposits.

Assessment

Several explanations have been offered for New Zealand’s relatively poor economic performance during the twentieth century.

Wool, meat, and dairy produce were the foundations of New Zealand’s prosperity in Victorian and Edwardian times. After 1920, however, international market conditions were generally unfavorable to pastoral exports. New Zealand had the wrong comparative advantage to enjoy rapid growth in the twentieth century.

Attempts to diversify were only partially successful. High labor costs and the small size of the domestic market hindered the efficient production of standardized labor-intensive goods (e.g. garments) and standardized capital-intensive goods (e.g. autos). New Zealand might have specialized in customized and skill-intensive manufactures, but the policy environment was not conducive to the promotion of excellence in niche markets. Between 1938 and the 1980s, Latin American-style trade policies fostered the growth of a ramshackle manufacturing sector. Only in the late eighties did New Zealand decisively reject this regime.

Geographical and geological factors also worked to New Zealand’s disadvantage. Australia drew ahead of New Zealand in the 1960s, following the discovery of large mineral deposits for which there was a big market in Japan. Staple theory suggests that developing countries may industrialize successfully by processing their own primary products, instead of by exporting them in a raw state. Canada had coal and minerals, and became a significant industrial power. But New Zealand’s staples of wool, meat and dairy produce offered limited downstream potential.

Canada also took advantage of its proximity to the U.S. market, and access to U.S. capital and technology. American-style institutions in the labor market, business, education and government became popular in Canada. New Zealand and Australia relied on arguably inferior British-style institutions. New Zealand was a long way from the world’s economic powerhouses, and it was difficult for its firms to establish and maintain contact with potential customers and collaborators in Europe, North America, or Asia.

Clearly, New Zealand’s problems were not all of its own making. The elimination of agricultural protectionism in the northern hemisphere would have given a huge boost to the New Zealand economy. On the other hand, in the period between the late 1930s and mid-1980s, New Zealand followed inward-looking economic policies that hindered economic efficiency and flexibility.

References

Bassett, Michael. The State in New Zealand, 1840-1984. Auckland: Auckland University Press, 1998.

Belich, James. Making Peoples: A History of the New Zealanders from Polynesian Settlement to the End of the Nineteenth Century, Auckland: Penguin, 1996.

Condliffe, John B. New Zealand in the Making. London: George Allen & Unwin, 1930.

Dalziel, Paul. “New Zealand’s Economic Reforms: An Assessment.” Review of Political Economy 14, no. 2 (2002): 31-46.

Dalziel, Paul and Ralph Lattimore. The New Zealand Macroeconomy: Striving for Sustainable Growth with Equity. Melbourne: Oxford University Press, fifth edition, 2004.

Easton, Brian. In Stormy Seas: The Post-War New Zealand Economy. Dunedin: University of Otago Press, 1997.

Endres, Tony and Ken Jackson. “Policy Responses to the Crisis: Australasia in the 1930s.” In Capitalism in Crisis: International Responses to the Great Depression, edited by Rick Garside, 148-65. London: Pinter, 1993.

Evans, Lewis, Arthur Grimes, and Bryce Wilkinson (with David Teece), “Economic Reform in New Zealand 1984-95: The Pursuit of Efficiency.” Journal of Economic Literature 34, no. 4 (1996): 1856-1902.

Gould, John D. The Rake’s Progress: the New Zealand Economy since 1945. Auckland: Hodder and Stoughton, 1982.

Greasley, David and Les Oxley. “A Tale of Two Dominions: Comparing the Macroeconomic Records of Australia and Canada since 1870.” Economic History Review 51, no. 2 (1998): 294-318.

Greasley, David and Les Oxley. “Outside the Club: New Zealand’s Economic Growth, 1870-1993.” International Review of Applied Economics 14, no. 2 (1999): 173-92.

Greasley, David and Les Oxley. “Regime Shift and Fast Recovery on the Periphery: New Zealand in the 1930s.” Economic History Review 55, no. 4 (2002): 697-720.

Hawke, Gary R. The Making of New Zealand: An Economic History. Cambridge: Cambridge University Press, 1985.

Jones, Steve R.H. “Government Policy and Industry Structure in New Zealand, 1900-1970.” Australian Economic History Review 39, no. 3 (1999): 191-212.

Mabbett, Deborah. Trade, Employment and Welfare: A Comparative Study of Trade and Labour Market Policies in Sweden and New Zealand, 1880-1980. Oxford: Clarendon Press, 1995.

Maddison, Angus. Dynamic Forces in Capitalist Development. Oxford: Oxford University Press, 1991.

Maddison, Angus. The World Economy: Historical Statistics. Paris: OECD, 2003.

McKinnon, Malcolm. Treasury: 160 Years of the New Zealand Treasury. Auckland: Auckland University Press in association with the Ministry for Culture and Heritage, 2003.

Schedvin, Boris. “Staples and Regions of the Pax Britannica.” Economic History Review 43, no. 4 (1990): 533-59.

Silverstone, Brian, Alan Bollard, and Ralph Lattimore, editors. A Study of Economic Reform: The Case of New Zealand. Amsterdam: Elsevier, 1996.

Singleton, John. “New Zealand: Devaluation without a Balance of Payments Crisis.” In The World Economy and National Economies in the Interwar Slump, edited by Theo Balderston, 172-90. Basingstoke: Palgrave, 2003.

Singleton, John and Paul L. Robertson. Economic Relations between Britain and Australasia, 1945-1970. Basingstoke: Palgrave, 2002.

Ville, Simon. The Rural Entrepreneurs: A History of the Stock and Station Agent Industry in Australia and New Zealand. Cambridge: Cambridge University Press, 2000.

Citation: Singleton, John. “New Zealand in the Nineteenth and Twentieth Centuries”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-new-zealand-in-the-nineteenth-and-twentieth-centuries/

Monopsony in American Labor Markets

William M. Boal, Drake University and Michael R. Ransom, Brigham Young University

What is Labor Monopsony?

The term “monopsony,” first used in print by Joan Robinson (1969, p. 215), means a single buyer in a market. Like a monopolist (a single seller), a monopsonist has power over price through control of quantity. In particular, a monopsonist can push the market price of a good down by reducing the quantity it purchases. The tradeoff between price paid and quantity purchased is the supply curve that the monopsonist confronts. A competitive buyer, by contrast, confronts no such tradeoff — it must accept a price determined by the market. A monopsonized market will therefore be characterized by a smaller quantity traded and a lower price than a competitive market with the same demand and other costs of production.

Monopsony power, like monopoly power, results in economic inefficiency. This is because the monopsonist avoids purchasing the last few units of a good whose value to the monopsonist is greater than their marginal cost, in order to hold down the price paid for prior units. In principle, inefficiency from monopsony can be mitigated by a well-placed legal price floor, which removes the monopsonist’s power over price and eliminates its incentive to restrict the quantity it purchases. A modest price floor forces the monopsonist to take price as given and increase its purchases toward the level of competitive buyers. However, if the price floor is too high, the monopsonist will reduce its purchases — just as competitive buyers would do in response to a price floor — and inefficiency recurs.

In labor markets, “buyers” are employers, “sellers” are individual workers, the “good” is time and effort, and the “price” is the going wage or salary level. An employer who enjoys monopsony power holds down the wage by limiting the number of workers it hires. At the resulting inefficient level of employment, the value of the last worker’s contribution to output is greater than the wage she or he receives. This gap was termed the “rate of exploitation” by Pigou (1924, p. 754). It can be shown mathematically that a monopsonist employer will choose a rate of exploitation (expressed as a percent of the wage actually paid) equal to the reciprocal of the elasticity of labor supply (the percent of the workforce lost if wages are cut by one percent). Competitive employers face an infinite elasticity of labor supply — if they cut wages they lose their entire workforce, at least in the long run — and consequently are unable to exploit their workers. Competition thus forces the rate of exploitation to zero. Monopsonist employers by definition face a finite labor supply elasticity. For example, if a monopsonist faces a supply elasticity of five, then five percent of the workforce would be lost if wages were cut one percent, and it can be shown that the monopsonist will choose a rate of exploitation equal to one-fifth or twenty percent. Inefficiency and “exploitation” from labor monopsony can be mitigated by a well-placed minimum wage enforced by the government or perhaps by a labor union. Thus the monopsony model can provide justification for minimum wage laws and unionism because such measures raise pay, increase employment, and improve economic efficiency simultaneously!
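The inverse relationship between the supply elasticity and the chosen rate of exploitation can be checked numerically. The sketch below uses purely illustrative parameters (not drawn from any study cited here): a monopsonist faces a linear inverse labor supply curve w(L) = a + bL and a constant marginal revenue product v, and chooses employment to maximize profit.

```python
# Illustrative sketch: a monopsonist facing inverse labor supply
# w(L) = a + b*L and constant marginal revenue product v chooses
# employment L to maximize (v - w(L)) * L.

def monopsony_optimum(v, a, b):
    """Return (employment, wage) chosen by the monopsonist."""
    L = (v - a) / (2 * b)   # first-order condition: v = a + 2*b*L
    w = a + b * L
    return L, w

v, a, b = 20.0, 10.0, 0.01          # hypothetical numbers
L, w = monopsony_optimum(v, a, b)

exploitation = (v - w) / w          # Pigou's rate of exploitation
elasticity = w / (b * L)            # labor supply elasticity at the optimum

print(round(exploitation, 4))       # 0.3333
print(round(1 / elasticity, 4))     # 0.3333 -- matches, as the text states
```

With these numbers the monopsonist faces a supply elasticity of three at its chosen point and pushes the wage one-third (the reciprocal of three) below the value of the marginal worker's contribution, exactly as the formula in the text predicts.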

Elaborations of the Monopsony Model

Cases of isolated labor markets with only one employer are surely rare, so the model must be elaborated to fit the real world. One elaboration is oligopsony. In oligopsony models, employers hold power over wages if they are few in number. This power might derive from collusion if employers cooperate in setting wages. Or it might derive from inflexibility of their respective workforces. The latter situation, called the Cournot model, implies that an individual employer that cuts wages cannot lose all its employees to its rivals, because those rivals cannot absorb additional employees quickly. Consequently, each individual employer enjoys some power over wages. Both collusion and the Cournot model imply that the greater the concentration of employment in a small number of employers, the lower the wage and the higher the rate of exploitation, holding determinants of labor demand and labor supply constant.

Another elaboration is employer differentiation. If employers differ by location or by working conditions, workers may not treat them as “perfect substitutes.” An employer that cuts wages might lose some workers but not all. Thus an individual employer enjoys power over wages to the extent that its rivals are far away or offer very different jobs. Recent elaborations of the differentiation concept focus on the process by which workers are hired. Models of moving costs emphasize that workers, once hired, may require a substantial wage increase to switch firms. This gives an employer power over wages for its existing workers, but not for new hires. Models of job search emphasize that workers need time to find better jobs. Thus an employer need not match the wages of other employers. However, to maintain a large workforce, an employer must pay better-than-average wages to reduce quits. Note that both moving costs models and job search models imply that workers are more mobile in the long run than in the short run. As Hicks (1932, p. 83) noted, monopsony power depends inversely on “the ease with which [workers] can move, and on the extent to which they and their employers consider the future, or look only to the moment.”

Athletes

A striking example of monopsony in an American labor market is professional baseball. Until 1976, the “reserve clause” in player contracts bound each player to a single team, an extreme form of collusion. As a result, teams did not compete for players. Estimates by Scully (1974) and others indicate that the rate of monopsonistic exploitation was very high during this era — players were paid less than half of the value of their contribution to output, and possibly as little as one-seventh. After the reserve clause was eliminated in 1976, players with at least six years’ experience became free to negotiate with other teams. Salaries subsequently soared. By 1989, the rate of exploitation was estimated to have fallen close to zero (Zimbalist, 1992).

The early history of baseball, when rival leagues occasionally appeared, yields similar estimates of monopsony exploitation. Rival leagues undercut the reserve clause, which could only be enforced among teams in the same league. Thus the appearances of the American Association in 1882, the American League in 1901, and the Federal League in 1913 each prompted rapid increases in player salaries. But when the rival league was bought out or merged into the dominant league, salaries always dropped sharply — usually by about a half (Kahn, 2000). This history suggests that, in the absence of rival leagues, early professional baseball players were paid no more than half of the value of their contribution to output.
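The fractions quoted above map directly into the exploitation rates reported in Table 1, because Pigou's rate is the gap between the value of the marginal contribution and the wage, expressed as a percent of the wage. A minimal check, using only the fractions from the text:

```python
# Pigou's rate of exploitation: (value of contribution - wage) / wage.
def exploitation_rate(value_of_output, wage):
    return (value_of_output - wage) / wage

# A player paid half the value of his contribution: rate = 100%.
print(exploitation_rate(2.0, 1.0))   # 1.0
# A player paid one-seventh of the value: rate = 600%.
print(exploitation_rate(7.0, 1.0))   # 6.0
```

These two endpoints are the "100% to 600%" range attributed to the reserve-clause era in Table 1.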

While no serious rivals to Major League Baseball have appeared since the early twentieth century, rival leagues have frequently appeared in other professional sports. For example, the American Basketball Association challenged the National Basketball Association from 1967 to 1976, the World Hockey Association challenged the National Hockey League from 1971 to 1979, and the United States Football League challenged the National Football League from 1982 to 1985. The appearance of each of these rivals seems to have caused player salaries to increase substantially in their respective sports (Kahn, 2000).

An even more striking example of monopsony is the market for college athletes. These players are clearly employees in all but name, but the National Collegiate Athletic Association strictly limits the amounts that athletes at member colleges and universities can receive. The value of the output of top college football players has been estimated at about $500,000 (Brown, 1993), many times more than such athletes are “paid.”

Teachers and Nurses

For the last few decades, researchers have investigated whether the markets for American school teachers and nurses are characterized by monopsony. Both professions may face a limited number of potential employers in any given geographical region. For teachers, employers are school districts, which are separated by political boundaries. For nurses, the dominant employers are hospitals, which are dispersed geographically except in large metropolitan areas. Moreover, teachers and nurses are (still) predominantly married women, who may find it difficult to move to a new geographic area if their husbands are employed. Researchers have considered both oligopsony and differentiation.

Early investigations measured the relationship between employer concentration and wages. A negative relationship, holding everything else constant, would suggest oligopsony. Several early studies — for example, Luizer and Thornton (1986) for teachers and Link and Landon (1976) for nurses — did in fact find negative relationships. Yet it is unclear whether everything else was held constant in these studies. Highly concentrated markets with small numbers of employers for teachers or nurses tend to be rural areas and small cities. Less concentrated markets with many alternative employers tend to be large cities. But pay for most other occupations — even less specialized ones with many potential employers — is lower in rural areas and small cities, so it is not clear that monopsony is to blame. Indeed, studies by Adamache and Sloan (1982), Boal and Ransom (1999), Hirsch and Schumacher (1995), and others have shown that employer concentration has little effect on the wages of teachers and nurses after controlling for city size or the general wage level.

A more recent investigation by Sullivan (1989) focused on differentiation among employers of nurses. Using data on hospitals, Sullivan estimated that if a hospital cut wages by one percent, it would lose only about 1.3 percent of its nurses immediately. This suggests that hospitals enjoy substantial power over wages. However, Sullivan also showed that a hospital would lose four percent of its nurses within three years and presumably even more in the long run. Sullivan’s estimates imply that if the hospital “considers the future,” it is unlikely to lower wages much more than about 10 percent below the contribution of the marginal nurse to hospital revenue.

Another recent study by Boal (2001) estimated the effects of legal minimum salaries on employment of teachers in two states. That study found that increases in legal minimum salaries tended to decrease employment, suggesting that the market for teachers was more competitive than monopsonistic.

University Professors

Several researchers have suggested that moving costs give a university monopsony power over its existing workforce. Professors have highly specialized skills, and their potential employers (universities) are widely dispersed geographically. The market for newly-hired professors is surely competitive, because new hires must pay moving costs no matter who hires them. But the market for existing professors is monopsonized: professors, once hired, may require a substantial wage increase to switch universities. Moreover, since pay is usually adjusted over time for performance, universities cannot promise future salary increases at the time of original hire, as school districts do. Assuming some professors have higher moving costs than others, a modest cut in wages for existing professors will not cause them all to leave.

The model of moving costs predicts a negative relationship between wages and seniority (time spent at the same university). Ransom (1993) measured this relationship, after controlling for total teaching experience, education level, and other factors influencing professors’ productivity. He did find a negative relationship — the penalty for senior professors appeared to be roughly 5 to 15 percent. However, formal models of moving costs imply that newly-hired professors are paid more than the competitive salary level (in anticipation of later exploitation — see Black and Loewenstein, 1991) so not all of this penalty is exploitation.

Miners in Company Towns

Textbooks often cite company towns as classic examples of monopsony, especially towns in the late nineteenth and early twentieth centuries when transportation was expensive. A company town is a small town located in a remote area with only one employer. Company towns were most common in mining, where the town’s location was dictated by mineral deposits. Often the employer owned all the housing and operated all stores and other services in the town. This arrangement might seem to give the employer “control” over its workforce and monopsony power through severe differentiation of employers. However, Fishback (1992) has argued that this arrangement actually reduced living costs for employees by eliminating market imperfections in housing and retail markets. High turnover rates in company towns also cast doubt on the view that workers were “locked in” to their employers (see Boal, 1995).

Company towns were especially widespread in Appalachian coal mining in the early twentieth century. In West Virginia, for example, 79 percent of coal miners lived in company-owned housing in the early 1920s. Nevertheless, Boal (1995) showed that coal mining companies were not very differentiated and enjoyed little power over wages, at least in the long run. A one-percent cut in wages would cause at least two percent of the workforce to be lost the same year, and most of it to be lost in the long run. Thus coal miners seemed to “move with ease.” Assuming employers “considered the future” with discount rates of no more than 10 percent, they would push wages down only about 5 percent, according to his estimates.

Early Textile Mill Workers

Several researchers have investigated whether America’s first factories — New England textile mills — enjoyed monopsony power. Some researchers believe that as these factories grew in size, they were forced to raise wages in order to attract workers from farther away, at least in the early nineteenth century (Lebergott, 1960). Other researchers find no relationship between firm size and wage, but find evidence of collusion by employers in setting wages (Ware, 1966).

Still other researchers have tried to measure the rate of exploitation by comparing the value of the last mill worker’s contribution to output with her wage (most mill workers were women). Implied rates of exploitation range from 9% to over 100% for particular mills in particular years. However, most estimates of the last mill worker’s contribution to output are extremely imprecise, so most calculated rates of exploitation are not significantly different from zero (Vedder, Gallaway, and Klingaman, 1978). Moreover, the largest estimates are for the middle nineteenth century, not the early nineteenth century (Zevin, 1975).

Low-wage Workers

All monopsony models suggest that a modest increase in legal minimum wages should increase employment. In the United States, minimum wages affect only young and unskilled workers. Most studies of the effects of legal minimum wages in the 1970s and early 1980s found small decreases in employment for young unskilled workers, as predicted by the competitive model. However, later studies found almost no effect on employment (see Wellington, 1991) and a few studies found increases in employment as predicted by the monopsony model (see Card and Krueger, 1995). However, these latter studies are controversial (see exchange between Neumark and Wascher, 2000, and Card and Krueger, 2000) and have not convinced the majority of labor economists (see Whaples, 1996). In any case, the rate of exploitation, if positive, is probably small.

The Labor Market in General

Search models suggest that all employers enjoy some monopsony power because workers require time to find better jobs. Formal mathematical models of search, like those of Burdett and Mortensen (1989), imply monopsony power even in the long run and predict that larger firms must pay higher wages. This prediction explains the well-known “firm size-wage effect” — on average, if firm A employs one percent more workers than firm B, it pays 0.01% to 0.03% higher wages for the same kind of workers doing the same kind of jobs (Brown and Medoff, 1989, pp. 1304-1305). Assuming the “firm size-wage effect” is due to monopsony power, firms are pushing wages down by one to three percent below the value of the contribution of the last worker to output.

On the other hand, search models also predict that most firms and jobs pay high wages and only a few pay low wages. This prediction does not fit the facts, even controlling for skill differences across workers. Efforts to fit the model of Burdett and Mortensen (1989) to actual data have been frustrated by this problem. The best such estimates to date suggest that on average wages are pushed down 13 to 15 percent due to search (van den Berg and Ridder, 1993), but these estimates are surely very rough.

Summary

The simple monopsony model provides an alternative explanation to the standard competitive model of how wages are determined. It predicts that employers will hold wages down below the value of the last worker’s contribution to output (“exploitation”) by limiting the number of workers they hire. But it is too simple to fit real American labor markets, so elaborations such as oligopsony or differentiation of employers are needed.

Estimates of monopsony exploitation to date in American labor markets have yielded surprising results (see Table 1 for a rough summary). Monopsony does not appear to have been important in company mining towns, a standard textbook example, or in markets for teachers and nurses, early suspects. In fact, the largest plausible estimates of monopsony exploitation to date are not for blue-collar workers but rather for professional athletes and possibly college professors.

Table 1
Estimated Rates of Monopsonistic Exploitation in American Labor Markets

Labor market | Estimated rate of monopsonistic exploitation* | Source
Baseball players subject to reserve clause | 100% to 600% | Scully (1974), Kahn (2000)
Baseball players not subject to reserve clause | Close to zero | Zimbalist (1992)
Teachers and nurses | Close to zero | Boal and Ransom (2000), Hirsch and Schumacher (1995)
University professors | Less than 5-15% | Ransom (1993)
Coal miners in early twentieth century | Less than 5% | Boal (1995)
Textile mill workers in the nineteenth century | Some likely, but no consensus on magnitude | Vedder, Gallaway, and Klingaman (1978), Zevin (1975)
Low-wage workers | No consensus |
Labor market in general | 1% to 3% | Brown and Medoff (1989)

References and Further Reading

Adamache, Killard W., and Frank A. Sloan. “Unions and Hospitals: Some Unresolved Issues.” Journal of Health Economics 1, no. 1 (1982): 81-108.

van den Berg, Gerard J. and Geert Ridder. “An Empirical Equilibrium Search Model of the Labour Market.” Vrije Universiteit, Amsterdam, Faculty of Economics and Econometrics Research Memorandum 1993-39, July 1993.

Black, Dan A. and Mark A. Loewenstein. “Self-Enforcing Labor Contracts with Costly Mobility: The Subgame Perfect Solution to the Chairman’s Problem.” Research in Labor Economics 12 (1991): 63-83.

Boal, William M. “The Effect of Minimum Salaries on Employment of Teachers.” Unpublished paper, Drake University, 2001.

Boal, William M., and Michael R. Ransom. “Missouri Teachers.” Unpublished paper, Brigham Young University, 2000.

Boal, William M., and Michael R. Ransom. “Monopsony in the Labor Market.” Journal of Economic Literature 35, no. 1 (1997): 86-112.

Brown, Charles, and James Medoff. “The Employer-Size Wage Effect.” Journal of Political Economy 97, no. 5 (1989): 1027-1059.

Brown, Robert W. “An Estimate of the Rent Generated by a Premium College Football Player.” Economic Inquiry 31, no. 4 (1993): 671-684.

Burdett, Kenneth, and Dale T. Mortensen. “Equilibrium Wage Differentials and Firm Size.” Northwestern Center for Mathematical Studies in Economics and Management Science Working Paper 860, 1989.

Card, David E., and Alan B. Krueger. Myth and Measurement: The New Economics of the Minimum Wage. Princeton, New Jersey: Princeton University Press, 1995.

Card, David E., and Alan B. Krueger. “Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Reply.” American Economic Review 90, no. 5 (2000): 1397-1420.

Fishback, Price V. “The Economics of Company Housing: Historical Perspectives from the Coal Fields.” Journal of Law, Economics, and Organization 8, no. 2 (1992): 346-365.

Hicks, John R. The Theory of Wages. London: Macmillan, 1932.

Hirsch, Barry T., and Edward Schumacher. “Monopsony Power and Relative Wages in the Labor Market for Nurses.” Journal of Health Economics 14, no. 4 (1995): 443-476.

Kahn, Lawrence M. “The Sports Business as a Labor Market Laboratory.” Journal of Economic Perspectives 14, no. 3 (2000): 75-94.

Lebergott, Stanley. “Wage Trends, 1800-1900.” Studies in Income and Wealth 24 (1960): 449-498.

Link, Charles R., and John H. Landon. “Market Structure, Nonpecuniary Factors and Professional Salaries: Registered Nurses.” Journal of Economics and Business 28, no. 2 (1976): 151-155.

Luizer, James and Robert Thornton. “Concentration in the Labor Market for Public School Teachers.” Industrial and Labor Relations Review 39, no. 4 (1986): 573-84.

Neumark, David and William Wascher. “Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Comment.” American Economic Review 90, no. 5 (2000): 1362-1396.

Pigou, Arthur Cecil. The Economics of Welfare, second edition. London: Macmillan, 1924.

Ransom, Michael R. “Seniority and Monopsony in the Academic Labor Market.” American Economic Review 83, no. 1 (1993): 221-231.

Robinson, Joan. The Economics of Imperfect Competition, second edition. London: Macmillan, 1969.

Scully, Gerald W. “Pay and Performance in Major League Baseball.” American Economic Review 64, no. 6 (1974): 915-930.

Sullivan, Daniel. “Monopsony Power in the Market for Nurses.” Journal of Law and Economics 32, no. 2 part 2 (1989): S135-S178.

Vedder, Richard K., Lowell E. Gallaway, and David Klingaman. “Discrimination and Exploitation in Antebellum American Cotton Textile Manufacturing.” Research in Economic History 3 (1978): 217-262.

Ware, Caroline. The Early New England Cotton Manufacturers: A Study of Industrial Beginnings. New York: Russell & Russell, 1966.

Wellington, Alison J., “The Effects of the Minimum Wage on the Employment Status of Youths: An Update,” Journal of Human Resources 26, no. 1 (1991): 27-46.

Whaples, Robert. “Is There Consensus among American Labor Economists? Survey Results on Forty Propositions.” Journal of Labor Research 17, no. 4 (1996): 725-34.

Zevin, Robert B. The Growth of Manufacturing in Early Nineteenth Century New England. New York: Arno Press, 1975.

Zimbalist, Andrew. “Salaries and Performance: Beyond the Scully Model.” In Diamonds Are Forever: The Business of Baseball, edited by Paul M. Sommers, 109-133. Washington DC: Brookings Institution, 1992.

Citation: Boal, William and Michael Ransom. “Monopsony in American Labor Markets”. EH.Net Encyclopedia, edited by Robert Whaples. January 23, 2002. URL http://eh.net/encyclopedia/monopsony-in-american-labor-markets/

Money in the American Colonies

Ron Michener, University of Virginia

“There certainly can’t be a greater Grievance to a Traveller, from one Colony to another, than the different values their Paper Money bears.” An English visitor, circa 1742 (Kimber, 1998, p. 52).

The monetary arrangements in use in America before the Revolution were extremely varied. Each colony had its own conventions, tender laws, and coin ratings, and each issued its own paper money. The monetary system within each colony evolved over time, sometimes dramatically, as when Massachusetts abolished the use of paper money within her borders in 1750 and returned to a specie standard. Any encyclopedia-length overview of the subject will, unavoidably, need to generalize, and few generalizations about the colonial monetary system are immune to criticism because counterexamples can usually be found somewhere in the historical record. Those readers who find their interest piqued by this article would be well advised to continue their study of the subject by consulting the more detailed discussions available in Brock (1956, 1975, 1992), Ernst (1973), and McCusker (1978).

Units of Account

In the colonial era the unit of account and the medium of exchange were distinct in ways that now seem strange. An example from modern times suggests how the ancient system worked. Nowadays race horses are auctioned in England using guineas as the unit of account, although the guinea coin has long since disappeared. It is understood by all who participate in these auctions that payment is made according to the rule that one guinea equals 21s. Guineas are the unit of account, but the medium of exchange accepted in payment is something else entirely. The unit of account and medium of exchange were similarly disconnected in colonial times (Adler, 1900).

The units of account in colonial times were pounds, shillings, and pence (1£ = 20s., 1s. = 12d.).1 These pounds, shillings, and pence, however, were local units, such as New York money, Pennsylvania money, Massachusetts money, or South Carolina money and should not be confused with sterling. To do so is comparable to treating modern Canadian dollars and American dollars as interchangeable simply because they are both called “dollars.” All the local currencies were less valuable than sterling.2 A Spanish piece of eight, for instance, was worth 4 s. 6 d. sterling at the British mint. The same piece of eight, on the eve of the Revolution, would have been treated as 6 s. in New England, as 8 s. in New York, as 7 s. 6 d. in Philadelphia, and as 32 s. 6 d. in Charleston (McCusker, 1978).

Colonists assigned local currency values, in these pounds, shillings, and pence, to the foreign specie coins circulating among them. The same foreign specie coins (most notably the Spanish dollar) remained legal tender in the United States in the first half of the nineteenth century and constituted a considerable portion of the circulating specie (Andrews, 1904, pp. 327-28; Michener and Wright, 2005, p. 695). Because the decimal divisions of the dollar so familiar to us today were a newfangled innovation in the early Republic, and because the same coins continued to circulate, the traditional units of account were only gradually abandoned. Lucius Elmer, in his account of the early settlement of Cumberland County, New Jersey, describes how “Accounts were generally kept in this State in pounds, shillings, and pence, of the 7 s. 6 d. standard, until after 1799, in which year a law was passed requiring all accounts to be kept in dollars or units, dimes or tenths, cents or hundredths, and mills or thousandths. For several years, however, aged persons inquiring the price of an article in West Jersey or Philadelphia, required to be told the value in shillings and pence, they not being able to keep in mind the newly-created cents or their relative value . . . So lately as 1820 some traders and tavern keepers in East Jersey kept their accounts in [New] York currency.”3 About 1820, John Quincy Adams (1822) surveyed the progress that had been made in familiarizing the public with the new units:

“It is now nearly thirty years since our new monies of account, our coins, and our mint, have been established. The dollar, under its new stamp, has preserved its name and circulation. The cent has become tolerably familiarized to the tongue, wherever it has been made by circulation familiar to the hand. But the dime having been seldom, and the mille never presented in their material images to the people, have remained . . . utterly unknown. . . . Even now, at the end of thirty years, ask a tradesman, or shopkeeper, in any of our cities, what is a dime or mille, and the chances are four in five that he will not understand your question. But go to New York and offer in payment the Spanish coin, the unit of the Spanish piece of eight [one reale], and the shop or market-man will take it for a shilling. Carry it to Boston or Richmond, and you shall be told it is not a shilling, but nine pence. Bring it to Philadelphia, Baltimore, or the City of Washington, and you shall find it recognized for an eleven-penny bit; and if you ask how that can be, you shall learn that, the dollar being of ninety-pence, the eight part of it is nearer to eleven than to any other number . . .4 And thus we have English denominations most absurdly and diversely applied to Spanish coins; while our own lawfully established dime and mille remain, to the great mass of the people, among the hidden mysteries of political economy – state secrets.”5

It took many more decades for the colonial unit of account to disappear completely. Elmer’s account (Elmer, 1869, p. 137) reported that “Even now, in New York, and in East Jersey, where the eighth of a dollar, so long the common coin in use, corresponded with the shilling of account, it is common to state the price of articles, not above two or three dollars, in shillings, as for instance, ten shillings rather than a dollar and a quarter.”

Not only were the unit of account and medium of exchange disconnected in an unfamiliar manner, but terms such as money and currency did not mean precisely the same thing in colonial times that they do today. In colonial times, “money” and “currency” were practically synonymous and signified whatever was conventionally used as a medium of exchange. The word “currency” today refers narrowly to paper money, but that was not so in colonial times. “The Word, Currency,” Hugh Vance wrote in 1740, “is in common Use in the Plantations . . . and signifies Silver passing current either by Weight or Tale. The same Name is also applicable as well to Tobacco in Virginia, Sugars in the West Indies &c. Every thing at the Market-Rate may be called a Currency; more especially that most general Commodity, for which Contracts are usually made. And according to that Rule, Paper-Currency must signify certain Pieces of Paper, passing current in the Market as Money” (Vance, 1740, CCR III, pp. 396, 431).

Failure to appreciate that the unit of account and medium of exchange were quite distinct in colonial times, and that a familiar term like “currency” had a subtly different meaning, can lead unsuspecting historians astray. They often assume that a phrase such as “£100 New York money” or “£100 New York currency” necessarily refers to £100 of the bills of credit issued by New York. In fact, it simply means £100 of whatever was accepted as money in New York, according to the valuations prevailing in New York.6 Such subtle misunderstandings have led some historians to overestimate the ubiquity of paper money in colonial America.

Means of Payment – Book Credit

While simple “cash-and-carry” transactions sometimes occurred, most purchases involved at least short-term book credit; Henry Laurens wrote that before the Revolution it had been “the practice to give credit for one and more years for 7/8th of the whole traffic” (Burnet, 1923, vol. 2, pp. 490-1). The buyer would receive goods and be debited on the seller’s books for an agreed amount in the local money of account. The debt would be extinguished when the buyer paid the seller either in the local medium of exchange or in equally valued goods or services acceptable to the seller. When it was mutually agreeable, the debt could be and often was paid in ways that nowadays seem very unorthodox – with the delivery of chickens, or a week’s work fixing fences on land owned by the seller. The debt might be paid at one remove, by the buyer fixing fences on land owned by someone to whom the seller was himself indebted. Accounts would then be settled among the individuals involved. Account books testify to the pervasiveness of this system, termed “bookkeeping barter” by Baxter. Baxter examined the accounts of John Hancock and his father Thomas Hancock, both prominent Boston merchants, whose business dealings naturally involved an atypically large amount of cash. Even these gentlemen managed most of their transactions in such a way that no cash ever changed hands (Baxter, 1965; Plummer, 1942; Soltow, 1965, pp. 124-55; Forman, 1969).

An astonishing array of goods and services therefore served by mutual consent at some time or other to extinguish debt. Whether these goods ought all to be classified as “money” is doubtful; they certainly lacked the liquidity and universal acceptability in exchange that ordinarily defines money. At certain times and in certain colonies, however, specific commodities came to be so widely used in transactions that they might appropriately be termed money. Specie, of course, was such a commodity, but its worldwide acceptance as money made it special, so it is convenient to set it aside for a moment and focus on the others.

Means of Payment – Commodity Money

At various times and places in the colonies such items as tobacco, rice, sugar, beaver skins, wampum, and country pay all served as money. These items were generally accorded a special monetary status by various acts of colonial legislatures. Whether the legislative fiat was essential in monetizing these commodities or whether it simply acknowledged the existing state of affairs is open to question. Sugar was used in the British Caribbean, tobacco was used in the Chesapeake, and rice in South Carolina, each being the central product of their respective plantation economies. Wampum signifies the stringed shells used by the Indians as money before the arrival of European settlers. Wampum and beaver skins were commonly used as money in the northern colonies in the early stages of settlement when the fur trade and Indian trade were still mainstays of the local economy (Nettels, 1928, 1934; Fernow, 1893; Massey, 1976; Brock, 1975, pp. 9-18).

Country pay is more complicated. Where it was used, country pay consisted of a hodgepodge of locally produced agricultural commodities that had been monetized by the colonial legislature. Commodities such as Indian corn, beef, and pork were assigned specific monetary values (so many s. per bushel or barrel), and debtors were permitted by statute to pay certain debts with their choice of these commodities at nominal values set by the colonial legislature.7 In some instances country pay was declared a legal tender for all private debts, although contracts explicitly requiring another form of payment might be exempted (Gottfried, 1936; Judd, 1905, pp. 94-96). Sometimes country pay was only a legal tender in payment of obligations to the colonial or town governments. Even where country pay was a legal tender only in payment of taxes, it was often used in private transactions and even served as a unit of account. Probate inventories from colonial Connecticut, where country pay was widely used, are generally denominated in country pay (Main and Main, 1988).8

There were predictable difficulties where commodity money was used. A pound in “country pay” was simply not worth a pound in cash even as that cash was valued locally. The legislature sometimes overvalued agricultural commodities in setting their nominal prices. Even when the legislature’s prices were not biased in favor of debtors, the debtor still had the power to select the particular commodity tendered and had some discretion over the quality of that commodity. In late-seventeenth-century Massachusetts, the rule of thumb used to convert country pay to cash was that three pounds in country pay were worth two pounds cash (Republicæ, 1731, pp. 376, 390).9 Even this formula seems to have overvalued country pay. When a group of men seeking to rent a farm in Connecticut offered Boston merchant Thomas Bannister £22 of country pay in 1700, Bannister hesitated. It appears Bannister wanted to be paid £15 per annum in cash. Country pay was “a very uncertain thing,” he wrote. Some years £22 in country pay might be worth £10, some years £12, but he did not expect to see a day when it would fetch fifteen.10 Savvy merchants such as Bannister paid careful attention to the terms of payment. An unwary trader could easily be cheated. Just such an incident occurs in the comic satirical poem “The Sot-Weed Factor.” Sot-weed is slang for tobacco, and a factor was a person in America representing a British merchant. Set in late seventeenth-century Maryland, the poem is a first-person account of the tribulations and humiliations a newly-arrived Briton suffers while seeking to enter the tobacco trade. The Briton agrees with a Quaker merchant to exchange his trade goods for ten thousand weight of oronoco tobacco in cask and ready to ship. 
When the Quaker fails to deliver any tobacco, the aggrieved factor sues him at the Annapolis court, only to discover that his attorney is a quack who divides his time between pretending to be a lawyer and pretending to be a doctor and that the judges have to be called away from their Punch and Rum at the tavern to hear his case. The verdict?

The Byast Court without delay,
Adjudg’d my Debt in Country Pay:
In Pipe staves, Corn, or Flesh of Boar,
Rare Cargo for the English Shoar.

Thus ruined the poor factor sails away never to return. A footnote to the reader explains “There is a Law in this Country, the Plaintiff may pay his Debt in Country pay, which consists in the produce of the Plantation” (Cooke, 1708).

By the middle of the eighteenth century, commodity money had essentially disappeared in northern port cities, but it still lingered in the hinterlands and plantation colonies. A pamphlet written in Boston in 1740 observed, “Look into our British Plantations, and you’ll see [commodity] Money still in Use, As, Tobacco in Virginia, Rice in South Carolina, and Sugars in the Islands; they are the chief Commodities, used as the general Money, Contracts are made for them, Salaries and Fees of Office are paid in them, and sometimes they are made a lawful Tender at a yearly assigned Rate by publick Authority, even when Silver was promised” (Vance, 1740, CCR III, p. 396). North Carolina was an extreme case. Country pay there continued as a legal tender even in private debts. The system was amended in 1754 and 1764 to require rated commodities to be delivered to government warehouses and be judged of acceptable quality, at which point warehouse certificates were issued to the value of the goods (at mandated, not market prices): these certificates were a legal tender (Bullock, 1969, pp. 126-7, 157).

Means of Payment – Bills of Credit

Cash came in two forms: full-bodied specie coins (usually Spanish or Portuguese) and paper money known as “bills of credit.” Bills of credit were notes issued by provincial governments that were similar in many ways to modern paper money: they were issued in convenient denominations, were often a legal tender in the payment of debts, and routinely passed from man to man in transactions.11 Bills of credit were ordinarily put into circulation in one of two ways. The most common method was for the colony to issue bills to pay its debts. Bills of credit were originally designed as a kind of tax-anticipation scrip, similar to that used by many localities in the United States during the Great Depression (Harper, 1948). Therefore, when bills of credit were issued to pay for current expenditures, a colony would ordinarily levy taxes over the next several years sufficient to call the bills in so that they might be destroyed.12 A second method was for the colony to lend newly printed bills on land security at attractive interest rates. The agency established to make these loans was known as a “land bank” (Thayer, 1953).13 Bills of credit were denominated in the £., s., and d. of the colony of issue, and therefore were usually the only form of money in circulation that was actually denominated in the local unit of account.14

Sometimes even the bills of credit issued in a colony were not denominated in the local unit of account. In 1764 Maryland redeemed its Maryland-pound-denominated bills of credit and in 1767 issued new dollar-denominated bills of credit. Nonetheless Maryland pounds, not dollars, remained the predominant unit of account in Maryland up to the Revolution (Michener and Wright, 2006a, p. 34; Grubb; 2006a, pp. 66-67; Michener and Wright, 2006c, p. 264). The most striking example occurred in New England. Massachusetts, Connecticut, New Hampshire, and Rhode Island all had, long before the 1730s, emitted paper money in bills of credit known as “old tenor” bills of credit, and “old tenor” had become the most commonly-used unit of account in New England. The old tenor bills of all four colonies passed interchangeably and at par with one another throughout New England.

Beginning in 1737, Massachusetts introduced a new kind of paper money known as “new tenor.” The new tenor was ostensibly a monetary reform, though as a reform it ultimately failed; it also served as a way of evading a restriction the Board of Trade had placed on the Governor of Massachusetts, which limited him to emissions of not more than £30,000. The Massachusetts assembly declared each pound of the new tenor bills to be worth £3 in old tenor bills. What actually happened is that old tenor (abbreviated in records of the time as “O.T.”) continued to be the unit of account in New England and, so long as the old bills continued to circulate, a decreasing portion of the medium of exchange. Each new tenor bill was reckoned at three times its face value in old tenor terms. This was just the beginning of the confusion, for yet newer Massachusetts “new tenor” emissions were created, and the original “new tenor” emission became known as the “middle tenor.”15 The new “new tenor” bills emitted by Massachusetts were accounted in old tenor terms at four times their face value. These bills, like the old ones, circulated across colony borders throughout New England. As if this were not complicated enough, New Hampshire, Rhode Island, and Connecticut all created new tenor emissions of their own, and the factors used to convert these new tenor bills into old tenor terms varied across colonies (Davis, 1970; Brock, 1975; McCusker, 1978, pp. 131-137). Connecticut, for instance, had a new tenor emission such that each new tenor bill was worth 3½ times its face value in old tenor (Connecticut, vol. 8, pp. 359-60; Brock, 1975, pp. 45-6). 
“They have a variety of paper currencies in the [New England] provinces; viz., that of New Hampshire, the Massachusetts, Rhode Island, and Connecticut,” bemoaned an English visitor, “all of different value, divided and subdivided into old and new tenors, so that it is a science to know the nature and value of their moneys, and what will cost a stranger some study and application” (Hamilton, 1907, p. 179). Throughout New England, however, Old Tenor remained the unit of account. “The Price of [provisions sold at Market],” a contemporary pamphlet noted, “has been constantly computed in Bills of the old Tenor, ever since the Emission of the middle and new Tenor Bills, just as it was before their Emission, and with no more Regard to or Consideration of either the middle or new Tenor Bills, than if they had never been emitted” (Enquiry, 1744, CCR IV, p. 174). This occurred despite the fact that by 1750 only an inconsiderable portion of the bills of credit in circulation were denominated in old tenor.16

For the most part, bills of credit were fiat money. Although a colony’s treasurer would often consent to exchange these bills for other forms of cash in the treasury, there was rarely a provision in the law stating that holders of bills of credit had a legally binding claim on the government for a fixed sum in specie, and treasurers were sometimes unable to accommodate people who wished to exchange money (Nicholas, 1912, p. 257; The New York Mercury, January 27, 1759, November 24, 1760).17 The form of the bills themselves was sometimes misleading in this respect. It was not uncommon for the bills to be inscribed with an explicit statement that the bill was worth a certain sum in silver. This was often no more than an expression of the assembly’s hope, at the time of issuance, of how the bills would circulate.18 Colonial courts sometimes allowed inhabitants to pay less to royal officials and proprietors by valuing bills of credit used to pay fees, dues, and quit rents according to their “official” rather than actual specie values. (Michener and Wright, 2006c, p. 258, fn. 5; Hart, 2005, pp. 269-71).

Maryland’s paper money was unique. Maryland’s paper money – unlike that of other colonies – gave the possessor an explicit legal claim on a valuable asset. Maryland had levied a tax and invested the proceeds of the tax in London. It issued bills of credit promising a fixed sum in sterling bills of exchange at predetermined dates, to be drawn on the colony’s balance in London. The colony’s accrued balances in London were adequate to fund the redemption, and when redemption dates arrived in 1748 and 1764 the sums due were paid in full so the colony’s pledge was considered credible.

Maryland’s paper money was unique in other ways as well. Its first emission was put into circulation in a novel fashion. Of the £90,000 emitted in 1733, £42,000 was lent to inhabitants, while the other £48,000 was simply given away, at the rate of £1.5 per taxable (McCusker, 1978, pp. 190-196; Brock, 1975, chapter 8; Lester, 1970, chapter 5). Maryland’s paper money was so peculiar that it is unrepresentative of the colonial experience. This was recognized even by contemporaries. Hugh Vance, in the Postscript to his Inquiry into the Nature and Uses of Money, dismissed Maryland as “intirely out of the Question; their Bills being on the Foot of promissory Notes” (Vance, 1740, CCR III, p. 462).

In 1690, Massachusetts was the first colony to issue bills of credit (Felt, 1839, pp. 49-52; Davis, 1970, vol. 1, chapter 1; Goldberg, 2009).19 The bills were issued to pay soldiers returning from a failed military expedition against Quebec. Over time, the rest of the colonies followed suit. The last holdout was Virginia, which issued its first bills of credit in 1755 to defray expenses associated with its entry into the French and Indian War (Brock, 1975, chapter 9). The common denominator here is wartime finance, and it is worthwhile to recognize that the vast majority of the bills of credit issued in the colonies were issued during wartime to pay for pressing military expenditures. Peacetime issues did occur and are in some respects quite interesting as they seem to have been motivated in part by a desire to stimulate the economy (Lester, 1970). However, peacetime emissions are dwarfed by those that occurred in war.20 Some historians enamored of the land bank system, whereby newly emitted bills were lent to landowners in order to promote economic development, have stressed the economic development aspect of colonial emissions – particularly those of Pennsylvania – while minimizing the military finance aspect (Schweitzer, 1989, pp. 313-4). The following graph, however, illustrates the fundamental importance of war finance; the dramatic spike marks the French and Indian War (Brock, 1992, Tables 4, 6).

[Graph: bills of credit in circulation in the colonies, with a dramatic spike during the French and Indian War.]

That bills in circulation peaked in 1760 reflects the fact that Quebec fell in 1759 and Montreal in 1760, so that the land war in North America was effectively over by 1760.

Because bills were disproportionately emitted for wartime finance, it is not surprising that the colonies whose currencies depreciated due to over-issue were those that shared a border with a hostile neighbor – the New England colonies bordering French Canada and the Carolinas bordering Spanish Florida.21 The colonies from New York to Virginia were buffered by their neighbors and therefore issued no more than modest amounts of paper money until they were drawn into the French and Indian War, by which time their economies were large enough to temporarily absorb the issues.

It is important not to confuse the bills of credit issued by a colony with the bills of credit circulating in that colony. “Under the circumstances of America before the war,” a Maryland resident wrote in 1787, “there was a mutual tacit consent that the paper of each colony should be received by its neighbours” (Hanson, 1787, p. 24).22 Between 1710 and 1750, the currencies of Massachusetts, Connecticut, New Hampshire, and Rhode Island passed indiscriminately and at par with one another in everyday transactions throughout New England (Brock, 1975, pp. 35-6). Although not quite so integrated a currency area as New England, the colonies of New York, Pennsylvania, New Jersey, and Delaware each had bills of credit circulating within its neighbors’ borders (McCusker, 1978, pp. 169-70, 181-182). In the early 1760s, Pennsylvania money was the primary medium of exchange in Maryland (Maryland Gazette, September 15, 1763; Hazard, 1852, Eighth Series, vol. VII, p. 5826; McCusker, 1978, p. 193). In 1764 one quarter of South Carolina’s bills of credit circulated in North Carolina and Georgia (Ernst, 1973, p. 106). Where the currencies of neighboring colonies were of equal value, as was the case in New England between 1710 and 1750, bills of credit of neighboring colonies could be credited and debited in book accounts at face value. When this was not the case, as when Pennsylvania, Connecticut, or New Jersey bills of credit were used to pay a debt in New York, an adjustment had to be made to convert these sums to New York money. The conversion was usually based on the par values assigned to Spanish dollars by each colony. Indeed, this was also how merchants generally handled intercolonial exchange transactions (McCusker, 1978, p. 123). For example, on the eve of the Revolution a Spanish dollar was rated at 7 s. 6 d. in Pennsylvania money and at 8 s. in New York money. 
The ratio of eight to seven and a half being equal to 1.06666, Pennsylvania bills of credit were accepted in New York at a 6 and 2/3% advance (Stevens, 1867, pp. 10-11, 18). Connecticut rated the Spanish dollar at 6 s., and because the ratio of eight to six is 1.333, Connecticut bills of credit were accepted at a one third advance in New York (New York Journal, July 13, 1775). New Jersey’s paper money was a peculiar exception to this rule. By the custom of New York’s merchants, New Jersey bills of credit were accepted for thirty years or more at an advance of one penny in the shilling, or 8 and 1/3%, even though New Jersey rated the Spanish dollar at 7 s. 6 d., just as Pennsylvania did. The practice was controversial in New York, and the advance was finally reduced to the “logical” 6 and 2/3% advance by an act of the New York assembly in 1774.23
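The arithmetic behind these advances can be sketched compactly. The snippet below assumes that the par dollar ratings quoted above are the whole story (which actual merchant practice sometimes belied, as the New Jersey custom shows) and computes each advance as the ratio of the two colonies’ ratings:

```python
# Par ratings of the Spanish dollar in pence on the eve of the Revolution
# (1s. = 12d.). These three ratings are the ones quoted in the text.
DOLLAR_RATINGS = {
    "New York": 8 * 12,          # 8s. = 96d.
    "Pennsylvania": 7 * 12 + 6,  # 7s. 6d. = 90d.
    "Connecticut": 6 * 12,       # 6s. = 72d.
}

def advance(bills_from: str, accepted_in: str) -> float:
    """Percent advance at which one colony's bills of credit passed in
    another colony, computed from the two par ratings of the dollar."""
    return (DOLLAR_RATINGS[accepted_in] / DOLLAR_RATINGS[bills_from] - 1) * 100

print(f"Pennsylvania bills in New York: {advance('Pennsylvania', 'New York'):.2f}%")  # 6.67
print(f"Connecticut bills in New York: {advance('Connecticut', 'New York'):.2f}%")    # 33.33
```

The 96/90 ratio gives the 6⅔% advance on Pennsylvania money, and 96/72 gives the one-third advance on Connecticut money.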

Means of Payment – Foreign Specie Coins

Specie coins were the other kind of cash that commonly circulated in the colonies. Few specie coins were minted in the colonies. Massachusetts coined silver “pine tree shillings” between 1652 and the closing of the mint in the early 1680s. This was the only mint of any size or duration in the colonies, although minting of small copper coins and tokens did occur at a number of locations (Jordan, 2002; Mossman, 1993). Colonial coinage is interesting numismatically, but economically it was too slight to be of consequence. Most circulating specie was minted abroad. The gold and silver coins circulating in the colonies were generally of Spanish or Portuguese origin. Among the most important of these coins were the Portuguese Johannes and moidore (more formally, the moeda d’ouro) and the Spanish dollar and pistole. The Johanneses were gold coins, 8 escudos (12,800 reis) in denomination; their name derived from the obverse of the coin, which bore the bust of Johannes V. Minted in Portugal and Brazil, they were commonly known in the colonies as “joes.” The fractional denominations were 4 escudo and 2 escudo coins of the same origin. The 4 escudo (6,400 reis) coin, or “half joe,” was one of the most commonly used coins in the late colonial period. The moidore was another Portuguese gold coin, 4,000 reis in denomination. That these coins were being used as a medium of exchange in the colonies is not so peculiar as it might appear. Raphael Solomon (1976, p. 37) noted that these coins “played a very active part in international commerce, flowing in and out of the major seaports in both the Eastern and Western Hemispheres.” In the late colonial period the mid-Atlantic colonies began selling wheat and flour to Spain and Portugal “for which in return, they get hard cash” (Lydon, 1965; Virginia Gazette, January 12, 1769; Brodhead, 1853, vol. 8, p. 448).

The Spanish dollar and its fractional parts were, in McCusker’s (1978, p. 7) words, “the premier coin of the Atlantic world in the seventeenth and eighteenth centuries.” Well known and widely circulated throughout the world, its preeminence in colonial North America accounts for the fact that the United States uses dollars, rather than pounds, as its unit of account. The Spanish pistole was the Spanish gold coin most often encountered in America. While these coins were the most common, many others also circulated there (Solomon, 1976; McCusker, 1978, pp. 3-12).

Alongside the well-known gold and silver coins were various copper coins, most notably the English half-pence, that served as small change in the colonies. Fractional parts of the Spanish dollar and the pistareen, a small silver coin of base alloy, were also commonly used as change.24

None of these foreign specie coins were denominated in local currency units, however. One needed a rule to determine what a particular coin, such as a Spanish dollar, was worth in the £., s., and d. of local currency. Because foreign specie coins were in circulation long before any of the colonies issued paper money, setting a rating on these coins amounted to picking a numeraire for the economy; that is, it defined what one meant by a pound of local currency. The ratings attached to individual coins were not haphazard: they were designed to reflect the relative weight and purity of the bullion in each coin as well as the ratio of gold to silver prices prevailing in the wider world.

In the early years of colonization these coin values were set by the colonial assemblies (Nettels, 1934, chap. 9; Solomon, 1976, pp. 28-29; John Hemphill, 1964, chapter 3). In 1700 Pennsylvania passed an act raising the rated value of its coins, causing the Governor of Maryland to complain to the Board of Trade of the difficulties this created in Maryland. He sought the Board’s permission for Maryland to follow suit. When the Board investigated the matter it concluded that the “liberty taken in many of your Majesty’s Plantations, to alter the rates of their coins as often as they think fit, does encourage an indirect practice of drawing the money from one Plantation to another, to the undermining of each other’s trade.” In response they arranged for the disallowance of the Pennsylvania act and a royal proclamation to put an end to the practice.25

Queen Anne’s proclamation, issued in 1704, prohibited a Spanish dollar of 17½ dwt. from passing for more than 6 s. in the colonies. Other current foreign silver coins were rated proportionately and similarly prohibited from circulating at a higher value. This particular rating of coins became known as “proclamation money.”26 It might seem peculiar that the proclamation did not dictate that the colonies adopt the same ratings as prevailed in England. The Privy Council, however, had incautiously approved a Massachusetts act passed in 1697 rating Spanish dollars at 6 s., and attorney general Edward Northey felt the act could not be nullified by proclamation. This induced the Board of Trade to adopt the rating of the Massachusetts act.27
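Rating other silver coins “proportionately” meant rating them by silver content. As a rough illustration (assuming coins of equal fineness for simplicity, and using only the proclamation’s 6 s. cap on the 17½ dwt. dollar), the ceiling for any other silver coin follows from its weight:

```python
# Proclamation money (1704): a Spanish dollar of 17.5 dwt. (pennyweight)
# could pass for no more than 6s. (72d.) in the colonies. Other silver
# coins were capped proportionately -- here, by weight alone, assuming
# equal fineness for simplicity.
DOLLAR_WEIGHT_DWT = 17.5
DOLLAR_CAP_PENCE = 6 * 12  # 6s. = 72d.

def max_rating_pence(weight_dwt: float) -> float:
    """Maximum lawful colonial rating, in pence, for a silver coin of
    the given weight in pennyweight, rated in proportion to the dollar."""
    return DOLLAR_CAP_PENCE * weight_dwt / DOLLAR_WEIGHT_DWT

# A hypothetical coin of half the dollar's weight (8.75 dwt.) would be
# capped at 36d., i.e. 3s.
print(max_rating_pence(8.75))
```

The proclamation itself, of course, enumerated ratings for the familiar coins rather than leaving merchants to compute them; the point of the sketch is only that the enumerated values were proportional to silver content.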

Had the proclamation been put into operation, its effects would have been extremely deflationary, because in most colonies coins were already passing at higher rates. When the proclamation reached America, only Barbados attempted to enforce it. In New York, Governor Lord Cornbury suspended its operation and wrote the Board of Trade that he could not enforce it in New York while it was being ignored in neighboring colonies, as New York would be “ruined beyond recovery” if he did so (Brodhead, 1853, vol. 4, pp. 1131-1133; Brock, 1975, chapter 4). A chorus of such responses led the Board of Trade to take the matter to Parliament in hopes of enforcing a uniform compliance throughout America (House of Lords, 1921, pp. 302-3). On April 1, 1708, Parliament passed “An Act for ascertaining the Rates of foreign Coins in her Majesty’s Plantations in America” (Ruffhead, vol. 4, pp. 324-5). The act reiterated the restrictions embodied in Queen Anne’s Proclamation, and declared that anyone “accounting, receiving, taking, or paying the same contrary to the Directions therein contained, shall suffer six Months Imprisonment . . . and shall likewise forfeit the Sum of ten Pounds for every such Offence . . .”

The “Act for ascertaining the Rates of foreign Coins” never achieved its desired aim. In the colonies it was largely ignored, and business continued to be conducted just as if the act had never been passed. Pennsylvania, it was true, went through a show of complying, but even that lapsed after a while (Brock, 1975, chapter 4). What the act did do, however, was push the process of coin rating into the shadows because it was no longer possible to address it in an open way by legislative enactment. Laws that passed through colonial legislatures (certain charter and proprietary colonies excepted) were routinely reviewed by the Privy Council, and if found to be inconsistent with British law, were declared null and void.

Two avenues remained open to alter coin ratings – private agreements among merchants that would not be subject to review in London, and a legislative enactment so stealthy as to slip through review unnoticed. New York was the first to succeed using stealth. In November 1709 it emitted bills of credit “for Tenn thousand Ounces of Plate or fourteen Thousand Five hundred & fourty five Lyon Dollars” (Lincoln, 1894, vol. 1, chap. 207, pp. 695-7). The Lyon dollar was an obscure silver coin that had escaped being explicitly mentioned in the enumeration of allowable values that had accompanied Queen Anne’s proclamation. Because New York had rated the Lyon dollar at 5 s. 6 d. fifteen years earlier, it was generally supposed that that rating was still in force (Solomon, 1976, p. 30). The value of silver implied in the law’s title is 8 s. an ounce – a value higher than allowed by Parliament. Until 1723, New York’s emission acts contained clauses designed to rate an ounce of silver at 8 s. The act in 1714, for instance, tediously enumerated the denominations of the bills to be printed, in language such as “Five Hundred Sixty-eight Bills, of Twenty-five Ounces of Plate, or Ten Pounds value each” (Lincoln, 1894, vol. 1, chap. 280, p. 819). When the Board of Trade finally realized what New York was up to, it was too late: the earlier laws had already been confirmed. When the Board wrote Governor Hunter to complain, he replied, in part, “Tis not in the power of men or angels to beat the people of this Continent out of a silly notion of their being gainers by the Augmentation of the value of Plate” (Brodhead, vol. 5, p. 476). These colony laws were still thought to be in force in the late colonial period. Gaine’s New York Pocket Almanack for 1760 states that “Spanish Silver . . . here ‘tis fixed by Law at 8 s. per Ounce, but is often sold and bought from 9 s. to 9 s. and 3 d.”

In 1753 Maryland also succeeded using stealth, including revised coin ratings inconsistent with Queen Anne’s proclamation in “An Act for Amending the Staple of Tobacco, for Preventing Fraud in His Majesty’s Customs, and for the Limitation of Officer’s Fees” (McCusker, 1978, p. 192).

The most common subterfuge was for a colony’s merchants to meet and agree on coin ratings. Once the merchants agreed on such ratings, the colonial courts appear to have deferred to them, which is not surprising in light of the fact that many judges and legislators were drawn from the merchants’ ranks (e.g. Horle, 1991). These private agreements effectively nullified not only the act of Parliament but also local statutes, such as those rating silver in New York at 8 s. an ounce. Records of many such agreements have survived.28 There is also testimony that these agreements were commonplace. Lewis Morris remarked that “It is a common practice … [for] the merchants to put what value they think fit upon Gold and Silver coynes current in the Plantations.” When the Philadelphia merchants published a notice in the Pennsylvania Gazette of September 16, 1742 enumerating the values they had agreed to put on foreign gold and silver coins, only the brazenness of the act came as a surprise to Morris. “Tho’ I believe by the merchants private Agreements amongst themselves they have allwaies done the same thing since the Existence of A paper currency, yet I do not remember so publick an instance of defying an act of parliament” (Morris, 1993, vol. 3, pp. 260-262, 273). These agreements, when backed by a strong consensus among merchants, seem to have been effective. Decades later, Benjamin Franklin (1959, vol. 14, p. 232) recollected how the agreement that had offended Morris “had a great Effect in fixing the Value and Rates of our Gold and Silver.”

After the New York Chamber of Commerce was founded in 1768, merchant deliberations on these agreements were recorded. During this period, the coin ratings in effect in New York were routinely published in almanacs, particularly Gaine’s New-York pocket almanac. When the New York Chamber of Commerce resolved to change the rating of coins and the minimum allowable weight for guineas, the almanac values changed immediately to reflect those adopted by the Chamber (Stevens, 1867, pp. 56-7, 69).29

[Coin rating table reproduced from The New-York Pocket Almanack for the Year 1771.]

The coin rating table above, reproduced from The New-York Pocket Almanack for the Year 1771, shows how coin rating worked in practice in the late colonial period. (Note the reference to the deliberations of the Chamber of Commerce.) It shows, for instance, that if you tendered a half joe in payment of a debt in Pennsylvania, you would be credited with having paid £3 Pennsylvania money. If the same half joe were tendered in payment of a debt in New York, you would be credited with having paid £3 4 s. New York money. In Connecticut it would have been £2 8 s. Connecticut money.30
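The table’s entries are easiest to compare once each colony’s £/s/d rating is reduced to pence. The sketch below does this for the half joe values quoted above; the helper functions are my own, not part of any source:

```python
# Helpers for £/s/d arithmetic (12d = 1s, 20s = £1). The half joe ratings
# are those quoted from the 1771 almanac table; the function names are mine.

def to_pence(pounds, shillings=0, pence=0):
    """Convert a £/s/d amount into pence."""
    return (pounds * 20 + shillings) * 12 + pence

def fmt(total_pence):
    """Render a number of pence back into £/s/d notation."""
    pounds, rem = divmod(total_pence, 240)
    shillings, pence = divmod(rem, 12)
    return f"£{pounds} {shillings}s {pence}d"

# One half joe, rated in each colony's own money of account:
half_joe = {
    "Pennsylvania": to_pence(3),       # £3
    "New York":     to_pence(3, 4),    # £3 4s
    "Connecticut":  to_pence(2, 8),    # £2 8s
}

for colony, d in half_joe.items():
    print(f"{colony:12s} {fmt(d):10s} = {d}d local money")
```

The same coin thus commanded a different number of local pounds in each colony, which is exactly what a colony-specific rating system implies.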

The colonists possessed no central bank, and colonial treasurers, however willing they might have been to exchange paper for specie, sometimes found themselves without the means to do so. That these coin ratings were successfully maintained for decades on end was a testament to the public’s faith in the bills of credit, which made them willing to voluntarily exchange them for specie at the established rate. Writing in 1786 and attempting to explain why New Jersey’s colonial bills of credit had retained their value, “Eugenio” attributed their success to the fact that they possessed what he called “the means of instant realization at value.” This awkward phrase signified the bills were instantly convertible at par. “Eugenio” went on to explain why:

“It is true that government did not raise a sum of coin and deposit the same in the treasury to exchange the bills on demand; but the faith of the government, the opinion of the people, and the security of the fund formerly by a well-timed and steady policy, went so hand in hand and so concurred to support each other, that the people voluntarily and without the least compulsion threw all their gold and silver, not locking up a shilling, into circulation concurrently with the bills; whereby the whole coin of the government became forthwith upon an emission of paper, a bank of deposit at every man’s door for the instant realization or immediate exchange of his bill into gold or silver. This had a benign and equitable, a persuasive, a satisfactory, and an extensive influence. If any one doubted the validity or price of his bill, his neighbor immediately removed his doubts by exchanging it without loss into gold or silver. If any one for a particular purpose needed the precious metals, his bill procured them at the next door, without a moment’s delay or a penny’s diminution. So high was the opinion of the people raised, that often an advance was given for paper on account of the convenience of carriage. In the market as well as in the payment of debts, the paper and the coin possessed a voluntary, equal, and concurrent circulation, and no special contract was made which should be paid or whether they should be received at a difference. By this instant realization and immediate exchange, the government had all the gold and silver in the community as effectually in their hands as if those precious metals had all been locked up in their treasury. By this realization and exchange they could extend credit to any degree it was required. The people could not be induced to entertain a doubt of their paper, because the government had never failed them in a single instance, either in war or in peace (New Jersey Gazette, January 30, 1786).”

Insofar as colonial bills of credit were convertible on demand into specie at the rated specie value of coins, there is no mystery as to why those bills of credit maintained their value. How merchants maintained and enforced such accords, however, is relatively inscrutable. Some economists are incredulous that private associations of merchants could accomplish the feat. The best evidence on this question can be found in a pamphlet by a disgruntled inhabitant complaining of the actions of a merchants’ association in Antigua (Anon., 1740), which provides a tantalizing glimpse of the methods merchants used.

Means of Payment – Private debt instruments

This leaves private debt instruments, such as bank notes, bills of exchange, notes of hand, and shop notes. It is sometimes asserted that there were no banks in colonial America, but this is something of an overstatement. Several experiments were made, and a few embryonic private banks actually got notes into circulation. Andrew McFarland Davis devoted an entire volume to banking in colonial New England (Davis, 1970, vol. 2; Perkins, 1991). Perhaps the most successful bank of the era was established in South Carolina in 1731; it apparently issued notes totaling £50,000 South Carolina money and operated successfully for a decade.31 However, the banks that did exist did not last long enough, or put enough notes into circulation, for us to be especially concerned about them.

Bills of exchange were similar to checks. A hypothetical example will illustrate how they functioned. The process of creating a bill of exchange began when someone obtained a balance on account overseas (in the case of the colonies, that place was often London). Suppose a Virginia tobacco producer consigned his tobacco to be sold in England, with the sterling proceeds to remain temporarily in the hands of a London merchant. The Virginia planter could then draw on those funds, by writing a bill of exchange payable in London. Suppose further that the planter drew a bill of exchange on his London correspondent, and sold it to a Virginia merchant, who then transmitted it to London to pay a balance due on imported dry goods. When the bill of exchange reached London, the dry goods wholesaler who received it would call on the London merchant holding the funds in order to receive the payment specified in the bill of exchange.

Bills of exchange were widely used in foreign trade, and were the preferred and most common method for paying debts due overseas. Because of the nature of the trade they financed, bills of exchange were usually in large denominations. Also, because bills of exchange were drawn on particular people or institutions overseas, there was an element of risk involved. Perhaps the person drawing the bill was writing a bad check, or perhaps the person on whom the bill was drawn was himself a deadbeat. One needed to be confident of the reputations of the parties involved when purchasing a bill of exchange. Perhaps because of their large denominations and the asymmetric information problems involved, bills of exchange played a limited role as a medium of exchange in the inland economy (McCusker, 1978, especially pp. 20-21).

Small-denomination IOUs, called “notes of hand,” were widespread, and these were typically denominated in local currency units. For the most part, these were not designed to circulate as a medium of exchange. When someone purchased goods from a shopkeeper on credit, the shopkeeper would generally get a “note of hand” as a receipt. In the court records in the Connecticut archives, one can find the case files for countless colonial-era cases where an individual was sued for nonpayment of a small debt.32 The court records generally include a note of hand entered as evidence to prove the debt. Notes of hand were sometimes proffered to third parties in payment of debt, however, particularly if the issuer was a person of acknowledged creditworthiness (Mather, 1691, p. 191). Some individuals of modest means created notes of hand in small denominations and attempted to circulate them as a medium of exchange; in Pennsylvania in 1768, a newspaper account stated that 10% of the cash offered in the retail trade consisted of such notes (Pennsylvania Chronicle, October 12, 1768; Kimber, 1998, p. 53). Indeed, many private banking schemes, such as the Massachusetts merchants’ bank, the New Hampshire merchants’ bank, the New London Society, and the Land Bank of 1740 were modeled on private notes of hand, and each consisted of an association designed to circulate such notes on a large scale. For the most part, however, notes of hand lacked the universal acceptability that would have unambiguously qualified them as money.

Shop notes were “notes of hand” of a particular type and seem to have been especially widespread in colonial New England. The twentieth-century analogue to shop notes would be scrip issued by an employer that could be used for purchases at the company store.33 Shop notes were IOUs of local shopkeepers, redeemable through the shopkeeper. Such an IOU might promise, for example, £6 in local currency value, half in money and half in goods (Weeden, 1891, vol. 2, p. 589; Ernst, 1990). Hugh Vance described the origins of shop notes in a 1740 pamphlet:

“… by the best Information I can have from Men of Credit then living, the Fact is truly this, viz. about the Year 1700, Silver-Money became exceedingly scarce, and the Trade so embarassed, that we begun to go into the Use of Shop-Goods, as the Money. The Shopkeepers told the Tradesmen, who had Draughts upon them from the Merchants for all Money, that they could not pay all in Money (and very truly) and so by Degrees brought the Tradesmen into the Use of taking Part in Shop-Goods; and likewise the Merchants, who must always follow the natural Course of Trade, were forced into the Way of agreeing with Tradesmen, Fishermen, and others; and also with the Shopkeepers, to draw Bills for Part and sometimes for all Shop-Goods (Vance, 1740, CCR III, pp. 390-91).”

Vance’s account seems accurate in all respects save one: merchants played an active role in introducing shop notes into circulation. By the 1740s shop notes had been much abused, and it was disingenuous of Vance (himself a merchant) to suggest that merchants had had the system thrust upon them by shopkeepers. Merchants used shop notes to expedite sales and returns. The merchant might contact a shopkeeper and a shipbuilder. The shipbuilder would build a ship for the merchant, the ship to be sent to England and sold as a way of making returns. In exchange, the merchant would provide the builder with shop notes and the shopkeeper with imported goods. The builder used the shop notes to pay his workers. The shop notes, in turn, were redeemed at the shop of the shopkeeper when presented to him by workers (Boston Weekly Postboy, December 8, 1740). Thomas Fitch tried to interest an English partner in just such a scheme in 1710:

“Realy it’s extream difficult to raise money here, for goods are generally Sold to take 1/2 money & 1/2 goods again out of the buyers Shops to pay builders of Ships [etc?] which is a great advantage in the readier if not higher sale of goods, as well as that it procures the Return; Wherefore if we sell goods to be paid in money we must give long time or they will not medle (Fitch, 1711, to Edward Warner, November 22, 1710).”

Like other substitutes for cash, shop notes were seldom worth their stated values. A 1736 pamphlet, for instance, reported wages to be 6 s. in bills of credit, or 7 s. if paid in shop notes (Anonymous, 1736, p. 143). One reason shop notes failed to remain at par with cash is that shopkeepers often refused to redeem them except with merchandise of their own choosing. Another abuse was to interpret “money” to mean British goods; “half money, half goods” often meant no money at all.34

Controversies

Colonial bills of credit were controversial when they were first issued, and have remained controversial to this day. Those who have wanted to highlight the evils of inflation have focused narrowly on the colonies where the bills of credit depreciated most dramatically – those colonies being New England and the Carolinas, with New England being a special focus because of the wealth of material that exists concerning New England history. When Hillsborough drafted a report for the Board of Trade intended to support the abolition of legal tender paper money in the colonies, he rested his argument on the inflationary experiences of these colonies (printed in Whitehead, 1885, vol. IX, pp. 405-414). Those who have wanted to defend the use of bills of credit in the colonies have focused on the Middle colonies, where inflation was practically nonexistent. This tradition dates back at least to Benjamin Franklin (1959, vol. 14, pp. 77-87), who drafted a reply to the Board of Trade’s report in an effort to persuade Parliament to repeal the Currency Act of 1764. Nineteenth-century authors, such as Bullock (1969) and Davis (1970), tended to follow Hillsborough’s lead, whereas twentieth-century authors, such as Ferguson (1953) and Schweitzer (1987), followed Franklin’s.

Changing popular attitudes towards inflation have helped to rehabilitate the colonists. Whereas inflation in earlier centuries was rare, and even the mild inflation suffered in England between 1797 and 1815 was sufficient to stir a political uproar, the twentieth century became inured to inflation. Even in colonial New England, which was thought to have done a disgraceful job in managing its bills of credit, peacetime inflation between 1711 and 1749 was only about 5% per annum. Inflation during King George’s War was about 35% per annum.35

Nineteenth-century economists were guilty of overgeneralizing based on the unrepresentative inflationary experiences and associated debtor-creditor conflicts that occurred in a few colonies. Some twentieth-century economists, however, have swung too far in the other direction by generalizing on the basis of the success of the system in the Middle colonies and by attributing the benign outcomes there to the fundamental soundness of the system and its sagacious management. It would be closer to the truth, I believe, to note that the virtuous restraint exhibited by the Middle colonies was imposed upon them. Emissions in these colonies were sometimes vetoed by royal authorities and frequently stymied by instructions issued to royal or proprietary governors. The success of the Middle colonies owes much to the simple fact that they did not exert themselves in war to the extent that their New England neighbors did and that they were not permitted to freely issue bills of credit in peacetime.

A recent controversy has developed over the correct answer to the question: why did some bills of credit depreciate, while others did not? Many early writers took it for granted that the price level in a colony would vary proportionally with the number of bills of credit the colony issued. This assumption was mocked by Ernst (1973, chapter 1) and devastated by West (1978). West performed simple regressions relating the quantity of bills of credit outstanding to price indices where such data exist. For most colonies he found no correlation between these variables. This was particularly striking because in the Middle colonies there was a dramatic increase in the quantity of bills of credit outstanding during the French and Indian War, and a dramatic decrease afterwards. Yet this large fluctuation seemed to have little effect on the purchasing power of those bills of credit as measured by prices of bills of exchange and the imperfect commodity price indices we possess. Only in New England in the first half of the eighteenth century did there seem to be a strong correlation between bills of credit outstanding and prices and exchange rates. Officer (2005) examined the New England episode and concluded that the quantity theory provides an adequate explanation in this instance, making the contrast with many other colonies (most notably, the Middle colonies) even more remarkable.

Seizing on West’s results, Bruce Smith suggested that they disproved the quantity theory of money and provided evidence in favor of an alternative theory of money based on theoretical models of Wallace and Sargent, which Smith characterized as the “backing theory.”36 According to Smith (1985a, p. 534), the redemption provisions enacted when bills of credit were introduced into circulation on tax and loan funds were what prevented them from depreciating. “Just as the value of privately issued liabilities depends on the issuers’ balance sheet,” he wrote, “the same is true for government liabilities. Thus issues of money which are accompanied by increases in the (expected) discounted present value of the government’s revenues need not be inflationary.” One obvious problem with this theory is that the New England bills of credit that did depreciate were issued in exactly the same way. Smith’s answer was that the New England colonies administered their tax and loan funds poorly, and that this poor administration accounted for the inflation experienced there.

Others who did not wholly agree with Smith – especially his sweeping refutation of the quantity theory – nonetheless pointed to the redemption provisions in explaining why bills of credit often retained their value (Wicker, 1985; Bernholz, 1988; Calomiris, 1988; Sumner, 1993; Rousseau, 2007). Of those who assigned credit to the redemption provisions, however, only Smith grappled with the key question; namely, why essentially identical redemption provisions failed to prevent inflation elsewhere.

Crediting careful administration of tax and loan funds for the steady value of some colonial currencies, and haphazard administration for the depreciation of others, looks superficially appealing. The experiences of Pennsylvania and Rhode Island, generally thought to be the most and least successful issuers of colonial bills of credit, fit the hypothesis nicely. However, when one examines other cases, the hypothesis breaks down. Connecticut was generally credited with administering her bills of credit very carefully, yet they depreciated in lockstep with those of her New England neighbors for forty years (Brock, 1975, pp. 43-47). Virginia’s bills of credit retained their value even though Virginia’s colonial treasurer was discovered to have embezzled redeemed bills equal to nearly half of Virginia’s total outstanding bills of credit and returned them to circulation (Michener, 1987, p. 247). North Carolina’s bills of credit held their value well in the late colonial period despite tax administration so notoriously corrupt that it led to an armed revolt (Michener, 1987, pp. 248-9; Ernst, 1973, p. 221).

A competing explanation has been offered by Michener (1987, 1988), Brock (1992), McCallum (1992), and Michener and Wright (2006b). According to this explanation, the coin rating system operating in the colonies meant they were effectively on a specie standard with a de facto fixed par of exchange. Provided emissions of paper money did not exceed the amount needed for domestic purposes (“normal real balances,” in McCallum’s terminology), some specie would remain in circulation, prices would remain stable, and the fixed par could be maintained. Where emissions exceeded this bound, specie would disappear from circulation and exchange rates would float freely, no longer tethered to the fixed par. Further emissions would cause inflation.37 This was said to account for inflation in New England after 1712, where specie did, in fact, completely disappear from circulation (Hutchinson, 1936, vol. 2, p. 154; Michener, 1987, pp. 288-94). If this explanation is correct, it would suggest that emissions of bills of credit ought to be offset by specie outflows, ceteris paribus.
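The mechanism can be caricatured in a few lines. In the sketch below, m_star stands in for McCallum’s “normal real balances,” and specie in circulation is the residual between money demanded and bills outstanding; all numbers are hypothetical.

```python
# Stylized version of the fixed-par mechanism: specie in circulation is the
# residual between money demanded for domestic purposes (m_star) and bills
# of credit outstanding. All quantities here are hypothetical.

def equilibrium(bills, m_star, base_price=100):
    """Return (specie in circulation, price level) under a fixed par."""
    if bills <= m_star:
        # Emissions within normal balances: specie fills the gap, the par
        # holds, and the price level stays put.
        return m_star - bills, base_price
    # Emissions beyond normal balances: specie is gone, exchange rates
    # float, and further emissions raise prices proportionally.
    return 0, base_price * bills / m_star

for bills in [50, 100, 150, 200]:
    specie, price = equilibrium(bills, m_star=100)
    print(f"bills={bills:3d}  specie={specie:3d}  price={price:.0f}")
```

Emissions up to m_star displace specie one for one while leaving prices unchanged; beyond that bound the quantity theory takes over, which is the contrast the text draws between the Middle colonies and New England after 1712.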

Critics of the “specie circulated at rated values” explanation have frequently disregarded the ceteris paribus qualification and maintained that the theory implies specie flows always ought to be highly negatively correlated with changes in the quantity of bills of credit. This amounts to assuming the quantity of money demanded per capita in colonial America was nearly constant. If this were a valid test of the theory, one would be forced to reject it, because the specie stock fell little, if at all, in the Middle colonies in 1755-1760 as bills of credit increased, and when bills of credit began to decrease after 1760, specie became scarcer.

The flaw in the critics’ reasoning, in my opinion, is that it rests on three unwarranted assumptions: first, that the demand for money, narrowly defined to mean bills of credit plus specie, was very stable despite the widespread use of bookkeeping barter; second, that the absence of evidence of large interest rate fluctuations is evidence of the absence of large interest rate fluctuations (Smith, 1985b, pp. 1193, 1198; Letwin, 1982, p. 466); and third, that the opportunity cost of holding money is adequately measured by the nominal interest rate.38

With respect to the first point, colonial wars significantly influenced the demand for money. During peacetime, most transactions were handled by means of book credit. During major wars, however, many men served in the militia. Men in military service were paid in cash and taken far from the community in which their creditworthiness was commonly known, reducing both their need for book credit and their ability to obtain it. Moreover, the real possibility that even his civilian customers might soon find themselves in the militia, gone from the local community, possibly forever, would have given a shopkeeper pause before advancing book credit. In each of the major colonial wars there is evidence suggesting an increase in real cash balances that can be attributed to the war’s impact on the book credit system. The increase in real money balances during the French and Indian War and the subsequent decrease can be largely accounted for in this way. With respect to the second point, fluctuations in the money supply are compatible even with a stable demand for money if periods when money is scarce are also periods when interest rates are high, as is also suggested by the historical record.39 It is true that the maximum interest rates specified in colonial usury laws are stable, generally in the range of 6%-8% per annum, often a bit lower late in the colonial era than at its beginning. This has been taken as evidence that colonial interest rates were stable. However, we know that these usury laws were commonly evaded and that market rates were often much higher (Wright, 2002, pp. 19-26).
Some indication of how much higher emerged in the summer of 1768, when the Privy Council unexpectedly struck down New Hampshire’s usury law.40 News of the disallowance did not reach New Hampshire until the end of the year, at which time New Hampshire, having sunk the bills of credit issued to finance the French and Indian War during the five-year interval permitted by the Currency Act of 1751, was in the throes of a liquidity crisis.41 Governor Wentworth reported to the Lords of Trade that “Interest arose to 30 p. Ct. within six days of the repeal of the late Act.”42 By contrast, when cash was plentiful in Pennsylvania at the height of the French and Indian War, Pennsylvania’s “wealthy people were catching at every opportunity of letting out their money on good security, on common interest [that is, seven per cent].”43 With respect to the third point, the received theory that the nominal interest rate measures the opportunity cost of holding real money balances is derived from models in which individuals are free to borrow and lend at the nominal interest rate. Insofar as lenders respected the usury ceilings, borrowers were unable to borrow freely at the nominal interest rate. Recent work on moral hazard and adverse selection suggests that even private, unregulated lenders forced to make loans in an environment characterized by seriously asymmetric information would be wise to ration loans by charging less than market-clearing rates and limiting allowed borrowing. The creditworthiness of individuals was more difficult to determine in colonial times than today, and asymmetric information problems were rife. Under such circumstances, even an unregulated market rate of interest (if we had such data, which we don’t) would understate the opportunity cost of holding money for constrained borrowers.

The debate over why some colonial bills of credit depreciated, while others did not, has spilled over into a related question: how much cash (i.e., paper money plus specie) circulated in the American colonies, and how much of it was in bills of credit as opposed to specie? Clearly, if there was hardly any specie anywhere in colonial America, the concomitant circulation of specie at fixed rates could scarcely account for the stable purchasing power of bills of credit.

Determining how much cash circulated in the colonies is no easy matter, because the amount of specie in circulation is so hard to determine. The issue is further complicated by the fact that the total amount of cash in circulation fluctuated considerably from year to year, depending on such things as the demand for colonial staples and the magnitude of British military expenditure in the colonies (Sachs, 1957; Hemphill, 1964). The mix of bills of credit and specie in circulation was also highly variable. In the Middle colonies – and much of the most contentious debate involves the Middle colonies – the quantity of bills of credit in circulation was very modest (both absolutely and in per-capita terms) before the French and Indian War. The quantity exploded to cover military expenditures during the French and Indian War, and then fell again following 1760, until by the late colonial period, the quantity outstanding was once again very modest. Pennsylvania’s experience is not atypical of the Middle colonies. In 1754, on the eve of the French and Indian War, only £81,500 in Pennsylvania bills of credit were in circulation. At the height of the conflict, in 1760, this had increased to £446,158, but by 1773 the sum had been reduced to only £135,006 (Brock, 1992, Table 6). Any conclusion about the importance of bills of credit in the colonial money supply has to be carefully qualified because it will depend on the year in question.

Traditionally, economic historians have focused their attention on the eve of the Revolution, with a special focus on 1774, because of Alice Hanson Jones’s extensive study of 1774 probate records. Even with the inquiry dramatically narrowed, estimates have varied widely. McCusker and Menard (1985, p. 338), citing Alexander Hamilton for authority, estimated that just before the Revolution the “current cash” totaled 30 million dollars. Of the 30 million dollars, Hamilton said 8 million consisted of specie (27%). On the basis of this authority, Smith (1985a, p. 538; 1988, p. 22) has maintained that specie was a comparatively minor component in the colonial money supply.

Hamilton was arguing in favor of banks when he made this oft-cited estimate, and his purpose in presenting it was to show that the circulation was capable of absorbing a great deal of paper money, which ought to make us wonder whether his estimate might have been biased by his political agenda. Whether biased, or simply misinformed, Hamilton clearly got his facts wrong.

All estimates of the quantity of colonial bills of credit in circulation – including those of Brock (1975, 1992) that have been relied on by recent authors on all sides of the debate – lead inescapably to the conclusion that in 1774 there were very few bills of credit left outstanding, nowhere near the 22 million dollars implied by Hamilton. Calculations along these lines were first performed by Ratchford (1941, pp. 24-25), who estimated the total quantity of bills of credit outstanding in each colony on the eve of the Revolution, added up the local £, s., and d. totals of all the colonies (a true case of adding apples and oranges), converted the sum to dollars by valuing dollars at 6 s. each, and concluded that the total was equal to about $5.16 million.

Ratchford’s method of summing local pounds and then converting to dollars is incorrect because local pounds did not have a uniform value across colonies. Since dollars were commonly rated at more than 6 s., his procedure resulted in an inflated estimate. We can correct this error by using McCusker’s (1978) data on 1774 exchange rates to convert local currency to sterling for each colony, obtain a sum in pounds sterling, and then convert to dollars using the rated value of the dollar in pounds sterling, 4½ s. Four and a half s. was very near the dollar’s value in London bullion markets in 1774, so no appreciable error arises from using the rated value. Doing so reduces Ratchford’s estimate to $3.42 million. Replacing Ratchford’s estimates of currency outstanding in New York, New Jersey, Pennsylvania, Virginia, and South Carolina with apparently superior data published by Brock (1975, 1992) reduces the total to $2.93 million. Even allowing for some imprecision in the data, this simply can’t be reconciled with Hamilton’s apparently mythical $22 million in paper money!
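The correction can be sketched numerically. The colony totals and exchange rates below are hypothetical placeholders, not Brock’s or McCusker’s actual figures; only the two conversion rules (6 s. per dollar versus colony-specific exchange rates plus the rated 4½ s. sterling dollar) come from the text.

```python
# Hypothetical illustration of why Ratchford's method inflates the total.
# The colony figures and exchange rates below are made up for the example.

RATED_DOLLAR_STERLING = 4.5 / 20   # £ sterling per dollar (4 1/2 s.)

colonies = {
    # colony: (bills outstanding in local £, local £ per £100 sterling)
    "A": (100_000, 133.3),
    "B": (60_000, 166.7),
    "C": (40_000, 175.0),
}

# Ratchford's shortcut: sum heterogeneous local pounds, divide at 6 s. per dollar.
naive = sum(local for local, _ in colonies.values()) / (6 / 20)

# Corrected method: local currency -> sterling at each colony's own exchange
# rate, then sterling -> dollars at the rated value.
sterling_total = sum(local / (rate / 100) for local, rate in colonies.values())
corrected = sterling_total / RATED_DOLLAR_STERLING

print(f"naive:     ${naive:,.0f}")
print(f"corrected: ${corrected:,.0f}")  # smaller, since dollars were rated above 6 s.

# The text's specie-share arithmetic: ~$12 million total cash, ~$3 million
# of it in bills of credit, leaves 75% specie.
specie_share = 1 - 3 / 12
print(f"implied specie share: {specie_share:.0%}")
```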

How much current cash was there in the colonies in 1774? Alice Hanson Jones’s extensive research into probate records gives an independent estimate of the money supply. Jones (1980, table 5.2) estimated that per capita cash-holding in the Middle colonies in 1774 was £1.8 sterling, and that the entire money supply of the thirteen colonies was slightly more than 12 million dollars.44 McCallum (1992) proposed another way to estimate total money balances in the colonies. McCallum started with the few episodes where historians generally agree paper money entirely displaced specie, making the total money supply measurable. He used money balances in these episodes as a basis for estimating money balances in other colonies by deriving approximate measures of the variability of money holdings over colonies and over time. Given the starkly different methodologies, it is remarkable that McCallum’s approach yields an answer practically indistinguishable from Jones’s.45

Various contemporary estimates, including estimates by Pelatiah Webster, Noah Webster, and Lord Sheffield, also suggest the total colonial money supply in 1774 was ten to twelve million dollars, mostly in specie (Michener 1988, p. 687; Elliot, 1845, p. 938). If we tentatively accept that the total money supply in the American colonies in 1774 was about twelve million dollars, and that only three million dollars worth of bills of credit remained outstanding, then fully 75% of the prewar money supply must have been in specie.

Even this may be an underestimate. Colonial probate inventories are notoriously incomplete, and the usual presumption is that Jones’s estimates are likely to be downwardly biased. Two examples not involving money illustrate the general problem. In Jones’s collection of inventories, over 20% of the estates did not include any clothes (Lindert, 1981, p. 657). In an independent survey of Surry County, Virginia probate records, Anna Hawley (1987, pp. 27-8) noted that only 34% of the estates listed hoes despite the fact that the region’s staple crops, corn and tobacco, had to be hoed several times a year.

In Jones’s 1774 database an amazing 70% of all estates were devoid of money. While the widespread use of credit made it possible to do without money in most transactions, it is likely some estates contained cash that does not appear in probate inventories. Peter Lindert (1981, p. 658) surmised that “cash was simply allocated informally among survivors even before probate took place.” McCusker and Menard (1985, p. 338, fn. 14) concurred, noting that “cash would have been one of the things most likely to have been distributed outside the usual probate proceedings.” If Jones underestimated cash holdings in 1774, the implication would be that more than 75% of the prewar money supply must have been specie.

That most of the cash circulating in the colonies in 1774 must have been specie seems like an inescapable conclusion. The issue has been clouded, however, by the existence of many contradictory and internally inconsistent estimates in the literature. Smith (1988, p. 22) drew attention to these estimates, using them to defend his contention that specie was relatively unimportant.

The first such estimate was made by Roger Weiss (1970, p. 779), who computed the ratio of paper money to total money in the Middle colonies, using Jones’s probate data to estimate total money balances as has been done here; he arrived at a considerably smaller fraction of specie in the money supply. There is a simple explanation for this puzzling result: Weiss, writing in 1970, based his analysis on Jones’s 1968 dissertation rather than her 1980 book. In her dissertation, Jones (1968, Tables 3 and 4, pp. 50-51) estimated the money supply in the three Middle colonies at £2.0 local currency per free white capita. Since £1 local currency was worth about £0.6 sterling, Weiss began with an estimated total money supply of £1.2 sterling per free white capita (about £1.13 per capita), rather than Jones’s more recent estimate of £1.8 sterling per capita.

Another authority is Letwin (1982, p. 467), who estimated that more than 60% of the money supply of Pennsylvania in 1775 was paper. Letwin used the Historical Statistics of the United States for his money supply data, and a casual back-of-the-envelope estimate that nominal balances in Pennsylvania were £700,000 in 1775, to conclude that 63% of Pennsylvania’s money supply was paper money. However, the data in Historical Statistics of the United States are known to be incorrect: using Letwin’s back-of-the-envelope estimate, but redoing the calculation with Brock’s estimates of paper money in circulation, gives the result that in 1775 only 45.5% of Pennsylvania’s money supply was paper money; for 1774 the figure is 31%.46

That good faith attempts to estimate the stock of specie in the colonies in 1774 have given rise to such wildly varying and inconsistent estimates gives some indication of the task that remains to be accomplished.47 Many hints about how the specie stock varied over time in colonial America can be found in newspapers, legislative records, pamphlets, and correspondence. Organizing and interpreting those fragments of evidence will require great skill and will probably have to be done colony by colony. In addition, if the key to the purchasing power of colonial currency lies in the ratings attached to coins, as I believe it does, then more attention will have to be paid to tracking how those ratings evolved over time. Our knowledge at the moment is very fragmentary, probably because the politics of paper money has so engrossed the attention of historians that few have attached much significance to coin ratings.

Economic historian Farley Grubb has proposed (2003, 2004, 2007) that the composition of the medium of exchange in colonial America and the early Republic can be determined from the unit of account used in arm’s length transactions, such as rewards offered in runaway ads and prices recorded in indentured servant contract registrations. If, for instance, a runaway reward is offered in pounds, shillings and pence, it means (Grubb argues) that colonial or state bills of credit were the medium of exchange used, while dollar rewards in such ads would imply silver. Grubb then uses contract registrations in the early Republic (2003, 2007) and runaway ads in colonial Pennsylvania (2004) to develop time series for hitherto unmeasurable components of the money supply and draws many striking conclusions from them. I believe Grubb is proceeding on a mistaken premise. Reversing Grubb’s procedure and using runaway ads in the early Republic and contract registrations in colonial Pennsylvania yields dramatically different results, which suggests the method is not useful. I have participated in this contentious published debate (see Michener and Wright 2005, 2006a, 2006c and Grubb 2003, 2004, 2006a, 2006b, 2007) and will leave it to the reader to draw his or her own conclusions.

Notes:

1. Beginning in 1767, Maryland issued bills of credit denominated in dollars (McCusker, 1978, p. 194).

2. For a number of years, Georgia money was an exception to this rule (McCusker, 1978, pp. 227-8).

3. Elmer (1869, p. 137). Similarly, historian Robert Shalhope (Shalhope, 2003, pp. 140, 142, 147, 290) documents a Vermont farmer who continued to reckon, at least some of the time, in New York currency (i.e. 8 shillings = $1) well into the 1820s.

4. To clarify: In New York, a dollar was rated at eight shillings, hence one reale, an eighth of a dollar, was one shilling. In Richmond and Boston, the dollar was rated at six shillings, or 72 pence, one eighth of which is 9 pence. In Philadelphia and Baltimore, the dollar was rated at seven shillings six pence, or ninety pence, and an eighth of a dollar would be 11.25 pence.

5. In 1822, for example, P. T. Barnum, then a young man from Connecticut making his first visit to New York, paid too much for a brace of oranges because of confusion over the unit of account. “I was told,” he later related, “[the oranges] were four pence apiece [as Barnum failed to realise, in New York there were 96 pence to the dollar], and as four pence in Connecticut was six cents, I offered ten cents for two oranges, which was of course readily taken; and thus, instead of saving two cents, as I thought, I actually paid two cents more than the price demanded” (Barnum, 1886, p. 18).
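Barnum's mistake can be checked with a quick conversion. The only inputs are the ratings already given in these notes: 96 pence to the dollar in New York (8 s.) and 72 pence to the dollar in Connecticut and the rest of New England (6 s.); "four pence is six cents" was the rounded New England custom.

```python
def pence_to_cents(pence, pence_per_dollar):
    """Convert a price quoted in local pence to federal cents (100 per dollar)."""
    return 100 * pence / pence_per_dollar

# Two oranges at four pence apiece, i.e. eight pence total:
actual_cost = pence_to_cents(8, 96)      # New York rating: about 8.3 cents
barnum_reading = pence_to_cents(8, 72)   # New England rating: about 11.1 cents,
                                         # "twelve cents" by the rounded custom
overpaid = 10 - actual_cost              # Barnum paid 10 cents: roughly 2 cents extra
```

So offering ten cents, Barnum thought he was saving two cents on a twelve-cent price, but was in fact paying about two cents more than the eight-and-a-third cents asked.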

6. One way to see the truth of this statement is to examine colonial records predating the emission of colonial bills of credit. Virginia pounds are referred to long before Virginia issued its first bills of credit in 1755. See, for example, Pennsylvania Gazette, September 20, 1736, quoting Votes of the House of Burgesses in Virginia, August 30, 1736 or the Pennsylvania Gazette, May 29, 1746, quoting a runaway ad that mentions “a bond from a certain Fielding Turner to William Williams, for 42 pounds Virginia currency.” Advertisements in the Philadelphia newspapers in 1720 promise rewards for the return of runaway servants and slaves in Pennsylvania pounds, even though Pennsylvania did not issue its first bills of credit until 1723. The contemporary meaning of “currency” sheds light on otherwise confusing statements, such as an ad in the Pennsylvania Gazette, May 12, 1763, where the advertiser offered a reward for the recovery of £460 “New York currency” that was stolen from him and then parenthetically noted “the greatest part of said Money was in Jersey Bills.”

7. For an example of a complete list, see Felt (1839, pp. 82-83).

8. Further discussion of country pay in Connecticut can be found in Bronson (1865, pp. 23-4).

9. Weiss (1974, pp. 580-85) cites a passage from an 1684 court case that appears to contradict this discount. However, inspecting the court records shows that the initial debt consisted of 34s. 5d. in money to which the court added 17s. 3d. to cover the difference between money and country pay, a ratio of pay to money of exactly 3 to 2 (Massachusetts, 1961, pp. 303-4). Other good illustrations of the divergence of cash and country pay prices can be found in Knight (1935, pp. 40-1) and Judd (1905, pp. 95-6). The multiple price system was not limited to Massachusetts and Connecticut (Coulter, 1944, p. 107).

10. Thomas Bannister to Mr. Joseph Thomson, March 8, 1699/1700 in (Bannister, 1708).

11. In New York, for instance, early issues were legal tender, but the Currency Act of 1764 put a halt to new issues of legal tender paper money; the legal tender status of practically all existing issues expired in 1768. After prolonged and contentious negotiation with imperial authorities, the Currency Act of 1770 permitted New York to issue paper money that was a legal tender in payments to the colonial government, but not in private transactions. New York made its first issue under the terms of the Currency Act of 1770 in early 1771 (Ernst, 1973).

12. Ordinarily, but not always. For instance, in 1731 South Carolina reissued £106,500 in bills of credit without creating any tax fund with which to redeem them (Nettels, 1934, pp. 261-2; Brock, 1975, p. 123). The Board of Trade repeatedly pressured the colony to create a tax fund for this purpose, but without success. That no tax funds had been earmarked to redeem these bills was common knowledge, but it did not make the bills less acceptable as a medium of exchange, or adversely affect their value. The episode contradicts the common supposition that the promise of future redemption played a key role in determining the value of colonial currencies.

13. Once the bills of credit were placed in circulation, no distinction was made between them based on how they were originally issued. It is not as if one could only pay taxes with bills of the first sort, or repay mortgages with bills of the second sort. Many colonies, to save the cost of printing, would reuse worn but serviceable notes. A bill originally issued on loan, upon returning to the colonial treasury, might be reissued on tax funds; often it would have been impossible, even in principle, for an individual to examine the bills in his possession and deduce the funds ostensibly backing them.

14. Late in the seventeenth century Massachusetts briefly operated a mint that issued silver coins denominated in the local unit of account (Jordan, 2002). On the eve of the Revolution, Virginia obtained official permission to have copper coins minted for use in Virginia (Davis, 1970, vol. 1, chapter 2; Newman, 1956).

15. The Massachusetts government, unable to honor redemption promises made when the first new tenor emission was first created, decided in 1742 to revalue these bills from three to one to four to one with old tenor as compensation. When Massachusetts returned to a specie standard, the remaining middle tenor bills were redeemed at four to one (Davis, 1970; McCusker, 1978, p. 133).

16. New and old tenors have led to much confusion. In the Boston Weekly News Letter, July 1, 1742, there is an ad pertaining to someone who mistakenly passed Rhode Island New Tenor in Boston at three to one, when it was supposed to be valued at four to one. Modern day historians have also occasionally been misled. An excellent example can be found in Patterson (1961, p. 27). Patterson believed he had unearthed evidence of outrageous fraud during the Massachusetts currency reform, whereas he had, in fact, simply failed to convert a sum in an official document stated in new tenor terms into appropriate old tenor terms. Sufro (1976, p. 247) following Patterson, made similar accusations based on a similar misunderstanding of New England’s monetary units.

17. That colonial treasurers did not unfailingly provide this service is implicit in statements found in merchant letters complaining of how difficult it sometimes became to convert paper money to specie (Beekman to Evan and Francis Malbone, March 10, 1769, White, 1956, p. 522).

18. Nathaniel Appleton (1748) preached a sermon excoriating the province of Massachusetts Bay for flagrantly failing to keep the promises inscribed on the face of its bills of credit.

19. Goldberg (2009) uses circumstantial evidence to suggest that Massachusetts was engaged in a “monetary ploy to fool the king” when it made its first emissions. In Goldberg’s telling of the tale, the king had been furious about the Massachusetts mint, and officially issuing paper money that was a full legal tender would have been a “colossal mistake” because it would have endangered the colony’s effort to obtain a new charter, which was essential to confirm the land grants the colony had already made. The alleged ploy Goldberg discovered was a provision passed shortly afterwards: “Ordered that all country pay with one third abated shall pass as current money to pay all country’s debts at the same prices set by this court.” Since those with a claim on the Treasury were going to be tendered either paper money or country pay, and since Goldberg interprets this as requiring those creditors to accept either 3 pounds in paper money or 2 pounds in country pay, the provision was, in Goldberg’s estimation, a way of forcing the paper money on the populace at a one third discount. The shortchanging of the public creditors, through some mechanism that to my mind is never adequately explained, was supposedly sufficient to make the new paper money a de facto legal tender.

There are several problems with Goldberg’s analysis. Jordan (2002, pp. 36-45) has recently written the definitive history of the Massachusetts mint, and he minutely reviews the evidence pertaining to the Massachusetts mint and British reaction to it. He concludes that “there was no concerted effort by the king and his ministers to crush the Massachusetts mint.” In 1692 Massachusetts obtained a new charter and passed a law making the bills of credit a legal tender. The new charter required Massachusetts to submit all its laws to London for review, yet the imperial authorities quietly ratified the legal tender law, even though they were fully empowered to veto it, which seems very peculiar if the legal tender status of the bills was as unpopular with the King and his ministers as Goldberg maintains. The smoking gun Goldberg cites appears to me to be no more than a statement of the “three pounds of country pay equals two pounds cash” rule that prevailed in Massachusetts in the late seventeenth century. In his argument, Goldberg tacitly assumes that a pound of country pay was equal in value to a pound of hard money; he observes that the new bills of credit initially circulated at a one third discount (with respect to specie) and that this might have arisen because recipients (according to his interpretation) were offered only two pounds of country pay in lieu of three pounds of bills of credit (Goldberg, p. 1102). However, because country pay itself was worth, at most, two thirds of its nominal value in specie, by Goldberg’s reasoning paper money should have been at a discount of at least five ninths with respect to specie.
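The five-ninths figure in the last sentence follows from multiplying the two discounts; exact fractions make the step explicit.

```python
from fractions import Fraction

# Per Goldberg's reading, creditors were offered 2 pounds of country pay
# in lieu of 3 pounds in bills: a bill was worth 2/3 of its face value in country pay.
country_pay_per_bill = Fraction(2, 3)

# But country pay itself was worth at most 2/3 of its nominal value in specie.
specie_per_country_pay = Fraction(2, 3)

# A bill's specie value under Goldberg's own reasoning: 4/9 of face value,
# i.e. an implied discount of 5/9 -- far deeper than the observed one third.
specie_value = country_pay_per_bill * specie_per_country_pay
implied_discount = 1 - specie_value
```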

The paper money era in Massachusetts brought forth approximately fifty pamphlets and hundreds of newspaper articles and public debates in the Assembly, none of which confirm Goldberg’s inference.

20. The role bills of credit played as a means of financing government expenditures is discussed in Ferguson (1953).

21. Georgia was not founded until 1733, and one reason for its founding was to create a military buffer to protect the Carolinas from the Spanish in Florida.

22. Grubb (2004, 2006a, 2006b) argues that bills of credit did not commonly circulate across colony borders. Michener and Wright (2006a, 2006c) dispute Grubb’s analysis and provide (Michener and Wright 2006a, pp. 12-13, 24-30) additional evidence of the phenomenon.

23. Poor Thomas Improved: Being More’s Country Almanack for … 1768 gives as a rule that “To reduce New-Jersey Bills into York Currency, only add one penny to every shilling, and the Sum is determined.” (McCusker, 1978, pp. 170-71; Stevens, 1867, pp. 151-3, 160-1, 168, 185-6, 296; Lincoln, 1894, vol. 5, Chapter 1654, pp. 638-9.)

24. In two articles, John R. Hanson (1979, 1980) argued that bills of credit were important to the colonial economy because they provided much-needed small denomination money. His analysis, however, completely ignores the presence of half-pence, pistareens, and fractional denominations of the Spanish dollar. The Spanish minted halves, quarters, eighths, and sixteenths of the dollar, which circulated in the colonies (Solomon, 1976, pp. 31-32). For a good introduction to small change in the colonies, see Andrews (1886), Newman (1976), Mossman (1993, pp. 105-142), and Kays (2001).

25. Council of Trade and Plantations to the Queen, November 23, 1703, in Calendar of State Papers, 1702-1703, entry #1299. Brock, 1975, chap. 4.

26. This, it should be noted, is what British authorities meant by “proclamation money.” Since salaries of royal officials, fees, quit rents, etc. were often denominated in proclamation money, colonial courts often found a rationale to attach their own interpretation to “proclamation money” so as to reduce the real value of such salaries and fees. In New York, for example, eight shillings in New York’s bills of credit were ostensibly worth one ounce of silver although by the late colonial period they were actually worth less. This valuation of bills of credit made each seven pounds of New York bills of credit in principle worth six pounds in proclamation money. The New York courts used that fact to establish the rule that seven pounds in New York currency could pay a debt of six pounds proclamation money. This rule allowed New Yorkers to pay less in real terms than was contemplated by the British (Hart, 2005, pp. 269-71).

27. Brock (1975). The text of the proclamation can be found in the Boston News-Letter, December 11, 1704. To be precise, the Proclamation rate was actually in slight contradiction to that in the Massachusetts law, which had rated a piece of eight weighing 17 dwt. at 6 s. See Brock (1975, p. 133, fn. 7).

28. This contention has engendered considerable controversy, but the evidence for it seems to me both considerable and compelling. Apart from evidence cited in the text, see for Massachusetts, Michener (1987, p. 291, fn. 54), Wait Winthrop to Samuel Reade, March 5, 1708 and Wait Winthrop to Samuel Reade, October 22, 1709 in Winthrop (1892, pp. 165, 201); for South Carolina, see South Carolina Gazette, May 14, 1753; August 13, 1744; and Manigault (1969, p. 188); for Pennsylvania, see Pennsylvania Gazette, April 2, 1730, December 3, 1767, February 15, 1775, March 8, 1775; for St. Kitts, see Roberdeau to Hyndman & Thomas, October 16, 1766, in Roberdeau (1771); for Antigua, see Anonymous (1740).

29. The Chamber of Commerce adopted its measure in October 1769, apparently too late in the year to appear in the “1770” almanacs, which were printed and sold in late 1769. The 1771 almanacs, printed in 1770, include the revised coin ratings.

30. Note that the relative ratings of the half joe are aligned with the ratings of the dollar. For example, the ratio of the New York value of the half joe to the Pennsylvania value is 64 s./60 s. = 1.066666, and the ratio of the New York value of the half joe to the Connecticut value is 64 s./48 s. = 1.3333.
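The alignment noted here can be verified directly: for every pair of colonies, the ratio of half-joe ratings should equal the ratio of dollar ratings (8 s. in New York, 7 s. 6 d. in Pennsylvania, 6 s. in Connecticut, as given in the notes above).

```python
# Coin ratings in shillings of local currency.
half_joe = {"New York": 64.0, "Pennsylvania": 60.0, "Connecticut": 48.0}
dollar   = {"New York": 8.0,  "Pennsylvania": 7.5,  "Connecticut": 6.0}

# The half joe's relative ratings track the dollar's relative ratings exactly.
for colony in half_joe:
    joe_ratio    = half_joe["New York"] / half_joe[colony]
    dollar_ratio = dollar["New York"] / dollar[colony]
    assert abs(joe_ratio - dollar_ratio) < 1e-9, colony
```

In other words, the colonies did not rate each coin independently: the half joe was rated at eight times the local rating of the dollar everywhere, so the cross-colony ratios (1.0667 and 1.3333) carry over unchanged.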

31. This bank has been largely overlooked, but is well documented. Letter of a Merchant in South Carolina to Alexander Cumings, Charlestown, May 23, 1730, South Carolina Public Records, Vol XIV, pp. 117-20; Anonymous (1734); Easterby (1951, [March 5, 1736/37] vol. 1, pp. 309-10); Governor Johnson to the Board of Trade in Calendar of State Papers, 1731, entry 488, p. 342; Whitaker (1741, p. 25); and Vance (1740, p. 463).

32. I base this on my own experience reviewing the contents of RG3 Litchfield County Court Files, Box 1 at the Connecticut State Library.

33. Though best documented in New England, Benjamin Franklin (1729, CCR II, p. 340) mentions their use in Pennsylvania.

34. See Douglass (1740, CCR III, pp. 328-329) and Vance (1740, CCR III, pp. 328-329). Douglass and Vance disagreed on all the substantive issues, so that their agreement on this point is especially noteworthy. See also Boston Weekly Newsletter, Feb. 12-19, 1741.

35. Data on New England prices during this period are very limited, but annual data exist for wheat prices and silver prices. Regressing the log of these prices on time yields an annual growth rate of prices approximately that mentioned in the text. The price data leave much to be desired, and the inflation estimates should be understood as a crude characterization. Even so, they show that New England’s peacetime inflation during this era was not so extreme as to shock modern sensibilities.
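The calculation described in this note is an ordinary least-squares regression of log price on time, whose slope is the continuously compounded annual inflation rate. A minimal sketch, with synthetic prices standing in for the actual wheat and silver series:

```python
import math

def annual_growth_rate(years, prices):
    """OLS slope of log(price) on year: the continuously
    compounded annual rate of price growth."""
    logs = [math.log(p) for p in prices]
    xbar = sum(years) / len(years)
    ybar = sum(logs) / len(logs)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(years, logs))
    den = sum((x - xbar) ** 2 for x in years)
    return num / den

# Synthetic example: a price series growing 3 percent per year.
years = list(range(1720, 1740))
prices = [math.exp(0.03 * (y - 1720)) for y in years]
rate = annual_growth_rate(years, prices)
```

With noiseless exponential data the recovered slope is exactly the assumed growth rate; on the actual, noisy colonial series the slope is only the crude characterization the note describes.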

36. Smith (1985a, 1985b). The quantity theory holds that the price level is determined by the supply and demand for money – loosely, how much money is chasing how many goods. Smith’s version of the backing theory is summarized by the passage quoted from his article.

37. John Adams explained this very clearly in a letter written June 22, 1780 to Vergennes (Wharton, vol. 3, p. 811). Adams’s “certain sum” and McCallum’s “normal real balances” are essentially the same, although Adams is speaking in nominal and McCallum in real terms.

A certain sum of money is necessary to circulate among the society in order to carry on their business. This precise sum is discoverable by calculation and reducible to certainty. You may emit paper or any other currency for this purpose until you reach this rule, and it will not depreciate. After you exceed this rule it will depreciate, and no power or act of legislation hitherto invented will prevent it. In the case of paper, if you go on emitting forever, the whole mass will be worth no more than that was which was emitted within the rule.

38. One of the principal observations Smith (1985b, p. 1198) makes in dismissing the possible importance of interest rate fluctuations is that “it is known that sterling bills of exchange did not circulate at a discount.” Sterling bills were payable at a future date, and Smith presumably means that sterling bills should have been discounted if interest made an appreciable difference in their market value. Sterling bills, however, were discounted. These bills were not payable at a particular fixed date, but rather a certain number of days after they were first presented for payment. For example, a bill might be payable “on sixty days sight,” meaning that once the bill was presented (in London, for example, to the person upon whom it was drawn) the person would have sixty days in which to make payment. Not all bills were drawn at the same sight, and sight periods of 30, 60, and 90 days were all common. Bills payable sooner sold at higher prices, and bills could be and sometimes were discounted in London to obtain quicker payment (McCusker, 1978, p. 21, especially fn. 25; David Vanhorne to Nicholas Browne and Co., October 3, 1766, Brown Papers, P-V2, John Carter Brown Library). In the early Federal period many newspapers published extensive prices current that included prices of bills drawn at 30, 60, and 90 days’ sight.

39. Franklin (1729) wrote a tract on colonial currency, in which he maintained as one of his propositions that “A great Want of Money in any Trading Country, occasions Interest to be at a very high Rate.” An anonymous referee warned that when colonists complained of a “want of money,” they were not complaining of a lack of a circulating medium per se, but were expressing a desire for more credit at lower interest rates. I do not entirely agree with the referee. I believe many colonists, like Franklin, reasoned like modern-day Keynesians, and believed high interest rates and scarce credit were caused by an inadequate money supply. For more on this subject, see Wright (2002, chapter 1).

40. Public Record Office, CO 5/ 947, August 13, 1768, pp. 18-23.

41. New Hampshire Gazette and Historical Chronicle, January 13, 1769.

42. Public Record Office, Wentworth to Hillsborough, CO 5/ 936, July 3, 1769.

43. Pennsylvania Chronicle, and Universal Advertiser, 28 December 1767.

44. This should be understood to be paper money and specie equal in value to 12 million dollars, not 12 million Spanish dollars. The fraction of specie in the money supply can’t be directly estimated from probate records. Jones (1980, p. 132) found that “whether the cash was in coin or paper was rarely stated.”

45. McCallum deflated money balances by the free white population rather than the total population. Using population estimates to put the numbers on a comparable basis reveals how close McCallum’s estimates are to those of Jones. For example, McCallum’s estimate for the Middle colonies, converted to a per-capita basis, is approximately £1.88 sterling.

46. This incident illustrates how mistakes about colonial currency are propagated and seem never to die out. Henry Phillips’s 1865 book presented data on Pennsylvania bills of credit outstanding. One of his major “findings” was that Pennsylvania retired only £25,000 between 1760 and 1769. This was a mistake: Brock (1992, table 6) found £225,247 had been retired over the same period. Because of the retirements Phillips missed, he overestimated the quantity of Pennsylvania bills of credit in circulation in the late colonial period by 50 to 100%. Lester (1939, pp. 88, 108) used Phillips’s series; Ratchford (1941) obtained his data from Lester. Through Ratchford, Phillips’s series found its way into Historical Statistics of the United States.

47. Benjamin Allen Hicklin (2007) maintains that generations of historians have exaggerated the scarcity of specie in seventeenth and early eighteenth century Massachusetts. Hicklin’s analysis illustrates the unsettled state of our knowledge about colonial specie stocks.

References:

Adams, John Q. “Report upon Weights and Measures.” Reprinted in The North American Review, Boston: Oliver Everett, vol. 14 (New Series, Vol. 5) (1822), pp. 190-230.

Adler, Simon L. Money and Money Units in the American Colonies, Rochester NY: Rochester Historical Society, 1900.

Andrew, A. Piatt. “The End of the Mexican Dollar.” Quarterly Journal of Economics, vol. 18, no. 3 (1904), pp. 321-56.

Andrews, Israel W. “McMaster on our Early Money,” Magazine of Western History, vol. 4 (1886), pp. 141-52.

Anonymous. An Essay on Currency, Charlestown, South Carolina: Printed and sold by Lewis Timothy, 1734.

Anonymous. Two Letters to Mr. Wood on the Coin and Currency in the Leeward Islands, &c. London: Printed for J. Millan, 1740.

Anonymous. “The Melancholy State of this Province Considered,” Boston, 1736, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. III, pp. 135-147.

Appleton, Nathaniel. The Cry of Oppression, Boston: J. Draper, 1748.

Bannister, Thomas. Thomas Bannister letter book, 1695-1708, MSS, Newport Historical Society, Newport, RI.

Barnum, Phineas T. The Life of P.T. Barnum, Buffalo: The Courier Company Printers, 1886.

Baxter, William. The House of Hancock, New York: Russell and Russell, Inc., 1965.

Bernholz, Peter. “Inflation, Monetary Regime and the Financial Asset Theory of Money,” Kyklos, vol. 41, fasc. 1 (1988), pp. 5-34.

Brodhead, John R. Documents Relative to the Colonial History of the State of New York, Albany, NY: Weed Parsons, Printers, 1853.

Brock, Leslie V. Manuscript for a book on Currency, Brock Collection, Accession number 10715, microfilm reel #M1523, Alderman Library special collections, University of Virginia, circa 1956. This book was to be the sequel to Currency of the American Colonies, carrying the story to 1775.

Brock, Leslie V. The Currency of the American Colonies, 1700-1764, New York: Arno Press, 1975.

Brock, Leslie V. “The Colonial Currency, Prices, and Exchange Rates,” Essays in History, vol. 34 (1992), 70-132. This article contains the best available data on colonial bills of credit in circulation.

Bronson, Henry. “A Historical Account of Connecticut Currency, Colonial Money, and Finances of the Revolution,” Printed in New Haven Colony Historical Papers, New Haven, vol. 1, 1865.

Bullock, Charles J. Essays on the Monetary History of the United States, New York: Greenwood Press, 1969.

Burnett, Edmund C. Letters to Members of the Continental Congress, Carnegie Institution of Washington Publication no. 299, Papers of the Dept. of Historical Research, Gloucester, MA: P. Smith, 1963.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental,” Journal of Economic History, vol. 48 (1988), pp. 47-68.

Cooke, Ebenezer. The Sot-weed Factor Or, A Voyage To Maryland. A Satyr. In which Is describ’d, the laws, government, courts And constitutions of the country, and also the buildings, feasts, frolicks, entertainments And drunken humours of the inhabitants of that part of America. In burlesque verse, London: B. Bragg, 1708.

Connecticut. Public Records of the Colony of Connecticut [1636-1776], Hartford CT: Brown and Parsons, 1850-1890.

Coulter, Calvin Jr. The Virginia Merchant, Ph. D. dissertation, Princeton University, 1944.

Davis, Andrew McFarland. Currency and Banking in the Province of the Massachusetts Bay, New York: Augustus M. Kelley, 1970.

Douglass, William. “A Discourse concerning the Currencies of the British Plantations in America &c.” Boston, 1739, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. III, pp. 307-356.

Easterby, James H., et al. The Journal of the Commons House of Assembly, Columbia: Historical Commission of South Carolina, 1951-.

Elliot, Jonathan. The Funding System of the United States and of Great Britain, Washington, D.C.: Blair and River, 1845.

Elmer, Lucius Q. C. History of the Early Settlement and Progress of Cumberland County, New Jersey; and of the Currency of this and the Adjoining Colonies. Bridgeport, N.J.: George F. Nixon, Publisher, 1869.

Enquiry into the State of the Bills of Credit of the Province of the Massachusetts-Bay in New-England: In a Letter from a Gentleman in Boston to a Merchant in London. Boston, 1743/4, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. IV, pp. 149-209.

Ernst, Joseph A. Money and Politics in America, 1755-1775, Chapel Hill, NC: University of North Carolina Press, 1973.

Ernst, Joseph A. “The Labourers Have been the Greatest Sufferers; the Truck System in Early Eighteenth-Century Massachusetts,” in Merchant Credit and Labour Strategies in Historical Perspective, Rosemary E. Ommer, ed., Frederickton, New Brunswick: Acadiensis Press, 1990.

Felt, Joseph B. Historical Account of Massachusetts Currency. New York: Burt Franklin, 1968, reprint of 1839 edition.

Ferguson, E. James. “Currency Finance, An Interpretation of Colonial Monetary Practices,” William and Mary Quarterly, vol. 10, no. 2 (April 1953), pp. 153-180.

Fernow, Berthold. “Coins and Currency in New-York,” The Memorial History of New York, New York, 1893, vol. 4, pp. 297-343.

Fitch, Thomas. Thomas Fitch letter book, 1703-1711, MSS, American Antiquarian Society, Worcester, MA.

Forman, Benno M. “The Account Book of John Gould, Weaver, of Topsfield, Massachusetts,” Essex Institute Historical Collections, vol. 105, no. 1 (1969), pp. 36-49.

Franklin, Benjamin. “A Modest Enquiry into the Nature and Necessity of a Paper Currency,” Philadelphia, 1729, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. II, p. 340.

Franklin, Benjamin, The Papers of Benjamin Franklin, Leonard W. Labaree (ed.), New Haven, CT: Yale University Press, 1959.

Goldberg, Dror. “The Massachusetts Paper Money of 1690,” Journal of Economic History, vol. 69, no. 4 (2009), pp. 1092-1106.

Gottfried, Marion H. “The First Depression in Massachusetts,” New England Quarterly, vol. 9, no. 4 (1936), pp. 655-678.

Great Britain. Public Record Office. Calendar of State Papers, Colonial Series, London: Her Majesty’s Stationery Office, 44 vols., 1860-1969.

Grubb, Farley W. “Creating the U.S. Dollar Currency Union, 1748-1811: A Quest for Monetary Stability or a Usurpation of State Sovereignty for Personal Gain?” American Economic Review, vol. 93, no. 5 (2003), pp. 1778-98.

Grubb, Farley W. “The Circulating Medium of Exchange in Colonial Pennsylvania, 1729-1775: New Estimates of Monetary Composition, Performance, and Economic Growth,” Explorations in Economic History, vol. 41, no. 4 (2004), pp. 329-360.

Grubb, Farley W. “Theory, Evidence, and Belief—The Colonial Money Puzzle Revisited: Reply to Michener and Wright.” Econ Journal Watch, vol. 3, no. 1 (2006a), pp. 45-72.

Grubb, Farley W. “Benjamin Franklin and Colonial Money: A Reply to Michener and Wright—Yet Again.” Econ Journal Watch, vol. 3, no. 3 (2006b), pp. 484-510.

Grubb, Farley W. “The Constitutional Creation of a Common Currency in the U.S.: Monetary Stabilization versus Merchant Rent Seeking.” In Lars Jonung and Jurgen Nautz, eds., Conflict Potentials in Monetary Unions, Stuttgart, Franz Steiner Verlag, 2007, pp. 19-50.

Hamilton, Alexander. Hamilton’s Itinerarium, Albert Bushnell Hart (ed.), St. Louis, MO: William Bixby, 1907.

Hanson, Alexander C. Remarks on the proposed plan of an emission of paper, and on the means of effecting it, addressed to the citizens of Maryland, by Aristides, Annapolis: Frederick Green, 1787.

Hanson, John R., II. “Money in the Colonial American Economy: An Extension,” Economic Inquiry, vol. 17 (April 1979), pp. 281-86.

Hanson, John R., II. “Small Notes in the American Economy,” Explorations in Economic History, vol. 17 (1980), pp. 411-20.

Harper, Joel W. C. Scrip and Other Forms of Local Money, Ph.D. dissertation, University of Chicago, 1948.

Hart, Edward H. Almost a Hero: Andrew Elliot, the King’s Moneyman in New York, 1764-1776. Unionville, N.Y.: Royal Fireworks Press, 2005.

Hawley, Anna. “The Meaning of Absence: Household Inventories in Surry County, Virginia, 1690-1715,” in Peter Benes (ed.) Early American Probate Inventories, Dublin Seminar for New England Folklore: Annual Proceedings, 1987.

Hazard, Samuel et al. (eds.). Pennsylvania Archives, Philadelphia: Joseph Severns, 1852.

Hemphill, John, II. Virginia and the English Commercial System, 1689-1733, Ph.D. dissertation, Princeton University, 1964.

Horle, Craig et al. (eds.). Lawmaking and Legislators in Pennsylvania: A Biographical Dictionary. Philadelphia: University of Pennsylvania Press, 1991-.

House of Lords. The Manuscripts of the House of Lords, 1706-1708, Vol. VII (New Series), London: His Majesty’s Stationery Office, 1921.

Hutchinson, Thomas. The History of the Province of Massachusetts Bay, Cambridge, MA: Harvard University Press, 1936.

Jones, Alice Hanson. Wealth Estimates for the American Middle Colonies, 1774, Ph.D. dissertation, University of Chicago, 1968.

Jones, Alice Hanson. Wealth of a Nation to Be, New York: Columbia University Press, 1980.

Jordan, Louis. John Hull, the Mint and the Economics of Massachusetts Coinage, Lebanon, NH: University Press of New England, 2002.

Judd, Sylvester. History of Hadley, Springfield, MA: H.R. Huntting & Co., 1905.

Kays, Thomas A. “When Cross Pistareens Cut their Way through the Tobacco Colonies,” The Colonial Newsletter, April 2001, pp. 2169-2199.

Kimber, Edward. Itinerant Observations in America (Kevin J. Hayes, ed.), Newark, DE: University of Delaware Press, 1998.

Knight, Sarah K. The Journal of Madam Knight, New York: Peter Smith, 1935.

Lester, Richard A. Monetary Experiments: Early American and Recent Scandinavian, New York: Augustus Kelley, 1970.

Letwin, William. “Monetary Practice and Theory of the North American Colonies during the 17th and 18th Centuries,” in Barbagli Bagnoli (ed.), La Moneta Nell’economia Europea, Secoli XIII-XVIII, Florence, Italy: Le Monnier, 1981, pp. 439-69.

Lincoln, Charles Z. The Colonial Laws of New York, vol. V, Albany: James B. Lyon, State Printer, 1894.

Lindert, Peter H. “An Algorithm for Probate Sampling,” Journal of Interdisciplinary History, vol. 11 (1981).

Lydon, James G. “Fish and Flour for Gold: Southern Europe and the Colonial American Balance of Payments,” Business History Review, 39 (Summer 1965), pp. 171-183.

Main, Gloria T. and Main, Jackson T. “Economic Growth and the Standard of Living in Southern New England, 1640-1774,” Journal of Economic History, vol. 48 (March 1988), pp. 27-46.

Manigault, Peter. “The Letterbook of Peter Manigault, 1763-1773,” Maurice A. Crouse (ed.), South Carolina Historical Magazine, vol. 70, no. 3 (July 1969), pp. 177-95.

Massachusetts. Courts (Hampshire Co.). Colonial justice in western Massachusetts, 1639-1702; the Pynchon court record, an original judges’ diary of the administration of justice in the Springfield courts in the Massachusetts Bay Colony. Edited by Joseph H. Smith. Cambridge: Harvard University Press, 1961.

Massey, J. Earl. “Early Money Substitutes,” in Eric P. Newman and Richard G. Doty (eds.), Studies on Money in Early America, New York: American Numismatic Society, 1976, pp. 15-24.

Mather, Cotton. “Some Considerations on the Bills of Credit now passing in New-England,” Boston, 1691, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. I, pp. 189-95.

McCallum, Bennett. “Money and Prices in Colonial America: A New Test of Competing Theories,” Journal of Political Economy, vol. 100 (1992), pp. 143-61.

McCusker, John J. Money and Exchange in Europe and America, 1600-1775: A Handbook, Williamsburg, VA: University of North Carolina Press, 1978.

McCusker, John J. and Menard, Russell R. The Economy of British America, 1607-1789, Chapel Hill, N.C.: University of North Carolina Press, 1985.

Michener, Ronald. “Fixed Exchange Rates and the Quantity Theory in Colonial America,” Carnegie-Rochester Conference Series on Public Policy, vol. 27 (1987), pp. 245-53.

Michener, Ron. “Backing Theories and the Currencies of Eighteenth-Century America: A Comment,” Journal of Economic History, 48 (1988), pp. 682-92.

Michener, Ronald W. and Robert E. Wright. “State ‘Currencies’ and the Transition to the U.S. Dollar: Clarifying Some Confusions,” American Economic Review, vol. 95, no. 3 (2005), pp. 682-703.

Michener, Ronald W. and Robert E. Wright. “Miscounting Money of Colonial America,” Econ Journal Watch, vol. 3, no. 1 (2006a), pp. 4-44.

Michener, Ronald W. and Robert E. Wright. “Development of the U.S. Monetary Union,” Financial History Review, vol. 13, no. 1 (2006b), pp. 19-41.

Michener, Ronald W. and Robert E. Wright. “Farley Grubb’s Noisy Evasions on Colonial Money: A Rejoinder,” Econ Journal Watch, vol. 3, no. 2 (2006c), pp. 1-24.

Morris, Lewis. The Papers of Lewis Morris, Eugene R. Sheridan (ed.), Newark, NJ: New Jersey Historical Society, 1993.

Mossman, Philip L. Money of the American Colonies and Confederation, New York: American Numismatic Society, 1993, pp. 105-142.

Nettels, Curtis P. “The Beginnings of Money in Connecticut,” Transactions of the Wisconsin Academy of Sciences, Arts, and Letters, vol. 23, (January 1928), pp. 1-28.

Nettels, Curtis P. The Money Supply of the American Colonies before 1720, Madison: University of Wisconsin Press, 1934.

Newman, Eric P. “Coinage for Colonial Virginia,” Numismatic Notes and Monographs, No. 135, New York: The American Numismatic Society, 1956.

Newman, Eric P. “American Circulation of English and Bungtown Halfpence,” in Eric P. Newman and Richard G. Doty (eds.) Studies on Money in Early America, New York: The American Numismatic Society, 1976, pp. 134-72.

Nicholas, Robert C. “Paper Money in Colonial Virginia,” The William and Mary Quarterly, vol. 20 (1912), pp. 227-262.

Officer, Lawrence H. “The Quantity Theory in New England, 1703-1749: New Data to Analyze an Old Question,” Explorations in Economic History, vol. 42, no. 1 (2005), pp. 101-121.

Patterson, Stephen Everett. Boston Merchants and the American Revolution to 1776, Master’s thesis, University of Wisconsin, 1961.

Phillips, Henry. Historical Sketches of the Paper Currency of the American Colonies, original 1865, reprinted New York: Burt Franklin, 1969.

Plummer, Wilbur C. “Consumer Credit in Colonial Pennsylvania,” The Pennsylvania Magazine of History and Biography, LXVI (1942), pp. 385-409.

Ratchford, Benjamin U. American State Debts, Durham, N.C.: Duke University Press, 1941.

Reipublicæ, Amicus. “Trade and Commerce Inculcated; in a Discourse” (1731). Reprinted in Andrew McFarland Davis, Colonial Currency Reprints, vol. 2, pp. 360-428.

Roberdeau, Daniel. Daniel Roberdeau letter book, 1764-1771, MSS, Pennsylvania Historical Society, Philadelphia, PA.

Rousseau, Peter L. “Backing, the Quantity Theory, and the Transition to the U.S. Dollar, 1723-1850,” American Economic Review, vol. 97, no. 2 (2007), pp. 266-270.

Ruffhead, Owen (ed.). The Statutes at Large, from the Magna Charta to the End of the last Parliament, 1761, 18 vols., London: Mark Basket, 1763-1800.

Sachs, William S. The Business Outlook in the Northern Colonies, 1750-1775, Ph.D. dissertation, Columbia University, 1957.

Schweitzer, Mary M. Custom and Contract: Household, Government, and the Economy in Colonial Pennsylvania, New York: Columbia University Press, 1987.

Schweitzer, Mary M. “State-Issued Currency and the Ratification of the U.S. Constitution,” Journal of Economic History, 49 (1989), pp. 311-22.

Shalhope, Robert E. A Tale of New England: the Diaries of Hiram Harwood, Vermont Farmer, 1810–1837, Baltimore: Johns Hopkins University Press, 2003.

Smith, Bruce. “American Colonial Monetary Regimes: The Failure of the Quantity Theory and Some Evidence in Favor of an Alternate View,” The Canadian Journal of Economics, 18 (1985a), pp. 531-64.

Smith, Bruce. “Some Colonial Evidence on Two Theories of Money: Maryland and the Carolinas,” Journal of Political Economy, 93 (1985b), pp. 1178-1211.

Smith, Bruce. “The Relationship between Money and Prices: Some Historical Evidence Reconsidered,” Federal Reserve Bank of Minneapolis Quarterly Review, vol. 12, no. 3 (1988), pp. 19-32.

Solomon, Raphael E. “Foreign Specie Coins in the American Colonies,” in Eric P. Newman (ed.), Studies on Money in Early America, New York: The American Numismatic Society, 1976, pp. 25-42.

Soltow, James H. The Economic Role of Williamsburg, Charlottesville, VA: University of Virginia Press, 1965.

South Carolina. Public Records of South Carolina, manuscript transcripts of the South Carolina material in the British Public Record office, at Historical Commission of South Carolina.

Stevens, John A., Jr. Colonial Records of the New York Chamber of Commerce, 1768-1784, New York: John F. Trow & Co., 1867.

Sufro, Joel A. Boston in Massachusetts Politics, 1730-1760, Ph.D. dissertation, University of Wisconsin, 1976.

Sumner, Scott. “Colonial Currency and the Quantity Theory of Money: A Critique of Smith’s Interpretation,” Journal of Economic History, 53 (1993), pp. 139-45.

Thayer, Theodore. “The Land Bank System in the American Colonies,” Journal of Economic History, vol. 13 (Spring 1953), pp. 145-59.

Vance, Hugh. An Inquiry into the Nature and Uses of Money, Boston, 1740, reprinted in Andrew McFarland Davis, Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. III, pp. 365-474.

Weeden, William B. Economic and Social History of New England, Boston, MA: Houghton, Mifflin, 1891.

Weiss, Roger. “Issues of Paper Money in the American Colonies, 1720-1774,” Journal of Economic History, 30 (1970), pp. 770-784.

West, Robert Craig. “Money in the Colonial American Economy,” Economic Inquiry, vol. 16 (1978), pp. 1-15.

Wharton, Francis (ed.). The Revolutionary Diplomatic Correspondence of the United States, Washington, D.C.: Government Printing Office, 1889.

Whitaker, Benjamin. The Chief Justice’s Charge to the Grand Jury for the Body of this Province, Charlestown, South Carolina: Printed by Peter Timothy, 1741.

White, Phillip L. Beekman Mercantile Papers, 1746-1799, New York: New York Historical Society, 1956.

Whitehead, William A. et al. (eds.). Documents relating to the colonial, revolutionary and post-revolutionary history of the State of New Jersey, Newark: Daily Advertising Printing House, 1880-1949.

Wicker, Elmus. “Colonial Monetary Standards Contrasted: Evidence from the Seven Years War,” Journal of Economic History, 45 (1985), pp. 869-84.

Winthrop, Wait. “Winthrop Papers,” Collections of the Massachusetts Historical Society, Series 6, vol. 5, Boston: Massachusetts Historical Society, 1892.

Wright, Robert E. Hamilton Unbound: Finance and the Creation of the American Republic, Westport, Connecticut: Greenwood Press, 2002.

Citation: Michener, Ron. “Money in the American Colonies”. EH.Net Encyclopedia, edited by Robert Whaples. June 8, 2003, revised January 13, 2011. URL http://eh.net/encyclopedia/money-in-the-american-colonies/