D. The Path to Utopia, 1945-1973

XIX. Present at the Creation




A. Political Philosophy



1. What would replace laissez faire?

The Great Depression and World War II together ended the idea of laissez faire: the idea that




  • the government should keep its hands off of the economy

  • limiting government to enforcement of contracts and protection of property rights would necessarily generate both rapid economic growth and an acceptable distribution of income

  • government “intervention” in the economy beyond the role of a night-watchman state was guilty until proven innocent.

The idea of laissez faire had in fact been dying for a while. Adam Smith had been its greatest and most creative exponent, under the name of the “system of natural liberty.” But even he had made exceptions: exceptions for public works on a scale that private individuals were unlikely to undertake, exceptions for public education to counteract the stultifying effects of the division of labor, and most of all an exception for the British Navigation Acts—for “defense is more important than opulence.”1


The doctrine reached its apogee in Britain during the “hungry [18]40s,” when economists specialized in explaining how this or that government program—the provision of food to victims of Ireland’s potato famine, or restrictions on the length of the working day, or aid to unemployed handloom weavers—would inevitably backfire.2 But by the 1920s economists had become proficient at the analysis of external economies and natural monopolies, of situations in which laissez faire—the system of natural liberty—was not the way to reach the best of all possible worlds.
Thus the question, “one of the finest problems in legislation… to determine what the state ought to take upon itself to direct by the public wisdom, and what it ought to leave with as little interference as possible to individual exertion,”3 had to be “settle[d]… on its merits in detail,” for:
It is not true that individuals possess a prescriptive “natural liberty” in their economic activities. There is no “compact” conferring perpetual rights on those who have or on those who acquire. The world is not so governed from above that private and social interest always coincide. It is not so managed here below that in practice they coincide. It is not a correct deduction from the principles of economics that enlightened self-interest always operates in the public interest. Nor is it true that self-interest generally is enlightened; more often individuals acting separately to promote their own ends are too ignorant or too weak to attain even these…4
And by the late 1930s the case “on its merits in detail” for laissez faire—for having the government take a minimal role in the economy and leave it to the private sector to sort things out—seemed very, very weak indeed.
If you had told anyone before 1914 or even before 1930 that by the end of the 1930s laissez faire would have been dead as a doctrine for guiding economic policy, they would almost inevitably have jumped to the conclusion that the end of laissez faire meant the arrival of socialism: the end of the market economy, and its replacement by a socialized, militarized economy in which the government owned everything of value and directed who should go where and work on what.
Had not the years leading up to 1929 seen the increased monopolization and concentration of the economy? Were not the largest firms in 1929 bigger than whole economies had been half a century earlier? Did not major investment banking firms like J.P. Morgan and Company (in the U.S.), the Deutsche Bank (in Germany), or the Yasuda zaibatsu (in Japan) exercise a remarkable amount of command and control over the economy’s large-scale investment decisions?5 As Vladimir Lenin had written during World War I:
When a large enterprise… on the basis of exact computation of mass data, organizes according to plan the supply of primary raw materials to the extent of two-thirds or three-fourths of all that is necessary for tens of millions of people; when the raw materials are transported to the most suitable place of production, sometimes hundreds or thousands of miles away, in a systematic and organized manner; when a single center directs all the successive stages of work… then it becomes evident that we have socialization of production… that private economic relations and private property relations constitute a shell which is no longer suitable for its contents… [and] which will inevitably be removed…6
Lenin—and many others—saw socialism already half-built in the large organizations of vertically-integrated manufacturing firms and in the loose financial empires of bankers: the new economy in the womb of the old.
And there was no doubt by the end of the 1930s that laissez faire was dead. Its advocates had promised that the market economy could deliver. And in the 1930s it had not. All factions that sought to achieve political power after the Great Depression agreed on one thing: that the days of the night-watchman state and the gold standard were over, and that they would do a better job in the future than the order that had led to the Great Depression.
A hard laissez-faire right continued to exist, and continued to argue that once the government ventured outside the limited bounds of enforcement of contracts and protection of property, iron laws of power would push it to totalitarian extremes that would destroy all human liberty, leaving universal serfdom.7 Its proponents even argued that the Great Depression showed not the bankruptcy of laissez faire but the bankruptcy of government interference: the government, they argued, and not any natural instability in the private sector, had caused the Great Depression.8
But on closer observation the claim that the government had destabilized the economy in the 1930s turned out to be an almost Keynesian claim: a claim that an aggressive and well-informed central bank could have stabilized the economy in the 1930s by carrying out massive stimulative open-market operations.9 And it was never clear just how the government had destabilized the economy and caused the Great Depression: was it through monetary policy that was overly expansionary in the late 1920s (as Hayek maintained), or through monetary policy that was overly contractionary in the early 1930s (as Friedman and Schwartz maintained)?

2. The Keynesian escape hatch

So what was to replace laissez faire?


Lenin had his answer: socialism. But that was not what happened. What succeeded laissez faire was not socialism but something called the “mixed economy.”
John Maynard Keynes stands as a symbol of the new ideology, the ideology of a middle road: Keynesianism and the “mixed economy.”10 Keynesianism, emerging in the nick of time, had by the end of World War II become the dominant ideology in the world economy’s North Atlantic industrial core. It provided North America and western Europe with a Keynesian escape hatch from what had been insoluble crises and contradictions in the interwar period.11

The state socialist left did continue to exist. In the ideological battles of the generation after World War II its advocates acted as if they believed that the Keynesians, the compromisers, the ideologues of the “mixed economy” were in the process of stealing the future. They continued to argue that capitalism contained fatal and irresolvable contradictions that could not be forever avoided by Keynesian sleight-of-hand.12


Authors like Paul Sweezy would confidently predict that socialism and government planning would deliver a more efficient allocation of productive forces and a faster rate of economic growth. Indeed, many who supported the Keynesian escape hatch and the mixed economy agreed: they feared that a planned economy like that of the Soviet Union could be superior as a social mechanism for producing economic growth, although not as one for producing human liberty, happiness, or high mass consumption.13

Nevertheless the Keynesian escape hatch did turn out to lead to solid ground between laissez faire on the one hand and some form of total state on the other. You could avoid the waves of mass unemployment and the extremes of relative wealth and poverty of the first without risking the dangers of the second.


One way to read Keynes’s General Theory is as a confident prediction that all that was needed to remove the deficiencies of laissez faire was a set of relatively minor reforms, and that such reforms could successfully stabilize the economy at nearly-perpetual full employment.14 An activist welfare-state government with a commitment to full employment had the tools to level the distribution of income, eliminate Great Depressions, and put economies back onto the road to utopia. If governments would only lower interest rates and spend money freely (without raising taxes) in times when total demand was low, and raise interest rates and raise taxes (without spending) in times when total demand was high, Great Depressions could be avoided.
Even the monetarist critics of Keynesianism in the intellectual battles of the 1950s and 1960s seem, in retrospect, to have been selling a slightly different version of the same doctrine. Milton Friedman took a more cautious (and, I would argue, more correct) view of the limitations of stabilization policy: it could prevent big depressions but not smooth out small recessions.15 Friedman eschewed fiscal policy as a stabilization tool—both because he believed it would not work, and because he believed that deficit spending had long-run dangers of its own.16 But at bottom the message was the same: a government that intervened daily, on as large a scale as necessary, to keep private-sector decisions from sending the money supply on wild gyrations could guard the economy against Great Depressions without succumbing to the trap of socialist ideals and an over-mighty government that would in the end destroy political liberty and economic prosperity.

Those countries that had exited through the Keynesian escape hatch during the Depression (largely by fortunate accident) had indeed done relatively well. The Great Depression was relatively mild in countries that had devalued their currencies early, printed money and inflated their price levels, ensured low interest rates, and run large budget deficits. World War II provided further proof: in the United States, unemployment that had been called “structural” or “permanent” during the 1930s, and that had appeared immune to the self-adjusting forces of the market as well as to the entire armament of the New Deal, vanished entirely in the 1940s under the pressure of vastly expanded government spending. The United States fought World War II without reducing the real value of civilian consumption: all U.S. war production came from new capacity, or from capacity that had stood idle at the end of the 1930s.

In retrospect, the accidentally Keynesian policies of moderate expansion, inflation, and devaluation in the interwar period may have owed some of their success to the fact that they were the exception rather than the rule. To the extent that the background assumptions of gold-standard discipline and deflation remained in force, accidentally “Keynesian” policies would be more effective. But there was a catch-22: any strong attachment to gold-standard principles would ensure that Keynesian policies would not be systematically undertaken—for central bankers and politicians shared the same background assumption about how the world should work.
In later years, once the workings of the Keynesian order became clear—in the second and third post-World War II generations—the tasks of macroeconomic management would prove harder, and the truth of the doctrines of Keynes’s disciples would become less clear. But in the first post-World War II generation the Keynesian escape hatch provided governments, polities, and economies with what seemed like a miraculous solution to all the interwar dilemmas. It was no accident that U.S. Secretary of State Dean Acheson titled his memoirs Present at the Creation, for he and his peers truly had been present at the creation of an extraordinarily fruitful framework of political and economic institutions.

So how did it happen that the first post-World War II generation of governments found their way to adopting the policies that led through the Keynesian escape hatch? And why was Keynes so right—why were his policy recommendations so apt for the post-World War II world?



B. The Coming of the Keynesian Order




1. The shadow of the past
A first, very important factor helping to make post-World War II economic reconstruction a success was the shadow of the past. Post-World War II reconstruction took place against the background of the catastrophe of World War II and of the preceding Great Depression.

The political and economic struggle between parties and classes in interwar Europe had ended in the mutual ruin of the contending parties. Right-wing factions had wanted low wages, no welfare state, stable prices (along with social order and nationalist self-assertion); left-wing factions had wanted high wages and an extensive welfare state. The far left had no tolerance for the near left. Mainline politicians in the interwar period, whether social democrats looking forward to the implementation of the socialist Gotha Program or Clause IV, or right-wing politicians interested in demolishing the embryonic welfare state and restoring traditional authorities, had looked forward to establishing their vision of the distribution of wealth and the role of the government by overrunning opposition—at the ballot box if possible, and through street violence and purges if necessary. The end of this political and economic struggle had been the rise of fascism and Nazism, which had benefitted no one.

The magnitude of Depression-era unemployment also shifted politicians’, industrialists’, and bankers’ beliefs about the key goals of economic policy. Before the Depression a stable currency and exchange rate were key. But after the Depression even the bankers recognized that a high overall level of employment was more important than avoiding inflation: universal bankruptcy and mass unemployment were bad for workers, but they were worse for capitalists and bankers.

Thus entrepreneurs, the owners and managers of real capital—industry—and even the bankers found that they gained, not lost, from a commitment to maintain high employment first. High employment meant high capacity utilization. Rather than seeing tight labor markets erode profit margins by raising wages, owners of property saw high demand spread fixed costs out over more commodities and so increase profitability.

There is a sense in which Christian and social democracy, the twin political powers of the post-World War II world, evaded the class-conflict-based dilemmas of interwar and pre-World War I politics because the shock of the Great Depression shifted politics from a concern over redistribution to a concern over production. All would lose heavily from another Great Depression. It seemed much more worthwhile to compromise, and to pursue policies that would enlarge the pie to be distributed, rather than for either side—either the left or the right—to engage in substantial redistribution. For all parties the post-World War II mission became, in Charles Maier’s words:
one of expanding aggregate economic performance and eliminating poverty by enriching everyone, not one of redressing the balance among economic classes or political parties. The true dialectic was not one of class against class, but waste versus abundance.17
It is very hard to argue that accepting the “mixed economy” was a mistake for anyone. Had either owners or workers tried to hold out for more—as they did in the interwar period—they might well have ended up with far less. How far down must one go in the income distribution to find citizens of the United States or West Germany who are worse off today, in a material sense, than the average citizen of Czechoslovakia?

2. The Keynesian settlement: the United States

In America, the 1946 Employment Act declared that it was the “continuing policy and responsibility” of the federal government to “coordinate and utilize all its plans, functions, and resources... to foster and promote free competitive enterprise and the general welfare; conditions under which there will be afforded useful employment for those able, willing, and seeking to work; and to promote maximum employment, production, and purchasing power.” Laws that establish goals can and do serve as markers of changes in opinions, perceptions, and aims. When people speak of the effects of such a law, in many cases they are using “the law” as a shorthand marker for changes in the hearts and minds of the people. Whether the goal is achieved, and whether pursuit of the goal shapes and constrains public policy, depends on the depth of the change in hearts and minds.

This law committed the federal government to the business of macroeconomic management.

The largest shift in policy marked by the 1946 Employment Act is the post-World War II practice of allowing the government’s fiscal automatic stabilizers to function. Not since the Great Depression have mainstream legislators or opinion leaders called for fiscal austerity—for raising taxes or cutting government spending—in the midst of recession. The argument that the federal deficit in time of recession is “cyclical,” and that steps to reduce it would aggravate the recession, has been an effective trump in public policy debates, and has kept policy makers from seeking budget balance when unemployment is high. As a result the federal government’s budget slides into deeper deficit in recessions, and moves toward balance or into surplus when the economy expands.



[Figure: U.S. automatic stabilizers]



The gap between this calm acceptance of automatic stabilizers and cyclical fiscal deficits and pre-WWII attitudes is very large. Recall that Franklin Roosevelt made Herbert Hoover’s failure to balance the federal budget an issue in the 1932 presidential election. Or consider Joseph Schumpeter (in Brown, 1934), who wrote from Harvard in the middle of the Great Depression that there was a:

presumption against remedial measures [because] policies of this class are particularly apt to produce additional trouble for the future.... [For depressions are] not simply evils, which we might attempt to suppress, but forms of something which has to be done, namely, adjustment to change... [and] most of what would be effective in remedying a depression would be equally effective in preventing this adjustment...

The shift in the cyclical behavior of the federal budget, considered as a sea-anchor for the economy’s level of total spending, is impressive. A good deal of this increase comes from the increase in the size of the government as a share of national product. The pre-Depression government taxed and spent at most 5 percent, and more typically 2 percent, of national product in peacetime. The Depression-era government taxed 5 to 7 percent and spent 8 to 10 percent of national product. The post-WWII federal government taxes and spends one-sixth or more of national product in peacetime.





With a large federal government, automatic stabilizers quickly become very important. In the post-World War II period, a 1 percentage-point increase in the U.S. unemployment rate has typically been associated with a 0.9 percentage-point increase in the federal deficit measured as a share of national product. About one dollar in three lost from private-side spending during a recession is made up by expanded government demand: decreased taxes and increased social welfare spending. Such automatic stabilizers are large enough to reduce by four-ninths the change in employment and GDP resulting from any negative economic shock. The coming to maturity of automatic stabilizers—or, rather, the blocking of previous moves toward budget balance in recession that had been mandated by gold-standard ideology—has been a very important factor helping to stabilize the American economy since World War II.
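
How a one-third direct offset can damp the overall shock by more than one-third, once multiplier feedback is counted, is easiest to see in a one-equation Keynesian-cross sketch. This is a back-of-the-envelope illustration, not the author’s calculation; the marginal propensity to spend is an assumed number chosen so the result lands near the four-ninths figure above:

```python
# A minimal sketch of the automatic-stabilizer arithmetic above.
# c is an illustrative assumption, not a figure from the text;
# t is the text's one-dollar-in-three offset.
c = 0.70          # assumed marginal propensity to spend out of income
t = 1 / 3         # share of each lost dollar made up by taxes/transfers

multiplier_without = 1 / (1 - c)             # no automatic stabilizers
multiplier_with = 1 / (1 - c * (1 - t))      # stabilizers damp each spending round

reduction = 1 - multiplier_with / multiplier_without
print(f"shock to GDP damped by {reduction:.1%}")   # ~43.8%, roughly four-ninths
```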

The post-Great Depression settlement in the United States included a place for labor unions. In 1919, union membership in America was some 5 million. It fell to a trough of perhaps 3 million by Roosevelt’s inauguration in 1933, grew to 9 million by the end of 1941, and took advantage of the tight labor market of World War II to grow to some 17 million or so by the inauguration of Eisenhower in 1953. Before the mid-1930s there were very few employers who were not strongly opposed to unions: employers would compile and circulate “blacklists” of union organizers, hire permanent replacements for striking workers, refuse to hire known union workers, and refuse to negotiate with unions.

From 1933 to 1937 organizing unions became easier—in spite of high unemployment—because of the solid swing of the political system to the Democrats. The federal government was no longer an anti-union but a pro-union force. It passed the Wagner Act, which gave workers the right to engage in collective bargaining. A National Labor Relations Board monitored and greatly limited the ability of anti-union employers to punish union organizers and members—protections that the post-World War II Taft-Hartley Act did not reverse. Employers in large mass-production industries learned to value the mediation between bosses and employees that unions could provide. And workers learned to value the above-market wages a union shop could negotiate.



[Figure: Union strength in the U.S.]

The third component of the post-World War II Keynesian settlement in the United States was the welfare, or social insurance, state. From a western European perspective the American social insurance state was anemic. Unemployment insurance provisions were low: Americans who lost their jobs and who were covered by the system were offered approximately half-pay for approximately half a year, a system about a third as generous as European systems. America’s Social Security system for retirees is roughly analogous to western European systems. But American free public education stops after high school, and American state-funded medical care did not exist at all until the 1960s, and even today the non-poor and non-elderly are excluded from Medicare and Medicaid. A typical British conservative like Margaret Thatcher would find the absence of state-sponsored medical care in the United States to be somewhat appalling, and not what one would expect from a leading civilization.

Means-tested programs for the poor also turned out to be significantly less generous in the United States than in western Europe: the American social insurance state did less leveling than did the European. Food stamps to subsidize diet, Aid to Families with Dependent Children to provide single mothers with some cash, and a small and rationed amount of low-quality public housing made up America’s effort to give the poor additional purchasing power in the first post-World War II generation.

[Figure: Relative sizes of post-WWII welfare states]

How was it that the United States moved toward social democracy as a result of the Great Depression and World War II, but moved toward it only perhaps half as far as the democracies of western Europe, of Canada, Australia, and New Zealand?

From the perspective of what had existed before, however, the net impact of Roosevelt’s New Deal, Truman’s Fair Deal, and in the 1960s Lyndon Johnson’s Great Society carried the United States a long way toward social democracy: a society in which the government makes it its business to moderate the inequalities in income and in life-chances produced by the private market economy.

And the U.S. administrations of Franklin D. Roosevelt and Harry S Truman did much more than establish the post-World War II Keynesian settlement in the United States. Through a wide variety of means, they helped establish an analogous Keynesian settlement in western Europe. Post-World War II relief, offers of military cooperation and support against potential Soviet expansion, large-scale loans, and access to U.S. markets for European exports were all made available to western European countries that shaped their post-World War II policies in ways that gave the U.S. administration confidence.


The flagship of U.S.-funded post-World War II support programs was the Marshall Plan—named not for the U.S. president, Harry S Truman, but for his Secretary of State George C. Marshall, who had been chief of staff of the U.S. Army during World War II. For as Truman said in October 1947: “Can you imagine [the plan’s] chances of passage in an election year in a Republican [majority] congress if it is named for Truman and not Marshall?”


3. The Keynesian settlement: Western Europe
[T]he world of suffering people looks to us for leadership. Their thoughts, however, are not concentrated alone on this problem. They have more immediate and terribly pressing concerns: where the mouthful of food will come from, where they will find shelter tonight, and where they will find warmth. Along with the great problem of maintaining the peace we must solve the problem of the pittance of food, of clothing and coal and homes. Neither of these problems can be solved alone.
—George C. Marshall, November 1945

The post-World War II reconstruction of the economies and polities of Western Europe was an extraordinary success. Growth was fast, distributional conflicts were in large part finessed, and international trade boomed. The stability of representative democracy in Western Europe made its political institutions the envy of much of the world. The politicians who in the post-World War II years laid the foundations of the postwar order had good warrant to be proud. They were, as Truman’s Secretary of State Dean Acheson put it in the title of his memoirs, Present at the Creation of an extraordinarily successful set of political and economic institutions.

A sizeable—but not overwhelming—part of the credit for Europe’s successful post-WWII reconstruction belongs to acts of statesmanship: the Marshall Plan and other initiatives that sped Western European growth by altering the environment in which political and economic policy was made. In the immediate aftermath of World War II politicians who recalled the disasters of the Great Depression were ill-disposed to “trust the market.” They were eager to embrace regulation and government control if only to show that they would not fall into the trap that their predecessors had fallen into during the Great Depression. It is possible that European political economy could have taken a different turn, in which case post-World War II European recovery might have been hobbled by clumsy allocative bureaucracies that rationed scarce foreign exchange and placed ceiling prices on exportables to protect the consumption of urban working classes.

Yet in reality the Marshall Plan era saw the creation of the social-democratic “mixed economy”: the restoration of price freedom and exchange rate stability, and the reliance on market forces within a context of a large social insurance state, some public ownership of industry and utilities, and a great deal of public demand management.

To some degree the creation of the social-democratic mixed economy came about because no one in Europe wanted a repeat of the interwar experience; to some degree it came about because the governments in power were Christian democratic and social democratic rather than socialist. They believed that the “mixed economies” they were building should have a strong pro-market orientation. For such governments, Marshall Plan and other aid provided room to maneuver—without such aid, they would soon have faced a harsh choice between contraction to balance their international payments and severe controls on imports.

To some degree the creation of the social-democratic mixed economy came about because Marshall Plan administrators and others pressured European governments to decontrol and liberalize their economies in a more “American” mold even when they wished to do otherwise.

Without the Marshall Plan, it is at least conceivable that the pattern of post-World War II European political economy might well have resembled the overregulation and relative economic stagnation of post-World War II Argentina, a nation that has dropped from First to Third World status in two generations. It is at least conceivable that post-World War II Europe might have replicated the financial instability (alternating episodes of inflation and deflation) experienced by much of Europe in the 1920s, as interest groups and social classes bitterly struggled over the distribution of wealth and in the process stalled economic growth.
In the immediate aftermath of World War II, it was not clear that western Europe would utilize market mechanisms to coordinate economic activity. Belief in the ability of the market to coordinate economic activity and support economic growth had been severely shaken by the Great Depression. Wartime controls and plans, while implemented as extraordinary measures for extraordinary times, had created a governmental habit of control and regulation. Seduced by the very high economic growth rates reported by Stalin’s Soviet Union and awed by its war effort, many expected centrally-planned economies to reconstruct faster and grow more rapidly than market economies.

Memory of the Great Depression was fresh, and countries relying on the market were seen as likely to lapse into a period of underemployment and stagnation. A not uncommon judgment was that, in the words of Paul Sweezy, “the socialist sector of the world would [after World War II] quickly stabilize itself and push forward to higher standards of living, while the imperialist sector would flounder in difficulties”: history was expected to dramatically reveal the superiority of central planning.

British historian A.J.P. Taylor spoke in 1945 of how “nobody in Europe believes in the American way of life—that is, in private enterprise; or rather those who believe in it are a defeated party—a party which seems to have no more future.”

Moreover, it seemed at least an even bet that the United States would withdraw from Western Europe. The U.S. government had done so after World War I, when the cycles of U.S. politics had led to the erosion of the internationalist Wilson administration and the rise to dominance of a Republican isolationist Congress. The same pattern appeared likely after World War II: Republican Congressional leader Robert Taft, the dominant figure in the Senate after the election of 1946, was extremely isolationist in temperament. By all indications, the American commitment to relief and reconstruction was limited. The Truman administration was internationalist, but weak. Congressional critics called for balanced budgets. The 1946 Congressional elections were a disaster for the Democratic Party.

Considerable economic aid had been extended to Europe from the U.S. after World War I, first by the Herbert Hoover-led relief and reconstruction effort and then by private capital speculating on a restoration of monetary stability. Post-World War I reconstruction loans had been sold as sound private investments. They did not turn out to be so. Seymour Harris calculated that in present value terms nearly half of American private investments in Europe between the wars had been lost. Once burned, twice shy. With strong Communist parties in Italy and France, a nationalization-minded Labour government in Britain, and a Germany once again pressed for reparations transfers, capital flows from American investors gambling on European recovery and political stability seemed unlikely.

Nevertheless, within two years after the end of the war it became U.S. government policy to build up Western Europe politically, economically, and militarily. The Truman Doctrine inaugurated the policy of “containment” of the Soviet Union. Included in the Doctrine was a declaration that containment required steps to quickly regenerate economic prosperity in Western Europe. And as columnist Richard Strout wrote, “one way of combating Communism is to give western Europe a full dinner pail.”

Employing Secretary of State George C. Marshall’s reputation as the architect of military victory in World War II, conservative fears of the further extension of Stalin’s empire, and a political alliance with influential Republican Senator Arthur Vandenberg, Truman and his administration outflanked isolationist and anti-spending opposition and maneuvered through Congress first the Truman Doctrine, then the Marshall Plan, and then an open-ended commitment through NATO to the defense of Europe.

4. The Marshall Plan


In the first two post-World War II years the U.S. contributed about four billion dollars a year to relief and reconstruction through UNRRA and other programs. The Marshall Plan continued these flows at comparable rates and was a multi-year commitment. From 1948 to 1951, the U.S. contributed $13.2 billion to European recovery. $3.2 billion went to the United Kingdom, $2.7 billion to France, $1.5 billion to Italy, and $1.4 billion to the Western-occupied zones of Germany that would become the post-World War II Bundesrepublik.



Two years after the end of the war, coal production in Western Europe was still below the levels reached before or during the war. German coal production in 1947 proceeded at little more than half of the pre-World War II pace. Dutch and Belgian production was 20 percent below, and British production 10 percent below, 1938 levels. Demands for coal for heating reduced the continent’s capacity to produce energy for industry. During the cold winter of 1946-47 coal earmarked for industrial uses had to be diverted to heating. Coal shortages led to the shutdown of perhaps a fifth of Britain’s coal-burning and electricity-using industry in February 1947.

Western European industrial production in 1946 was only 60 percent, and in 1947 only 70 percent, of the pre-World War II norm.

Western Europe in 1946-47 had four-fifths of its 1938 supply of food. Its population had increased by twenty million—more than a tenth—even after accounting for military and civilian deaths. Traditionally, Western Europe had exported industrial goods to, and imported agricultural goods from, Eastern Europe, the Far East, and the Americas. Now there was little prospect of rapidly restoring this international division of labor. Eastern European nations adopted Russian-style central planning and looked to the Soviet Union for economic links. Industry in the United States and Latin America had expanded during the war to fill the void created by the cessation of Europe’s exports. Imports of food and consumer goods for relief diverted hard currency from purchases of the capital goods needed for long-term reconstruction.

Changes in net overseas asset positions reduced Western Europe’s annual earnings from investments abroad. Britain had liquidated its entire overseas portfolio in order to finance imports during the war. The reduction in invisible earnings cut Western Europe’s capacity to import by approximately 30 percent of 1938 imports. The movement of the terms of trade against Western Europe gave it, in 1947-48, 32 percent fewer imports for export volumes that were themselves running 10 percent below pre-World War II levels; higher export volumes might have worsened the terms of trade further. The net effect of the inward shift in demand for exports and the collapse of the net investment position was to give Europe in 1947-48 only 40 percent of the capacity to import that it had possessed in 1938.
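
A back-of-the-envelope way to see how these pieces could combine into the 40 percent figure follows; the split between export-financed and invisibles-financed imports is an illustrative assumption, not a number from the text:

```python
# Rough reconstruction of 1947-48 import capacity as a share of 1938.
# The 70/30 financing split is assumed for illustration only.
export_share = 0.70      # assumed share of 1938 imports financed by exports
export_volume = 0.90     # exports running 10% below pre-WWII levels
terms_of_trade = 0.68    # 32% fewer imports per unit of exports

# With invisible earnings (the other ~30% of 1938 financing) largely gone:
import_capacity = export_share * export_volume * terms_of_trade
print(f"{import_capacity:.0%} of 1938 import capacity")  # ~43%, near the text's 40%
```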

By contrast, after World War I Europe’s external position had been much more favorable.

Thus Europe after World War II was in worse economic shape than it had been after World War I. Another episode of financial and political chaos like that which had plagued the Continent following World War I appeared likely. U.S. State Department officials wondered whether Europe might be dying—like a wounded soldier who bleeds to death after the fighting. State Department memoranda in 1946-47 presented an apocalyptic vision of a complete breakdown in Europe of the division of labor: between city and country, between industry and agriculture, and between different industries themselves.

A Communist political triumph was seen as a definite possibility.

Yet the pace of post-World War II recovery soon surpassed that which followed World War I. As figure 4 shows, by 1949 national income per capita in Britain, France, and Germany had recovered to within a hair of pre-war levels.

By 1951, six years after the war and at the effective end of the Marshall Plan, national incomes per capita were more than 10 percent above pre-war levels. Measured by the yardstick of the admittedly imperfect national product estimates, the three major economies of Western Europe had achieved a degree of recovery that post-World War I Europe had not reached in the 11 years separating World War I from the Great Depression.

Post-World War II recovery dominated post-World War I recovery by other economic indicators as well: in steel, cement, and coal production. The recovery of coal production after World War II outran its post-World War I pace by a substantial margin, even though coal was notoriously in short supply in the post-World War II years. By contrast, the recovery of coal production after World War I was erratic. Coal production declined from 1920 to 1921—falling from 83 percent of pre-World War I levels in 1920 to 72 percent in 1921—as a result of the deflation imposed on the European economy by central banks that sought the restoration of pre-World War I gold standard parities, accepted the burden of deflation, and allowed the 1921 recession in the United States to be transmitted to their own countries.


After World War II, no central bank or government pursued monetary orthodoxy so aggressively as to roll back price and wage increases and preserve the real wealth of rentiers. Coal production fell again in 1923-24, when the French army occupied Germany’s Ruhr valley because reparations were not being delivered fast enough. And coal production fell in 1925-26, when the aftermath of Britain’s return to gold put heavy pressure on Britain’s coal producers to lower wages, and triggered first a coal strike and then a brief general strike.

Thus the major factors hindering a rapid post-World War I recovery were not strictly economic but social and political. Post-World War I Europe saw the recovery of output repeatedly interrupted by political and economic “wars of attrition” between contending classes and interests. In the aftermath of World War I, the distribution of wealth both within and between nations, the question of who would bear the burden of postwar adjustment, and the degree to which government would act to secure the property of the rentier were all unresolved issues. Social classes, political factions, and nation-states saw that they had much to lose if they did not aggressively promote their claims for favorable redistribution.

After World War II such “wars of attrition” were less virulent. Memories of the disastrous consequences of the aggressive pursuit of redistributional goals during the interwar period made moderation appear more attractive to all. The availability of Marshall Plan aid to nations that had accomplished stabilization provided a very strong incentive to compromise such distributional conflicts early, and gave European countries a pool of resources that could be used to cushion the wealth losses sustained in restructuring.

Post-World War II reconstruction did more than return western Europe to its previous growth path. French, Italian, Low Countries, and West German growth during the post-World War II boom raised national product per capita at rates that far exceeded pre-World War II, pre-1929, or even pre-1913 trends. The reconstruction after World War II appears to have created economies capable of dynamic economic growth an order of magnitude stronger than had previously been seen in Europe. Postwar Europe saw “supergrowth,” as Charles Kindleberger has termed it.

Yet a Latin American country like Argentina, as rich in the years before and immediately after World War II as industrial Western Europe, grew slowly even under the post-World War II expansionary Bretton Woods regime. Fast post-World War II growth and catchup to American standards of productivity were to a large degree specific to Western Europe, and thus to the countries that received Marshall Plan aid.

Marshall Plan dollars did affect the level of investment: countries that received large amounts of Marshall Plan aid invested more. Eichengreen and Uzan (1991) calculate that out of each dollar of Marshall Plan aid some 65 cents went to increased consumption and 35 cents to increased investment. The returns to new investment were high. Eichengreen and Uzan’s analysis suggests that social returns may have been as high as 50 percent a year: an extra dollar of investment raised national product by 50 cents in the subsequent year.
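
Taking Eichengreen and Uzan’s figures at face value, the implied first-year payoff per dollar of aid is straightforward arithmetic:

```python
# Implied first-year output gain per Marshall Plan dollar, on the
# Eichengreen-Uzan figures quoted above.
investment_share = 0.35   # 35 cents of each aid dollar went to investment
social_return = 0.50      # each investment dollar raised output 50 cents/year

gain = investment_share * social_return
print(f"~{gain * 100:.0f} cents of extra output per aid dollar in year one")  # ~18
```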



Another channel through which Marshall Plan aid stimulated growth was by relaxing foreign exchange constraints. Marshall Plan funds were hard currency in a dollar-scarce world. After the war, coal, cotton, petroleum, and other materials were in short supply. The Marshall Plan allowed them to be purchased at a higher rate than would have been possible otherwise. Marshall Plan dollars added to Europe’s international liquidity and played an important role in restoring intra-European trade.
The judgment that the Marshall Plan—$13 billion of late-1940s dollars provided to Europe as a grant without strings attached—helped economic recovery is not shared by all. It certainly allowed European countries to ignore their very large balance-of-payments deficits vis-à-vis the United States and continue the fiscal and monetary policies that supported rapid growth.18 But should such fiscal and monetary policies, and the accompanying very large trade deficits that quickly acquired the name of the “dollar shortage,” have been allowed to persist? The tone, at least, of Alan Milward’s Reconstruction of Western Europe implies that they should not have been: that American generosity allowed European governments to avoid recognizing reality for a while, and thus to live beyond their means on their wits. The remnants of the old laissez-faire right preferred policies of substantial devaluation to boost European exports, coupled with monetary austerity to tighten Europe’s belt and reduce imports; this, they thought, would have been vastly preferable to using American taxpayers’ hard-earned money to support socialist experiments in western Europe.19
As Eichengreen assesses this debate:
We will never know whether the rapid dismantling of controls on current-account transactions would have boosted European exports sufficiently to eliminate the dollar shortage. For instead of removing them, Western European countries maintained and, in some cases, elaborated their wartime restrictions…. [Removing controls] would have entailed a substantial depreciation of exchange rates to render their exports more competitive internationally. Governments resisted this on the grounds that it would have worsened the terms of trade and lowered living standards…. A substantial worsening of the terms of trade and decline in living standards threatened to provoke labor unrest and disrupt the recovery process…. If governments had not attached priority to sustaining investment [and growth], the external constraint would not have bound so tightly.20
In my judgment the laissez-faire road of fiscal and monetary austerity, slower growth, and lower living standards through devalued exchange rates was closed in post-World War II Europe. Even if such policies might have worked in the sense of being compatible with political stability and the continued reduction of communist influence in western Europe, they would not have worked in practice, because the United States would not have allowed the large-scale purchase of the extra exports that Europe would have tried to ship it. The one point at which the post-World War II U.S. Congress balked at the internationalist commitments of the Truman administration was in its refusal to ratify the Havana Charter of the International Trade Organization—the organization that was supposed to manage the move to free trade in the aftermath of World War II. Too many protectionists opposed moving too far, too fast toward free trade.
In short, the alternative strategy proposed by critics of the Marshall Plan assumed—contrary to fact—that the U.S. would have been willing and eager to serve as an importer of last resort for European goods in the years immediately after World War II.


5. Pro-market policies
Renewed growth required, in addition to financial stability, the free play of market forces. Though there was support for the restoration of a market economy in Western Europe, it was far from universal. Wartime controls were viewed as exceptional policies for exceptional times, but it was not clear what was to replace them. Communist and some Socialist ministers opposed a return to the market. It was not clear when, or even if, the transition would take place.

On this issue the Marshall Plan left Western Europeans with no choice. Each recipient had to sign a bilateral pact with the United States. Countries had to agree to balance government budgets, restore internal financial stability, and stabilize exchange rates at realistic levels. Marshall Plan aid was available only if Europe was committed to the “mixed economy” with the market playing a large part in the mix.

The demand that European governments trust the market came from the highest levels of the Marshall Plan administration. Secretary of State Dean Acheson described the chief Marshall Plan administrator, Paul Hoffman, as an “economic Savonarola.” Acheson describes watching Hoffman “preach his doctrine of salvation by exports” to British Foreign Secretary Ernest Bevin. “I have heard it said,” wrote Acheson, “that Paul Hoffman missed his calling: that he should have been an evangelist. Both parts of the statement miss the mark. He did not miss his calling, and he was and is an evangelist.”

Post-World War II Europe was very far from laissez faire. Government ownership of utilities and heavy industry was substantial. Government redistributions of income were large. The magnitude of the “safety nets” and social insurance programs provided by the post-World War II welfare states was far beyond anything that had been thought possible before World War I. But these large welfare states were accompanied by financial stability, and by substantial reliance on market processes for allocation and exchange.



6. What if?
A live possibility in the absence of the Marshall Plan was that governments would not stand aside and allow the market system to do its job. In the wake of the Great Depression, many still recalled the disastrous outcome of the laissez-faire policies of the Depression years. Politicians were predisposed toward intervention and regulation: no matter how damaging “government failure” might be to the economy, it had to be better than the “market failure” of the Depression. Had European political economy taken a different turn, post-World War II European recovery might have stagnated. Governments might have been slow to dismantle wartime allocation controls, and so have severely constrained the market mechanism.

An alternative scenario would have seen the maintenance and expansion of wartime controls in order to guard against substantial shifts in income distribution. The late 1940s and early 1950s might have seen the creation in Western Europe of allocative bureaucracies to ration scarce foreign exchange, and the imposition of price controls on exportables in order to protect the living standards of urban working classes—as happened in Latin America, which nearly stagnated in the two decades after World War II.

In contrast to post-WWII Europe, the experience of post-WWII Latin America—especially Argentina—is one of relative economic failure. Before the war, Argentina had been as rich as Continental Europe. In 1913 Buenos Aires was among the top 20 cities of the world in telephones per capita. In 1929 Argentina had been perhaps fourth in density of motor vehicles per capita, with approximately the same number of vehicles per person as France or Germany. Argentina from 1870 to 1950 was a country in the same class as Canada or Australia. Yet after World War II, Argentina grew very much more slowly than France or Germany, rapidly falling from the ranks of the First World to the Third. Features of the international environment affecting Argentina as well as Europe—the rapid growth of world trade under the Bretton Woods system, for example—do not explain Europe’s singular stability and rapid growth.

In the absence of the Marshall Plan, might Western Europe have followed a similar trajectory? In Carlos Díaz-Alejandro’s estimation, four factors set the stage for Argentina’s relative decline: a politically-active and militant urban industrial working class, economic nationalism, sharp divisions between traditional elites and poorer strata, and a government used to exercising control over goods allocation that viewed the price system as a tool for redistributing wealth rather than for regulating the pattern of economic activity.

From the perspective of 1947, the political economy of Western Europe would lead one to think that it was at least as vulnerable as Argentina to economic stagnation induced by populist overregulation.

The war had given Europe more experience than Argentina with economic planning and rationing. Militant urban working classes calling for wealth redistribution voted in such numbers as to make Communists plausibly part of a permanent ruling political coalition in France and Italy. Economic nationalism had been nurtured by a decade and a half of Depression, autarky, and war. European political parties had been divided substantially along economic class lines for two generations.

Yet Europe avoided this trap. After World War II Western Europe’s mixed economies built substantial redistributional systems, but they were built on top of and not as replacements for market allocations of goods and factors. Just as post-World War II Western Europe saw the avoidance of the political-economic “wars of attrition” that had put a brake on post-World War I European recovery, so post-World War II Western Europe avoided the tight web of controls that kept post-World War II Argentina from being able to adjust and grow.

Financial instability was pervasive in post-World War II Europe. Governments responded to inflation by retaining controls, prompting the growth of black markets. The post-World War II food shortage reflected not merely bad weather in 1947 but the reluctance of farmers to deliver food to cities, for the manufactured goods farmers might have purchased remained in short supply. Manufacturing enterprises had the same incentive to hoard inventories. As long as food shortages persisted, workers had little ability—or incentive—to devote their full effort to market work.

The market-oriented solution to the crisis was straightforward. Prices had to be decontrolled to coax producers to bring their goods to market. Inflation had to be halted for the price mechanism to operate smoothly and to encourage saving and initiative. Budgets had to be balanced to remove inflationary pressure. With financial stability restored and market forces given free rein, individuals could direct their attention to market work.

For budgets to be balanced and inflation to be halted, however, political compromise was required. Consumers had to accept higher posted prices for foodstuffs and necessities. Workers had to moderate their demands for higher wages. Owners had to moderate their demands for profits. Taxpayers had to shoulder additional liabilities. There had to be broad agreement on a “fair” distribution of income, or at least on a distribution of the burdens that was not so unfair as to be intolerable. Only then could the pressure on central banks to continually monetize budget deficits, generating either open or repressed inflation, be removed.


Such a “social contract” is advantageous only if it is generally accepted. If workers continued to aggressively press for higher wages, management had little incentive to plow back profits in return for the promise of higher future profits. If management failed to plow back profits, workers had little incentive to moderate current wage demands in return for higher future productivity and compensation. If labor relations were conflictual rather than harmonious, productivity would be the casualty. The post-WWII reconstruction shifted Europe onto this “social contract” equilibrium path, for once workers and management began coordinating on the superior equilibrium they had no obvious reason to stop—at least not until the mid-1970s.

U.S. aid policy after WWII encouraged European governments to pursue investment-friendly policies. Productivity soared in the wake of financial stabilization and the advent of the Marshall Plan. The advantages of the cooperative equilibrium were suddenly clear. Within the group of reconstructing nations, those where the United States had the most leverage had the fastest-growing economies. Within Europe, United States influence was strongest in Germany, weaker in France and Italy, and weakest in Britain. In the post-World War II period the German economy was the most successful, the British economy the least. And Japan, where General MacArthur was America’s proconsul, is the extreme example that proves the rule.

To make the point another way, a social revolution that reduced the excess share of income received by the top 20% by two-thirds—cutting that group’s share from 50% to 30% of national income—would raise the average income of the bottom 80 percent by about two-fifths (the arithmetic is checked in the sketch below). This difference is smaller than the difference between the U.S. and France, between Canada and Australia, or between Venezuela and Brazil. It is a difference that is made up in twenty years of economic growth at the post-World War II pace. Thus the question:
is upsetting the political and economic order of the industrial west worthwhile in order to attain a given level of wealth for the median citizen twenty years early? And can the order be transformed without losing more than 20 years of growth in the uproar?
The social democratic answer since World War II has been that it is probably not worthwhile. Given the track record of the revolutions that have occurred in the twentieth century, this position appears correct. Certainly communist revolutions have established régimes that have destroyed far more wealth than the property-owning strata at the top of industrial market economies have exacted. Overturning the political and economic structure of the industrial west, and replacing it with a large bureaucracy, can seem a possibly worthwhile goal if market capitalism cannot deliver economic growth. Not otherwise.
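
A quick check of the arithmetic above, taking the text’s income shares as given; the twenty-year growth rate at the end is the implied figure, not one stated in the text:

```python
# Check of the redistribution arithmetic: the top quintile's share falls
# from 50% to 30% of national income (its excess over a proportional 20%
# share cut by two-thirds), so the bottom 80% share rises from 50% to 70%.
bottom_before = 1 - 0.50
bottom_after = 1 - 0.30
gain = bottom_after / bottom_before - 1
print(f"bottom-80% average income rises {gain:.0%}")   # 40%, about two-fifths

# Implied per-capita growth rate if that gain is "made up in twenty years":
growth = (1 + gain) ** (1 / 20) - 1
print(f"implied growth: {growth:.1%} per year")        # ~1.7%/year
```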

Here the Marshall Plan may have played a critical role. It did not obviate the need for sacrifice. But it increased the size of the pie available for division among interest groups. Two-and-a-half percent—Marshall Plan aid as a share of recipient GDP—was not an overwhelmingly large change in the size of the pie. But if the sum of notional demands exceeded aggregate supply by five or seven-and-a-half percent, Marshall Plan transfers could reduce the sacrifices required of competing distributional interests by a third or as much as a half.
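
The pie arithmetic, spelled out using only the figures just given:

```python
# Marshall Plan aid (~2.5% of recipient GDP) set against the gap between
# notional demands and aggregate supply (5% or 7.5% of GDP in the text).
aid = 0.025
for gap in (0.05, 0.075):
    print(f"gap of {gap:.1%}: aid covers {aid / gap:.0%} of the required sacrifice")
# -> 50% and 33%: "by a third or as much as a half"
```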

Moreover, the potential availability of Marshall Plan aid, contingent on a government’s stabilization plan meeting the criteria required by Plan administrators, provided a powerful incentive for governments to impose financial discipline. With Marshall Plan aid available, the benefits of a quick resolution of “wars of attrition” were greater, and so the Plan in all likelihood advanced the date of financial stabilization. Internal price stabilization after World War II took four years; by contrast, the German hyperinflation took place in the sixth year after the end of World War I, and France’s post-World War I inflation lasted for eight years.


