Democracy and Development:
Their Enemies, Their Hopes
The concepts of democracy and development are closely related in many respects. One is that they have a common enemy: loss of sovereignty. In the contemporary world of state capitalist nation-states—a crucial qualification—loss of sovereignty can entail decline of democracy, and decline in ability to conduct social and economic policy and to integrate on one’s own terms into international markets. That in turn harms development, an expected conclusion that is well confirmed by centuries of economic history. The same historical record reveals that fairly consistently, loss of sovereignty leads to imposed liberalization, of course in the interests of those with the power to institute this social and economic regime. In recent years, the imposed regime is commonly called “neoliberalism.” It is not a very good term: the social-economic regime is not new, and it is not liberal, at least as the concept was understood by classical liberals. With such provisos as these, it is fair to say that a common enemy of democracy and development in the modern period is neoliberalism, though it has earlier guises. The statement is debatable in the case of development, much less so in the case of democracy. The very design of neoliberal principles is a direct attack on democracy. I will return to that, but first let us consider development.
Whether neoliberalism is the enemy of development is debatable, for a simple reason: the economy—particularly the international economy—is so poorly understood and involves so many variables that even when close correlations are found, one cannot be confident about whether there are causal relations, or if so, in which direction. The founder of the modern theory of economic growth, Nobel laureate Robert Solow, commented that despite the enormous accumulation of data since his pioneering work half a century ago, “the direction of causality” is unknown. It is not clear, he concludes, whether capital investment causes productivity, or productivity leads to capital investment; whether openness to trade improves economic growth, or growth leads to trade; and the same problems arise in other dimensions. One prominent economic historian, Paul Bairoch, argues that protectionism, paradoxically, has commonly increased trade. The reason, he suggests, is that protectionism tends to stimulate growth, and growth leads to trade; while imposed liberalization, since the eighteenth century, has fairly consistently had harmful economic effects. The historical record provides substantial evidence that “historically, trade liberalization has been the outcome rather than the cause of economic development” (Ha-Joon Chang), apart from the “development” of narrow sectors of great wealth and privilege who benefit from resource extraction.1
From an extensive review, Bairoch concludes that “It is difficult to find another case where the facts so contradict a dominant theory [as the theory] concerning the negative impact of protectionism.” The conclusion holds into the twentieth century, when other forms of market interference become more prominent, to which I will return.
The “dominant theory” is that of the rich and powerful, who have regularly advocated liberalization for others, and sometimes for themselves as well, once they have achieved a dominant position and hence are willing to face competition on a “level playing field”—that is, one sharply tilted in their favor. The stand is sometimes called “kicking away the ladder” by economic historians: first we violate the rules to climb to the top, then we kick away the ladder so that you cannot follow us, and we righteously proclaim: “Let’s play fair, on a level playing field.”
Until the 1920s, the United States was “the mother country and bastion of protectionism,” Bairoch writes, well beyond the rest of the world, and had the fastest growth rate, becoming by far the world’s most powerful economy, and after World War II, the dominant global power as well. The basic programs of economic development were established as soon as the United States gained its independence from England. Their author was Alexander Hamilton, the pioneer of import substitution industrialization—an error according to economic doctrine then and now, a regular foundation for development in actual history. Hamilton determined to violate the injunctions of the greatest economists of the day, who urged that Americans should import superior British manufactures and concentrate on their comparative advantage in primary resource and agricultural export. Adam Smith warned that “were the Americans, either by combination or by any other sort of violence, to stop the importation of European manufactures, and, by thus giving a monopoly to such of their own countrymen as could manufacture the like goods, divert any considerable part of their capital into this employment, they would retard instead of accelerating the further increase in the value of their annual produce, and would obstruct instead of promoting the progress of their country towards real wealth and greatness. This would be still more the case were they to attempt, in the same manner, to monopolize to themselves their whole exportation trade.”2 Surely among the most spectacularly refuted predictions in economic history, but solidly based in the abstract theories that continue to be imposed on the weak.
Long before World War II, as noted, the United States already had by far the world’s largest economy. The Great Depression set it back substantially, but New Deal measures stabilized the economy and provided many needed institutional correctives. The semi-command wartime economy overcame the depression with deficit spending far beyond what is regarded as lethal today, yielding the most remarkable growth rate in manufacturing in economic history and laying the basis for unprecedented postwar economic growth, while competitors were severely weakened or destroyed. U.S. industrial production more than tripled during the war, and by its end the United States possessed about half the wealth of the world, along with incomparable security. By then, U.S. business and the state that largely caters to its interests were willing to sponsor a limited version of free trade, in part for geostrategic reasons, yet reasonably confident that U.S. firms should be able to prevail in “free competition.” But as in the case of the British before them, they made sure that the bet was hedged with many crucial market interventions to ensure that the powerful would prevail. I will return to these mechanisms, generally ignored by free-market enthusiasts.
The United States was not forging a new path. On the contrary, it was following the practice of its predecessor in global dominance: England. England did finally adopt a liberal agenda in 1846, after over a century of intense protectionism and state intervention had left it so far in the lead in industrialization that competition seemed relatively safe. The process can be traced to the Tudor monarchs of the fifteenth and sixteenth centuries, who intervened radically in the economy in violation of market principles to create a textile industry, meanwhile destroying competitors and after a century, gaining enough foreign exchange through exports to fuel the incipient industrial revolution. For English manufacturers, the liberalization of the mid-nineteenth century, allowing agricultural imports, brought the benefits of lowered wages and increased profits, and also helped to “halt the move towards industrialization on the Continent by enlarging the market for agricultural produce and primary materials,” economic historian Charles Kindleberger observes. It was not simply the triumph of “sound economic principles,” as conventionally portrayed.3
Even with its enormous advantages in the mid-nineteenth century, England took few chances, maintaining major protected markets, primarily in India. State violence also created by far the most extensive narcotrafficking industry in world history; much of India was conquered in an effort to monopolize opium production, which did not quite succeed, thanks to the enterprise of Yankee traders. The primary goal was to break into the China market by gunboats and opium, since the Chinese had little interest in British manufactures, finding their own quite adequate. That violent interference with markets succeeded. The China market was opened by the “poison trade” and the “pig trade,” as they were called. The poison trade turned China into a nation of opium addicts; the pig trade brought kidnapped Chinese workers to the United States to build railroads and perform other hard labor. Profits from the narcotrafficking racket also covered the costs of the Royal Navy and its imperial role, the administration of India, and the purchase of U.S. cotton, which fueled the early industrial revolution much in the way oil does today. And cotton production of course was also not exactly a free-market miracle: it was based on elimination of the indigenous population by state violence, and on slavery, rather extreme forms of market interference that do not fall within economic history.4
In passing, we may note that slavery did not end with the Civil War, despite the Constitutional Amendments that prohibited it in principle. The war was followed by a decade of partial freedom for African Americans, but by 1877, with the end of Reconstruction, slavery was reconstituted in a new and even more sadistic form, as Black life was effectively criminalized and sentencing was rendered permanent by various means, while brutalizing prison labor provided a large part of the basis not only for agricultural production, as under chattel slavery, but also for the American industrial revolution of the late nineteenth and early twentieth centuries. The savagery of the practices lent some shameful support to the claims of slave owners that they were more humane than the northern capitalists who “rented” labor, because those who owned people were concerned to sustain their capital investments. These horrifying practices continued until World War II, when free Black labor was needed, and in the postwar boom there was a window of opportunity for the Black population. The neoliberal turn in the past thirty years substantially closed the window as domestic manufacturing industry was displaced in favor of financialization of the economy and neoliberal globalization. A new form of criminalization was instituted, much of it in the context of the “drug wars,” leading to a huge increase in incarceration, mostly targeting minorities, reaching levels vastly beyond comparable countries—in fact beyond any countries that have meaningful statistics. This form of control of “superfluous populations” also provided a new supply of prison labor in state or private prisons, much of it in violation of international labor conventions. Ever since the first slaves were brought to the colonies, life for most African Americans has scarcely escaped the bonds of slavery, or sometimes worse.5
While protectionism and state violence greatly benefited England and the United States, and the rich industrial countries generally, the liberalization that was imposed by the imperial powers pretty much created the third world. “It is no exaggeration to say that the opening of the colonial economies—of course by force—was one of the major reasons for their lack of development,” Bairoch concludes, with the support of other economic historians.
A comparison of the United States and Egypt in the early nineteenth century is one of many enlightening illustrations of the role of sovereignty and massive state intervention in economic development. Having freed itself from British rule, the United States was able to disregard economic theory and adopt British-style measures of large-scale state intervention and protectionism, and it developed. Meanwhile British power was able to bar anything of the sort in Egypt, joining with France to impose Lord Palmerston’s doctrine that “no ideas therefore of fairness towards [Egypt’s modernizing autocrat] Mehemet [Ali] ought to stand in the way of such great and paramount interests” as barring competition with Britain in the eastern Mediterranean. Palmerston expressed his “hate” for the “ignorant barbarian” who dared to undertake economic development, an effort that was beaten down by imperial violence.6
Historical memories resonate when today Britain and France, fronting for the United States, demand that Iran suspend all activities related to nuclear and missile programs, including research and development, so that nuclear energy is barred and the country that is perhaps under the greatest threat of any in the world is denied a deterrent against attack by the United States and its Israeli client: U.S. missile defense programs in Eastern Europe and the Gulf region, to which we return, are in part a thinly disguised contribution to these ends. We might also recall that France and Britain played the crucial role in development of Israel’s nuclear arsenal, and that U.S. neocons strongly advocated nuclear programs in Iran while it was under the rule of the U.S.-imposed tyrant. Imperial sensibilities are delicate indeed.
Had it enjoyed sovereignty, Egypt might have undergone an industrial revolution in the nineteenth century. It shared many of the advantages of the United States. Egypt had rich agriculture, including cotton, an incipient development of manufacture, and an indigenous labor force that would have avoided the need to rely on extermination and slavery. But it lacked one crucial factor: independence, which allowed the United States to impose very high tariffs to bar superior British goods—first textiles, later steel and others. As already mentioned, the sovereign United States became the world’s leader in protectionism, and economic growth. Egypt in contrast stagnated and declined.
Other comparisons suggest similar conclusions about the crucial role of sovereignty and the ability to enter the international system on terms of one’s own choosing. One of the leading historians of Africa, Basil Davidson, observes that modernizing reforms in West Africa in the late nineteenth century were similar to those implemented by Japan at the same time, and believes that the potential for development “was in substance no different from the potential realized by the Japanese after 1867.” An African historian comments that “the same laudable object was before them both, [but] the African’s attempt was ruthlessly crushed and his plans frustrated” by British force. West Africa joined Egypt and India, not Japan and the United States, which were able to pursue an independent path, free from colonial rule and the strictures of economic rationality.7
The Haiti-Taiwan case, noted earlier, is another example. And these are not unusual, but more like the norm.
The hazards of what is now called “neoliberalism” were recognized quite early. One prominent example is Adam Smith. The term “invisible hand” appears only once in his classic Wealth of Nations. His primary concern was England. He warned that if English merchants and manufacturers were free to import, export, and invest abroad, they would profit while English society would be harmed. But that is unlikely to happen, he argued. The reason is that English capitalists would prefer to invest and purchase in the home country, so as if by an “invisible hand,” England would be spared the ravages of economic liberalism. The other leading founder of classical economics, David Ricardo, drew similar conclusions. Using his famous example of English textiles and Portuguese wines, he concluded that his theory of comparative advantage would collapse if it were advantageous to the capitalists of England to invest in Portugal for both manufacturing and agriculture. But, he argued, thanks to “the natural disinclination which every man has to quit the country of his birth and connections,” and “fancied or real insecurity of capital” abroad, most men of property would “be satisfied with the low rate of profits in their own country, rather than seek a more advantageous employment for their wealth in foreign nations,” feelings that “I should be sorry to see weakened,” he added. We need not tarry on the force of their arguments, but the instincts of the classical economists were insightful.8
The post–World War II period conforms closely to these conclusions. There have been two phases. The first was under the economic regime established by the United States and Britain at Bretton Woods after the war, negotiated by Harry Dexter White for the United States and John Maynard Keynes for England. They shared the belief that economic sovereignty is a crucial factor in growth. The system they designed was based on capital controls and regulated currencies in order to protect economic sovereignty, and to permit state intervention to carry out social democratic measures. The regime lasted for about twenty-five years, and was extremely successful by historical standards. By the mid-1970s, the system was gradually replaced in parts of the world by neoliberal principles. The outcomes should surprise no one familiar with economic history. Growth slowed and became far more inegalitarian. The effects were most severe in the countries that most rigidly observed the neoliberal rules: Latin America and Africa, in particular. There were exceptions to the generally harmful impact of neoliberal policies: the countries that rejected the rules, notably the “tigers” of East Asia and China. The basic conclusions were summarized by José Antonio Ocampo, the executive secretary of the Economic Commission for Latin America and the Caribbean: “The period of fastest growth in the developing world in the postwar period, and most prolonged episodes of rapid growth (the East-Asian or the most recent Chinese and Indian ‘miracles’ or, in the past, the periods of rapid growth in Brazil or Mexico), do not coincide with phases or episodes of extensive liberalization, even when they involved a large scale use of the opportunities provided by international markets.”9 And we may add that the same has been true internally in the industrial societies, though they have means to protect themselves.
Reviewing the neoliberal experience of the preceding quarter century, a study of the Center for Economic and Policy Research finds that it has been accompanied by slower rates of growth and reduced progress on social indicators—the most meaningful measure of social health. That holds for countries from rich to poor. In a detailed analysis, economist Robert Pollin found that “the overall growth pattern is unambiguous…there has been a sharp decline in growth in the neoliberal era relative to the developmental state period” that preceded, a decline of over half, a “downward growth trend [that] is even more dramatic” when measured per capita, with increase in inequality and little or no reduction of poverty (when China, which rejected the neoliberal policies, is excluded), and devastating side effects among the most vulnerable. Political economist Robert Wade observes that “one of the big—and underappreciated—facts of our time [is the] dramatic growth slowdown in developed and developing countries” in the quarter century of neoliberal economic policy, including, probably, increase in poverty and inequality within and between countries when China is removed and realistic poverty measures are used. “What is striking,” he writes, is that virtually all countries that developed rapidly “maintained policy regimes that would mark them as serious failures by neoliberal criteria…while many of the best pupils” have done quite poorly.10
Similar conclusions have been reached in other studies. To mention one, international economist David Felix shows that trade growth slowed in the neoliberal period in the rich (G-7) societies (with the sole exception of the United States, which had been well below the G-7 average). The same is true of growth of gross fixed investment. Capital flow of course sharply increased, but “the flows have been transferring ownership but little real resources on balance.” Furthermore, “the growth of labor, capital, and total factor productivity have all fallen precipitously since the 1960s in the OECD [Organisation for Economic Co-operation and Development] countries.”11
In brief, the twenty-five years of economic sovereignty, state-coordinated economic growth, and capital controls under the Bretton Woods system led to better social and economic results than the following twenty-five years of neoliberalism, by just about every relevant measure, and by significant margins. It is important to stress that the results include social indicators. In the United States, for example, growth during the Bretton Woods period was not only the highest ever over a lengthy period, but was also egalitarian. Real wages closely tracked increase in productivity, and social indicators closely tracked growth. That continued until the mid-1970s, when neoliberal policies began to be imposed. Growth continued, but gains were heavily skewed toward the rich, spectacularly so for the very rich. Productivity continued to increase, though more slowly, but real wages for the majority stagnated and the profits went into few pockets, increasingly so in the Bush II years. From 1980 to 1995, real wages for average American workers declined about 1 percent, far more sharply for those lower on the income ladder. A tech bubble raised wages in the late ’90s, but after it burst, the stagnation and decline continued, worsening in the Bush years, which also left a long-lasting fiscal burden as a result of sharp tax cuts and war spending, greater than the effect of Obama-era stimulus and bailout, contrary to much fevered commentary. From 1975, social indicators began to decline, reaching the level of 1960 by the year 2000, the latest results available.12
The facts are sometimes obscured by the observation that conditions have generally improved under the neoliberal regime, but that is uninformative; conditions almost invariably improve over time by gross measures. Another way of obscuring the facts is by muddling export orientation with neoliberalism, so that if a billion Chinese experience high growth with high exports under policies that violate neoliberal principles, then the increase in average global growth rates can be hailed as a triumph of the principles that China violated. The harmful tendencies associated with neoliberal policies are consistent with economic history over a much longer term.
I mentioned that other factors intervene in the twentieth century in the rich industrial societies, to some extent before as well. The most important of them is the role of the state sector in the economy, often under the guise of “defense.” Such measures have played a prominent role in technological and industrial development since the early days of the industrial revolution. That included major advances in metallurgy, electronics, machine tools, and manufacturing processes, including the American system of mass production that astounded nineteenth-century competitors and set the stage for the automotive industry and other manufacturing achievements, based in substantial part on many years of investment, research and development, and experience in weapons production within U.S. Army arsenals. Management of the most complex industrial system in the nineteenth century, railroads, was well beyond the capacities of private capital. It was therefore handed to the U.S. Army. A century ago, some of the hardest problems of electrical and mechanical engineering, and metallurgy, had to do with placing a huge gun on a moving platform aimed at another moving object—naval gunnery. In this case England and Germany were in the lead, and the advances made within the state sector soon spun off to private industry.13
These processes underwent a qualitative leap forward after World War II, this time primarily in the United States, as defense provided a cover for creation of the core of the modern high-tech economy: that includes computers and electronics generally, telecommunications and the Internet, automation, lasers, the commercial aviation industry (and with it tourism, a huge service industry), containers and therefore contemporary trade, and much else, now extending to pharmaceuticals and biotechnology as well as nanotechnology, neuroengineering, and other new frontiers. Economic historians have pointed out that the technical problems of naval armament a century ago were roughly comparable to manufacture of space vehicles, and the enormous impact on the civilian economy might be duplicated as well, enhanced by current space militarization projects—a major threat to survival, but a stimulus to the advanced economy. From the 1950s to the end of the century, public funding accounted for 50–70 percent of research and development, often under the guise of defense. The figures underestimate the reality, because they do not take account of the difference in R&D in the public and private sectors, the former typically more fundamental (hence with greater risk and cost and more significant long-term impact), the latter tending to be more commercially oriented (consumer electronics, copycat drugs, etc.). It is hardly an exaggeration to say that the much-praised “New Economy” is in large part a product of the state sector. Nobel laureate Joseph Stiglitz writes that “a report by the Council of Economic Advisers (conducted when I was its chair) found that the returns on public investment in science and technology were far higher than for private investment in these areas…than for conventional [private] investment in plant and equipment.”14
One effect of incorporating national security exemptions in the mislabeled “free trade agreements” is that the rich industrial societies, primarily the United States, can maintain the state sector, on which the economy heavily relies to socialize cost and risk while privatizing profit. For most of the world, the exemptions mean nothing.
Governments and business understand this very well. Germany at first was critical of the missile defense programs, recognizing the dangers they pose to survival. But German chancellor Gerhard Schroeder backed away from his critical stance because, as he said, Germany would have a “vital economic interest” in developing ballistic missile defense technology, and must be sure that “we are not excluded” from technological and scientific work in the field, mostly subsidized by the U.S. taxpayer. Similarly, the U.S. trade organization lobbying for missile defense advised Japanese officials in 1995 that this may be “the last military business opportunity for this century,” so they had better come on board.15 In the new century, more advanced military programs provide many other opportunities for private capital to profit from public expenditures. All of this enhances the threat to decent survival, but that has always been a secondary consideration.
The state sector is central to innovation and development not only in national laboratories and universities, but also in many other ways: subsidy to corporations, procurement, guaranteeing monopoly pricing rights in the “free trade agreements,” and other devices. The failure of the economics profession to attend to these factors is sometimes startling, perhaps similar to the myths about development that Bairoch and other economic historians have discussed. Until the crash of the financial markets he administered in 2007–8, the world’s most revered economist, I suppose, was Alan Greenspan, so let us take him as an example. In one of his orations on the miracles of the market, based on entrepreneurial initiative and consumer choice, he went beyond the usual rhetorical flourishes and gave actual examples: the Internet, computers, information processing, lasers, satellites, and transistors. The list is interesting: these are textbook examples of creativity and production taking place substantially in the public sector, mostly the Pentagon, in some cases for decades, with consumer choice approximately zero during the crucial development stages and entrepreneurial initiative mainly at the marketing end, and relying heavily throughout on the state sector to acquire technology and skills. The Internet was largely within the state sector for about thirty years before it was handed over to private capital in 1995. In the 1950s, computers were enormous, with vacuum tubes regularly blowing up, paper flowing all over with programs punched in, hours of wait for the simplest operation. When the Pentagon-financed systems were reaching the stage where they might soon be sold for profit, several of the leading engineers left the main government laboratory and founded DEC (Digital Equipment Corporation), which went on to become a primary driving force in the computer industry into the early 1980s, when personal computers became available. 
Meanwhile IBM was using Pentagon-funded computer systems to learn how to shift from punched cards to computers. IBM was able to manufacture the world’s fastest computer in 1961, but its price tag was too high for the market, so it was sold to the government’s Los Alamos laboratory. Procurement has regularly been a successful way for the public to subsidize private industry.16
Of the examples that Greenspan gives, the only one that does not come directly from the state sector is transistors. They were developed in a private laboratory, the Bell Telephone Laboratories, which also made major contributions to science over a broad range. But the role of markets in transistor development was slight. The parent corporation AT&T had a government-guaranteed monopoly on telephone service, so it could tax the public through high prices. Furthermore, the lab used wartime technology, again publicly subsidized and state-initiated. And for years high-performance transistors were too expensive for the private sector, so they were purchased only by the military. When the AT&T monopoly was terminated, the great laboratory declined because of lack of public subsidy, and its successors now concentrate on short-term projects for profit.17
The role of the state is not just to create and protect high-tech industry. It also intervenes to overcome management failures. That became a serious problem again by the 1970s, as it had been during the development of railroads. The business world was becoming concerned over low rates of productivity and investment growth and the failure of U.S. management to keep up with more advanced foreign methods. The business press was calling for “the re-industrialisation of America.” The military was once again called up to help. One major Pentagon program of the ’70s, MANTECH (manufacturing technology), doubled its outlays as Reagan took over. One of its tasks was to design the “factory of the future,” integrating computer technology and automation in production and design and developing flexible manufacturing technology and management efficiency, in an effort to catch up with Europe and Japan. The goal was to boost the market share and industrial leadership of U.S. industry in the traditional way, through state initiative and taxpayer funding. There was also a side benefit: the factory of the future could be designed to control the workforce. That is an old story. For example, automation and computer-controlled machine tools were developed in the public sector for a long period, then finally handed over to private industry. Within the state sector the technology was designed in a specific way: to de-skill workers and enhance management control. That choice was not inherent in the technology and does not appear to have been more profitable. But it is a powerful weapon in class war. The topic was well studied by then–MIT professor David Noble in important work.18
These programs expanded under the Reagan administration, which went beyond the norm in violating market principles for the rich, while excelling in elevated rhetoric about the need for market discipline for the poor. Under Reagan, Pentagon-supported research promoted new technologies in many areas, including supercomputers and information technology, further improvement of the Internet (which was initiated under Pentagon funding), and others. The Reagan administration also virtually doubled protective barriers, breaking all postwar records in protectionism. The purpose was to keep out superior Japanese products: steel, automotive, semiconductors, computers, and others. The goal was not only to save domestic industries that could not compete, but also to place them in a dominant position for the 1990s—now called a “triumph of the market,” thanks in large measure to public subsidies, public sector innovation and development, protection, straight bailouts, and other devices.
It might be mentioned that Reaganite dedication to state intervention and protectionism also retraced the British experience. When British manufacturers could no longer compete with Japan, Britain abandoned its (partially rhetorical) love for free trade and closed off the empire to Japanese imports, as did the other Western powers in their smaller Asian empires. This is an important part of the background for World War II in the Pacific.
Radical state intervention in the economy continues to the present, now shifting toward the biology-based industries that are the cutting edge of the next phase of the economy and that therefore have been receiving rapidly increasing state funding for R&D. That funding declined under the Bush II administration, in part because of the administration’s relative lack of concern for the health of the society as compared with the much greater imperatives of enriching the rich even further (for example, by huge tax cuts) and shaking their fists to intimidate the world.
The private sector is assisted by state intervention in other ways. One central component of the World Trade Organization rules, crafted by a few powerful states and the multinational corporations closely linked to them, is an array of provisions (TRIPS—trade-related aspects of intellectual property rights) to guarantee monopoly pricing to huge corporations by means of a patent regime that exceeds anything in history and would have seriously impeded economic development in the rich societies had it existed during their growth period. It serves primarily to enrich private corporations that rely heavily on the state sector for R&D. Economist Dean Baker, one of the few to have investigated the phenomenon carefully, concluded that if public funding for R&D for pharmaceuticals were increased to 100 percent and the corporations were compelled to sell at market prices, the savings to consumers would be colossal—not to speak of the lives saved. But “really existing capitalism” works in different ways. In recent years, U.S. negotiators have been seeking to extend monopoly pricing rights in bilateral agreements. The director of the World Health Organization in Thailand warned that hundreds of thousands of Thai citizens would be put at risk if Thailand accepted U.S. demands. He was abruptly transferred to a lesser position in India. There are many similar examples.19
These are among the many reasons one should avoid such terms as “free trade agreement.” The states that design the agreements and impose them on others are not in favor of free markets, or other forms of liberalization, except selectively, and for temporary advantage. Another reason for avoiding the term is that much of what is called “trade” is an ideological construction. In the old Soviet Union, if parts were produced in Leningrad, sent to Poland for assembly, and sold in Moscow, borders would have been crossed, but it was not called “trade” in Western commentary: rather, interactions within a command economy. And the same is true when GM produces parts in Indiana, sends them to Mexico for assembly, and sells the cars in New York. Since corporations are largely unaccountable to the public, the scale of such “trade” is unknown, but it is generally estimated at about 40 percent of trade—higher for U.S.-Mexico—and if we include outsourcing and other forms of market distortions the percentage would rise much higher. The concept of “trade” is further deprived of meaning when we consider “trade in services,” meaning privatization of services, a term that includes just about anything of concern to people: education, health, energy, water and other resources, etc. The term “trade” in this case is a euphemism for turning human life over to unaccountable state-supported private tyrannies. And finally, the “agreements” are not agreements, at least if people are considered to be part of their societies: the treaties are generally opposed by the populations, and therefore have to be established mostly in secret, or under “fast track” provisions that provide the state executive with Kremlin-style controls, with Congress given the right to say yes (and in principle no, but without serious discussion or information), and with the public virtually excluded, thanks in part to media complicity.20
In the phrase “North American free trade agreement,” the only accurate words are “North American.”
At this point we are reaching the second of the two issues raised at the outset: neoliberalism as an enemy of democracy. While the evidence indicates that imposed liberalization has generally been harmful to development over history, causal relations can be debated even when there are striking correlations, because of limits of understanding and the complexity of factors. Much less so in the case of neoliberalism and democracy, however. Just about every element of the neoliberal package is an attack on democracy. In the case of privatization, that is true by definition: privatization transfers enterprises from the public to the private domain. In the public domain they are under some degree of public control, at least in principle; in more democratic societies, that could be a considerable degree, and in still more democratic societies, which barely yet exist, they would be under the direct control of “stakeholders”: workers and communities. But the private domain is virtually unaccountable to the public in principle, except by regulatory mechanisms that are typically quite weak thanks to the overwhelming influence of concentrated private capital on the state.
Of these measures, the most severe attack on democracy is privatization of services. With services privatized, democratic institutions may exist but they will be mostly formalities, because the most important decisions that affect people’s lives will have been removed from the public arena.
The argument for privatization is supposed to be efficiency. If the argument were valid, it would pose a conflict of values: efficiency versus freedom. But it is doubtful that the question even arises. As privatization became the mantra of the World Bank and IMF, a number of studies were undertaken to compare performance of private and public enterprises. One major study was carried out under the auspices of the UN Conference on Trade and Development by Cambridge University economists Ha-Joon Chang and Ajit Singh. They found what one might expect: in well-functioning societies, both private and public enterprise tend to be efficient; in societies that are more corrupt and function poorly, the same will be true of private and public enterprises.21 In some areas the private sector performs much more poorly. In public enterprise in the industrial world, it would be hard to match the extraordinary corruption of Enron and WorldCom, or the extreme inefficiency of the health system in the United States, the only scarcely regulated privatized system among the industrial economies, with twice the per capita costs of others and some of the worst health outcomes, and the only one in which the government is barred by law from negotiating prices with pharmaceutical corporations—a “free market” policy extended by President Obama, bowing to the successful business campaign to undermine reforms that might cut into profit. Infant mortality in the United States has risen in the past few years to the level of Malaysia, just to mention one illustration. Actually “inefficiency” is not the right word. The health system is highly efficient in performing its institutional role of enriching investors.22
Chile is often hailed as the model of a free-market economy. It did indeed follow neoliberal rules after the Pinochet coup, when they could be imposed by violence, under the guidance of the famous “Chicago boys.” Their mentors collected their Nobel Prizes while the economy collapsed and had to be bailed out by the state, which by 1982 controlled more of the economy than under Allende; the process was called “the Chicago road to socialism,” international economist David Felix recalls. Economist Javier Santiso of the OECD Development Center terms it a “paradox” that “able economists committed to laissez-faire showed the world yet another road to a de facto socialized banking system”; no paradox, to those familiar with economic history. Chile did manage to recover, but by a complex mixture of market reliance and state intervention, including a form of capital control (violating the core principle of neoliberalism) and state ownership of the world’s largest copper producer, Codelco, another radical violation of neoliberal principles, and the source of much of Chile’s export earnings and the state’s fiscal revenues. As the Financial Times observes, after the “catastrophic banking crisis of 1982, the product in part of economic policies pursued by the radical free-marketers known as the Chicago boys, [Chile] cooled its ideological fervor” and by the 1990s “controlled its exposure to world financial markets and maintained its efficient copper company in public hands,” somewhat protecting itself from market disasters by these and other measures.23
It remains to look at the central doctrine of neoliberalism: financial liberalization, which began to take off from the early 1970s. Some of its effects are well known. There was a huge increase in speculative capital flows and countries were forced to set aside much larger reserves to protect currencies from attack, in both cases removing capital from productive use. It is striking, and well known, that countries that maintained capital controls avoided some of the worst financial crises (among them India, China, and Malaysia, during the Asian crisis of 1997–8). In the United States, the share of the financial sector in corporate profit rose from a few percent in the 1960s to over 30 percent in 2004. Concentration also sharply increased, thanks substantially to the deregulatory zeal of the Clinton administration, which set the stage for the doubling of the share of banking industry assets held by the twenty largest institutions to 70 percent from 1990 to 2009, helping create the “too big to fail” disaster of 2007–8. Financialization of the economy had a direct effect on the dismantling of the manufacturing sector, along with other policy decisions, such as the “trade agreements” that were designed to set manufacturing workers in competition with low-wage workers without benefits and protections elsewhere, while evading the “free trade” principle of competition in the case of highly educated professionals.24
The business press sometimes recognizes the dilemmas of the state-corporate economic policies—and also has few illusions about “free markets.” In a cover story on the question “Can the Future Be Built in America?” Business Week outlines the basic problem. The United States, it reports, “is at or near the cutting edge in most of the emerging product areas. Indeed, the new wave of high-tech devices hitting the market is the payoff from billions of dollars in taxpayer-funded research at federal and university science labs stretching back to the 1960s.” However, “the U.S. is losing its lead in large-scale high-tech manufacturing,” because of lack of concern for manufacturing by corporate and state economic managers. The result is the “‘invented here, industrialized elsewhere’ syndrome,” as the United States becomes “a big funnel of R&D for Asia.”25
These are natural consequences of the financialization of the economy in an era of neoliberal globalization. For the “principal architects of policy,” it is entirely reasonable to shift production abroad while the taxpayer funds R&D, and financial manipulations are concentrated at home. The effects on the society at large may be “grievous”—perhaps even long-term effects for the masters—but these are secondary matters at best, in accord with Adam Smith’s maxim along with the institutional constraints on decision making in quasi-market systems in which inherent market inefficiencies are magnified by state-provided perverse incentives, matters to which we return.
Among the consequences is probably the greatest inequality in U.S. history. There is also a global analogue: the creation of what is called “plutonomy” in an upbeat analysis by Citigroup, a bank that is once again feeding at the public trough, as it has done regularly for thirty years in a cycle of risky loans, huge profits, crash, bailout. The bank’s analysts describe a world that is dividing into two blocs—the plutonomy and the rest. The United States, UK, and Canada are the key plutonomies, economies in which growth is powered by, and largely consumed by, the wealthy few, joined by scattered islands of wealth elsewhere in a global plutonomy. In plutonomies, they write, there are rich consumers, few in number, but disproportionate in the gigantic slice of income and consumption they take. Then there are the “non-rich,” the vast majority, who only account for surprisingly small bites of the national pie. Two-thirds of the world’s economic growth, they estimate, is driven by consumption, primarily by the plutonomies, which of course monopolize profits as well.
The Citigroup analysts are providing advice to investors: investment strategy, they advise, should focus on the very rich, where the action is. Their “Plutonomy Stock Basket,” as they call it, has far outperformed the world index of developed markets since 1985, when the Reagan-Thatcher economic programs of enriching the very wealthy were really taking off. This is a substantial extension of the “80-20 rule” that is taught in business schools: 20 percent of your customers provide 80 percent of the profits, and you may be better off without the other 80 percent. The business press explained years ago that modern information technology—in large measure a gift from an unwitting public—allows corporations to identify profitable customers and provide them with grand treatment, while deliberately offering skimpy services to the rest, whose inquiries or complaints can be safely sidetracked, creating a profitable form of “consumer apartheid.” The experience is familiar, and carries severe costs—how great when distributed over a large population, we don’t know, because they are not included among the highly ideological measures of economic efficiency. Now this principle of economic rationality can be sharpened further and generalized worldwide, Citigroup cheerily proclaims.26
Sometimes the effects are surreal. It is finally dawning on the last holdouts in the business sector that the growing environmental crisis is severe. Even the Wall Street Journal, one of the most stalwart deniers, ran a supplement with dire warnings about the “climate disaster,” suggesting that none of the options being considered may be sufficient and that it may be necessary to undertake more radical measures of geoengineering, “cooling the planet” in some manner.27 Many also understand that it will be necessary to reverse the vast state-corporate social engineering programs since World War II, designed to promote an energy-wasting and environmentally destructive fossil fuel–based economy, and that a central element of these changes will have to be development of efficient high-speed rail systems. It is revealing to see how the problem is being addressed.
The Wall Street Journal reported that “U.S. transportation chief [Ray LaHood] is in Spain meeting with high-speed rail suppliers.… Europe’s engineering and rail companies are lining up for some potentially lucrative U.S. contracts for high-speed rail projects. At stake is $13 billion in stimulus funds that the Obama administration is allocating to upgrade existing rail lines and build new ones that could one day rival Europe’s fastest.… [LaHood is also] expected to visit Spanish construction, civil engineering and train-building companies.”28
Spain and other European countries are hoping to get U.S. taxpayer funding for the high-speed rail and related infrastructure that is badly needed in the United States. At the same time, Washington is busy dismantling leading sectors of U.S. industry, ruining the lives of the workforce and communities. It is difficult to conjure up a more damning indictment of the economic system that has been constructed by state-corporate managers. Surely U.S. manufacturing industries could be reconstructed to produce what the country needs, using its highly skilled workforce—and what the world needs, and soon, if we are to have some hope of averting major catastrophe. It has been done before, after all. During World War II, industry was converted to wartime production and the semi-command economy not only ended the Depression, but initiated the most spectacular period of growth in economic history as the economy was retooled for war, also laying the basis for the “golden age” that followed.29
The state-corporate leadership has other commitments, but there is no reason for passivity on the part of the “stakeholders”—workers and community. With enough popular support they could take over the plants and carry out the task of conversion themselves. That is not a particularly radical proposal. One standard text on corporations points out that “nowhere is it written in stone that the short-term interests of corporate shareholders in the United States deserve a higher priority than all other corporate ‘stakeholders.’” There have been some important concrete efforts. One was undertaken thirty years ago in Youngstown, Ohio, where U.S. Steel was about to shut down a major facility that was at the heart of the life of this steel town. There were substantial protests by the workforce and community, then an effort led by labor lawyer and activist Staughton Lynd to bring to the courts the principle that stakeholders should have the highest priority. The effort failed that time, but with enough popular support it could succeed.30
It is worth remembering that such ideas have deep roots in American history and culture. In the early days of the industrial revolution in New England, working people took it for granted that “those who work in the mills should own them.” They also regarded wage labor as different from slavery only in that it was temporary—Abraham Lincoln’s view as well. The leading twentieth-century social philosopher, John Dewey, basically agreed. Much like nineteenth-century working people, he called for elimination of “business for private profit through private control of banking, land, industry, reinforced by command of the press, press agents and other means of publicity and propaganda.” Industry must be changed “from a feudalistic to a democratic social order” based on workers’ control, free association, and federal organization, in the general style of a range of thought that includes, along with many anarchists, G. D. H. Cole’s guild socialism and such left Marxists as Anton Pannekoek, Rosa Luxemburg, Paul Mattick, and others, including the late Seymour Melman, who studied the matter in some depth for many years. Unless those goals are attained, Dewey held, politics will remain “the shadow cast on society by big business, [and] the attenuation of the shadow will not change the substance.” He held that without industrial democracy, political democratic forms will lack real content, and people will work “not freely and intelligently,” but for pay, a condition that is “illiberal and immoral”—ideals that go back to the Enlightenment and classical liberalism before it was wrecked on the shoals of capitalism, as the anarchosyndicalist thinker Rudolf Rocker put it seventy years ago.31
There have been immense efforts to drive these thoughts out of people’s heads—to win what the business world has called “the everlasting battle for the minds of men.” On the surface, they seem to have succeeded. But such victories for the masters have often proven to be illusory in the past, and they might be again. If they are, then the question raised by Business Week—“Can the Future Be Built in America?”—might have an answer that would be part of a very different America, one that might realize promises of freedom and justice that have too long been suppressed.
Advocates of the radical shift toward financialization of the economy claim that there are compensating economic advantages, but again we get into murky areas, as mentioned earlier. However, the effect on democracy is immediately visible.
Financial liberalization creates what some international economists have called a “virtual Senate” of investors and lenders, who “conduct moment-by-moment referendums” on government policies. If the virtual Senate determines that the policies are irrational—meaning that they are designed to benefit people, not profit—then it can exercise its “veto power” by capital flight, attacks on currency, and other means. To mention one recent example, after Hugo Chávez was inaugurated, capital flight escalated to the point where capital held abroad by wealthy Venezuelans equaled one-fifth of Venezuela’s GDP, Santiso reports, adding that after the U.S.-backed military coup in 2002 “the response of the markets approached euphoria” and the Caracas Exchange registered huge gains, collapsing when the elected government was restored by popular protests. In general, with capital flow liberalized, governments face a “dual constituency”: voters and the virtual Senate. Even in the rich countries, the private constituency tends to prevail.32
Financial liberalization therefore serves as an effective curb on democracy. Perhaps it is coincidence, perhaps not, but it is worth noticing that financial liberalization was introduced along with the growing concern of elites over what they called the “crisis of democracy” of the 1960s,33 when normally passive and obedient sectors of the society, often called the “special interests,” began to enter the public arena to put forward their concerns. The result was called “excessive democracy,” an overload that left the state unable to attend properly to the “national interest.” The special interests are women, workers, farmers, the young, the elderly, minorities, majorities—in fact, the general population. The “national interest” is defined by those who own and run the society. I am paraphrasing the liberal internationalist sector of elite opinion, those who staffed the Carter administration in the United States, and their counterparts in Europe and Japan. Farther to the right, and in the business world, the need to overcome the “crisis of democracy” was a still more pressing concern. Many measures have since been employed to purge society of the evil of democracy, right to the present. Financial liberalization made a potent contribution, whether by design or not.
Under powerful public pressure, such measures for undermining democracy were restricted under the Bretton Woods system established after World War II. The Great Depression and the war aroused radical democratic currents, taking many forms, from the antifascist resistance to working-class organization. These pressures made it possible—and from a different point of view, necessary—to permit social democratic policies. The Bretton Woods system was presumably designed in part for that purpose, with the understanding that capital controls and regulated currencies would create a space for government action responding to public will—for some measure of democracy, that is. Keynes considered the most important achievement of Bretton Woods to be establishment of the right of governments to restrict capital movement. In dramatic contrast, in the neoliberal phase that followed, the U.S. Treasury Department now regards free capital mobility as a “fundamental right,” unlike such alleged “rights” as those guaranteed by the Universal Declaration of Human Rights: health, education, decent employment, security, and other rights that the Reagan and Bush administrations dismissed as “letters to Santa Claus,” “preposterous,” mere “myths.”34
In earlier years the public had not been much of a problem. The reasons are reviewed by economist Barry Eichengreen in his standard history of the international monetary system. He observes that in the nineteenth century, governments had not yet been “politicized by universal male suffrage and the rise of trade unionism and parliamentary labor parties.” Therefore the severe costs imposed by the virtual Senate of lenders and investors could be transferred to the general population. But with the radicalization of the general public during the Great Depression and the antifascist war, that luxury was no longer available to private power and wealth. Hence in the Bretton Woods system, “limits on capital mobility substituted for limits on democracy as a source of insulation from market pressures.”35 It is only necessary to add the obvious corollary: with the dismantling of the system from the 1970s, functioning democracy is restricted. It therefore becomes necessary to divert and control the public in some fashion, processes that are particularly evident in the more business-run societies like the United States, a topic that I will have to put to the side despite its extreme importance.
In Latin America, specialists and polling organizations observed for some years that extension of formal democracy was accompanied by increasing disillusionment about democracy and “lack of faith” in democratic institutions. A persuasive explanation for these disturbing tendencies was given by Argentinian political scientist Atilio Boron, who pointed out that the new wave of democratization in Latin America coincided with neoliberal economic “reforms,” which undermine effective democracy. The phenomenon extends worldwide, in various forms. It appears that the tendency may have reversed in recent years with departures from neoliberal orthodoxy and other developments discussed earlier.36
The annual polls on Latin American opinion by the Chilean polling agency Latinobarómetro, and their reception in the West, are interesting in this respect. Few doctrines of reigning Western orthodoxy are upheld with more fervor than the principle that Hugo Chávez is a tyrant dedicated to destruction of democracy. The polls are therefore a serious annoyance, one that has to be overcome by the usual device: suppression. The November 2007 poll had the same irritating results as in the preceding few years. Venezuela ranked second behind Uruguay in satisfaction with democracy and third in satisfaction with leaders. It ranked first in assessment of the current and future economic situation, equality and justice, and education standards. True, it ranked only eleventh in favoring a market economy, but even with this flaw, overall it ranked highest in Latin America on matters of democracy, justice, and optimism, far above U.S. favorites Colombia, Peru, Mexico, and Chile.
Latin America analyst Mark Turner writes that he “found an almost total English speaking blackout about the results of this important snapshot of [Latin American] views and opinions,” as had been the case in earlier years. He also found the usual exception: there were reports of the finding that Chávez is about as unpopular as Bush in Latin America, a fact that will come as little surprise to those who are familiar with the bitterly hostile coverage to which Chávez is subjected in the media, in the Venezuelan press as well, an odd feature of this looming dictatorship.37
In the United States, faith in institutions has been declining steadily, and for good reasons. A considerable gulf has developed between public opinion and public policy, rarely reported, though people can hardly fail to be aware that their policy choices are disregarded. It is interesting to compare near-simultaneous recent presidential elections in the richest country in the world and the poorest country in South America. In the 2004 U.S. presidential election, voters had a choice between two men born to wealth and privilege, who attended the same elite university, joined the same secret society where privileged young men are trained to take their place in the ruling class, and were able to run in the election because they were supported by pretty much the same conglomerations of private power. Their announced programs were similar, consistent with the needs of their primary constituency: wealth and privilege. Studies of public opinion revealed that on a host of major issues, both parties were well to the right of the general population, the Bush Republicans sharply so. In part for these reasons, party managers generally displace issues from the electoral agenda. Few voters even knew the candidates’ stands on the issues. Candidates are packaged and sold like toothpaste and cars and lifestyle drugs, and by the same industries dedicated to delusion and deceit.
Furthermore, the destruction of democracy is highly regarded. In the most liberal daily paper of the country, a leading consultant to the Democratic Party presented his advice for the November 2006 congressional elections. Democrats, he wrote, must realize, as Republicans do, that “politics is not about issues. Politics is about identity. The candidates and parties that win are not those aligning their positions most precisely with a majority of the electorate. The winners are those who form a positive image in the public mind of who they are (and a negative image of who their opponents are).” What is important is “symbolism and narrative to shape what the public thinks about,” just as in marketing other commodities.38
His advice was followed in the next presidential campaign, a matter to which we return.
Consider in contrast the December 2005 election in the poorest country in South America, Bolivia. Voters were familiar with the issues, and they were very real and important ones: control of resources, cultural rights for the indigenous majority, problems of justice in a complex multiethnic society, and many others. Voters chose someone from their own ranks, not a representative of narrow sectors of privilege. There was real participation, extending over years of intense struggle and organization. Election day was not just a brief interlude for pushing a lever and then retreating to passivity and private concerns, but one phase in ongoing participation in the workings of the society.
The comparison, and it is not the only one, raises some questions about where programs of democracy promotion are needed.
Latin America has real choices, for the first time in its history. The traditional modalities of imperial control—violence and economic strangulation—are much more limited than before. There are lively and vibrant popular organizations providing the essential basis for meaningful democracy. Latin American and other former colonies have enormous internal problems, and there are sure to be many setbacks, but there are promising developments as well. It is in these parts of the world that today’s democratic wave finds its basis and its home. That is why the World Social Forum has met in Porto Alegre, Mumbai, Caracas, Nairobi, not in northern cities, though by now the global forum has spawned many regional and local social forums, doing valuable work geared to problems of particular significance in their own regions. The former colonies, in Latin America in particular, have a better chance now than ever before to overcome centuries of subjugation, violence, repression, and foreign intervention, which they have so far survived as dependencies with islands of luxury in a sea of misery. These are exciting prospects for Latin America, and if the hopes can be realized, even partially, the results cannot fail to have a large-scale global impact as well.