How So-called Experts Mislead Us about




Patrick Bond described the result this way: “Nearly two years after the demise of the Clinton health-care plan, nearly all of the plan’s right wing critics’ prognostications are coming true—but under the exact opposite circumstances they imagined. Patients are indeed finding their freedom of choice severely limited, but by emerging private oligopolies, not by a national health plan. Huge bureaucracies are making critical health-care decisions for patients, but those bureaucracies are private, not governmental. Waste is in fact widespread but it is private, not public, red tape that is the cause.”149



The quasi-private U.S. Postal System

Years ago when it was widely felt that inefficiency in the government-operated U.S. Post Office was causing burdensome deficits, the operation was spun off into a quasi-private entity. It remained a monopoly and was under broad government supervision. Despite United States Postal Service (USPS) claims to the contrary, mail deliveries have become slower, stamp prices have continued to rise, and the proportion of commercial and fund-raising mass mail at reduced rates has risen sharply in contrast to first class letters used by the general public.

In the same kind of “revolving door” that has developed in the defense sector and various regulatory agencies, postal officials move to jobs with private sector mail sorting corporations while the postal Board of Governors is exclusively made up of corporate executives and compliant politicians. In 1988 a team of private mailing industry executives, publishers, and high volume mailers met with postal officials to restructure rates without any representation of the general public. Industry is now allowed a discount for pre-sorted mail that is nearly 20 times what it would cost the USPS to do the sorting on its own automated equipment. Most of the more than 80,000 workers in the pre-sort industry get minimum wage and have few, if any, benefits.

Postal jobs, once highly prized by large numbers of applicants who competed in civil service examinations, have become so stressful that some workers have snapped and their shooting rampages have created the expression “going postal” as a synonym for going berserk. Beyond this, the quasi-private USPS has contracted out parts of its work to private companies such as Time, Inc., R.R. Donnelly, ITT, and Lockheed. The private sector operations reap the benefits of the millions of dollars spent by the USPS on research and development of new mail sorting technologies, including optical character reading and remote sorting.

Fifteen new contracts for remote sorting were awarded in 1993. Communities and states entered into a bidding war with low wage rates, tax incentives, and outright grants to attract these contracting firms who claimed to be bringing “new” jobs to communities. Actually, the contractors replaced postal workers who had operated letter sorting machines.150 Pennsylvania offered nearly $3.9 million to DynCorp, whose workers at York earn $6.12 per hour. Oakland, California used at least six state and city agencies to help Envisions convert $13 per hour postal jobs to $8 per hour private jobs. Sarah Ryan commented in her 1995 article in Dollars and Sense, “Privatization turned out to require lots of public resources.”151


Privatizing Social Security

Another target for privatization is Social Security. Proponents of radical changes in the system have sounded alarms, predicting that the retirement of the Baby Boom generation will deplete reserves faster than the workers of the smaller succeeding generation can replenish them. Critics keep referring to the government bonds in the trust funds as “mere IOUs”—a term they never apply to bonds held by individuals, banks, and foreigners.

Rump “Generation X” organizations have been widely quoted in the media as believing Social Security will be bankrupt before their turn for pensions will arrive. This propaganda war has the objective of commercializing Social Security so as to generate profitable commissions for stockbrokers. Several business-financed think tanks have been behind this effort.

The Advisory Council on Social Security issued a split opinion in 1997 that offered three different solutions, varying chiefly in the extent to which Social Security contributions would be diverted from government bonds to the stock market. There were 6 out of 13 votes for a plan to allow some assets to be invested in the stock market, but retain Social Security as one system. The other 7 votes were split between two plans that would divert some FICA contributions to new forms of Individual Retirement Accounts.152

Described by Kevin Phillips as the “eminence grise” of the investment industry’s propaganda network, Peter G. Peterson, Chairman of the Blackstone Group of investment bankers, attacks Social Security in his capacity as president of the Concord Coalition.153 Peterson was commerce secretary to Nixon, an investment banker since then, and an advocate of a national sales tax. The Concord Coalition, sponsored by former Senator Warren Rudman and the late Senator Paul Tsongas, has proposed a ceiling on taxes for big business and the wealthy but cuts in Social Security and Medicare with means-testing and/or privatization of Social Security.

Co-chairmen of the Cato Institute’s “Project on Social Security Privatization,” begun in 1995, are José Pinera, the former labor minister of Chile who privatized that country’s pension system, and William Shipman of State Street Global Advisors, an investment company.154 By January 1997 the Cato Institute had taken its privatization campaign to the state level, and the legislatures of Colorado, Delaware, Georgia, and Oregon passed resolutions urging Congress to allow states to drop out of the Social Security system and set up their own plans for privatized pensions. This is part of the campaign in which the American Legislative Exchange Council (ALEC), a large coalition including prominent state legislators, passed a resolution as model legislation for state governments calling on the federal government to allow all states to opt out of the Social Security system.

Since advocates of privatizing Social Security have offered Chile as a model, it is worthwhile to look at that country’s experience with privatization.


Pension privatization failure in Chile

The economic measures introduced in Chile by economist Milton Friedman and his disciples from the University of Chicago under the dictator, Gen. Augusto Pinochet, in the mid-1970s have been acclaimed by some as an economic miracle. This is debatable, as will be discussed in a later chapter dealing with discredited classical economics dressed up as neo-classical or neo-liberal. At this point the focus is on privatization of Social Security.

Among other things, they privatized such government services as parks, prisons, utilities, schools, health care, and pensions. When Chile’s state-administered health and pension programs were privatized, the companies got to set service charges and exclude all but the best clients. The armed forces, however, were kept in special state-run programs.155

For ordinary Chileans, the state-run pension system, which had been described as inefficient, was replaced in 1981 by a system of compulsory private savings. A 12.7% real annual return on investment between 1982 and 1995 was claimed for the new system, but that figure ignored commissions. World Bank economist Hemant Shah showed that commissions reduced an individual’s average return to 7.4% (and to even less over other periods), a figure further reduced by the cost of financial advice on choosing among options at retirement, which can absorb as much as 3% to 5% of the accumulated savings.

By contrast, the U.S. system pays no commissions and administrative costs are in the range of 1% to 2%. Many Chileans, moreover, are not covered in the system because of lax enforcement of the compulsory savings.156
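The gap between the advertised and the realized return is a matter of compound interest. The sketch below (Python) works through that arithmetic; the yearly contribution and the 30-year horizon are illustrative assumptions, while the 12.7% and 7.4% rates and the 5% advice fee come from the figures above.

```python
# Illustrative only: how commissions and a retirement-advice fee erode an
# advertised return, using the rates cited in the text. The contribution
# amount and the 30-year horizon are assumptions, not Chilean data.

def accumulate(annual_rate, years, contribution_per_year):
    """Future value of a level contribution made at the start of each year."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + contribution_per_year) * (1 + annual_rate)
    return balance

years = 30
contribution = 1000.0   # assumed constant real contribution per year

gross = accumulate(0.127, years, contribution)   # 12.7% "headline" return
net = accumulate(0.074, years, contribution)     # 7.4% after commissions
net_after_advice = net * (1 - 0.05)              # up to 5% fee at retirement

print(f"Accumulation at 12.7%:              {gross:,.0f}")
print(f"Accumulation at  7.4%:              {net:,.0f}")
print(f"After a 5% retirement-advice fee:   {net_after_advice:,.0f}")
print(f"Share of headline accumulation kept: {net_after_advice / gross:.0%}")
```

Run over a full working life, the fee drag compounds: the saver keeps only a fraction of what the headline rate implies.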
Pension privatization failure in Britain

The British government, in the late 1980s, allowed workers to put part of their pension contributions into personal pension accounts while still paying into a government basic plan that provides about $100 a week minimum pension. Encouraged by a government media campaign and private financial promotions, millions chose the partial privatization, and millions suffered heavy losses. Investment firms owe about $18 billion in compensation to victims of their bad advice and are under investigation by Scotland Yard.

Britain’s Pension Investment Authority has estimated that a bail-out of investors’ losses would cost more than $15 billion. According to recent testimony of Prof. Teresa Ghilarducci of Notre Dame University to the House Ways and Means Committee, the privatized system has saved only about $5 billion in government pension costs. Workers have to pay commissions and fees for management of their accounts that run about 20%, creating large profits for financial firms. She added that the few investment and pension companies that control 80% of the business averaged profits of more than 22% in 1995. She said the reform recommended in the 1997 report of the U.K. Office of Fair Trading “looks a lot like our current U.S. Social Security System.”157
What went wrong in the Soviet Union

The fall of Soviet communism in 1991 was acclaimed in the Pentagon as a victory for American military strength and in the business world as a long-awaited proof of the superiority of capitalism. Russians were to enjoy the blessings of democracy and free enterprise.

Disillusionment soon set in. Communism was out of fashion, but the Communist hierarchy declared themselves ex-Communists and continued to dominate the legislative process. Some early talk about the employees of state-run enterprises becoming the new owners faded away as the Communist bureaucrats who had controlled the economy in the old regime arranged to sell the government-owned factories to themselves at bargain prices.

Despite notable progress on the political front where open expression of opposing views was allowed and elections were contested instead of limited to a one-party slate, the economic changes were disappointing. The main winners were the old Communists, now dressed in capitalist clothing, and the business opportunists with few scruples who were able to take advantage of unsettled conditions. Gangsters and drug lords infested the new capitalist economy.

As the government stopped subsidizing factories, some were privatized (often turned over to the same management that ran them under Communism), some were closed, workers experienced unemployment for the first time, and other workers (including military and civilian government personnel) received no pay for months at a time.

“A small band of quasi-financial institutions has been systematically taking control of the country,” reported Paul Tooher, national/foreign editor of the Providence Journal Bulletin, in 1998, returning from a year-long media project in Russia, “gaining control of, and selling off, its natural resources, buying up the media to wage war against its financial challengers and seize control of the main levers of government....

“A privileged few have become fabulously rich and are willing to do whatever it takes to retain and increase their wealth....National resources [such as] a nickel mine in the Urals, a third of the nation’s oil refining capacity [and] an interest in the nation’s telephone system have been sold off under questionable auction procedures to a select group of Russian financial interests.”158

All this made some Russians nostalgic for the days of Communism—even under Stalin’s repressive regime. In my view, the problem was that Russia, after the collapse of Communism, embraced the extremes of commercialism and capitalism without the protections of government regulation that have set limits on greed in the United States and other Western democracies.


19. SAVING AMERICA FROM GOVERNMENT HEALTH CARE
As the privatization movement rolled on, there was no need to privatize health insurance because, for most Americans, it was already in the hands of private insurers. Even the government’s Medicare program was contracted out regionally for administration by insurance companies. Bucking the tide of privatization, but strongly supported by public opinion according to the polls, the Clinton administration undertook in 1993 to provide universal health care in the United States like the other large industrialized nations.

When health care reform perished in Congress in 1994, there was applause from many commentators and a huge sigh of relief from the insurance and drug companies. Political opponents, ever since, have bragged about rescuing America from a disastrous health care plan. Do you believe America is better off without a national health plan? My own feelings were affected by what I had seen in Great Britain during World War II and years later.

One of the first things that struck me while stationed in England as an American soldier waiting for D-Day was the poor condition of the teeth of so many people. Some of them said they just couldn’t afford dental work. Returning on a visit after the war, when Britain had set up a national health system, I was impressed by the bright smiles I saw everywhere. Of course, other aspects of neglected health had been helped too.

When President Harry Truman proposed a national health plan for the U.S., the American Medical Association fought it, assessing physicians to build a large war chest to fight what they called by the scare label “British socialized medicine.” I was shocked to see AMA literature in my doctor’s office that contained propaganda I knew to be false, because people in England told me they were pleased with their health care, contrary to scare stories from the AMA. Doctors in the British national health service still made house calls.


The popularity of the system endured so that in the 1997 British general election all three major parties agreed they wanted to keep and strengthen their national health system. It should be noted that “the free practice of medicine” still exists in Britain; that is, people who can afford it are allowed to go to doctors privately, but medical care is available to all “free at the point of delivery.”
Failure to appease opponents

President Clinton asked his wife, Hillary, to lead the effort for health care reform, which caused some to admire her and others to vilify both of them. The plan that eventually emerged was not along the lines of the single-payer systems adopted by Canada and European countries but one that attempted to remove objections in advance by letting insurance companies and employers continue to participate, while allowing the states local variation. In the end, the efforts to appease these groups were unavailing and resulted in a plan that was vulnerable to attack for being too complicated.

The President invited bipartisan cooperation, welcoming any bill that would meet the essential requirements, but even his fellow Democrats could not, or would not, agree on either his plan or any of their own, and Congress took no action. Republican promises to come up with their own bill in the next session were never redeemed.

The political muscle of doctors and the health care industry is revealed in a March 1998 Associated Press report on its joint project with the Center for Responsive Politics, the “first complete computerized study of lobbying disclosure reports.” Topping the list of spenders for the first half of 1997 was the American Medical Association at $8,560,000, more than $1.5 million ahead of the Chamber of Commerce of the U.S. A single pharmaceutical company, Pfizer, was sixth highest at $4,600,000, and the American Hospital Association, at $3,390,000, was also in the top twenty. The AMA had more than two dozen staff lobbyists. These figures, of course, do not include campaign contributions, and they cover a period after the battle against the Clinton health plan had already been won.159


Ironically, although massive industry propaganda and political contributions killed health care reform, the threatened evils have come anyway. Americans had been told by “Harry and Louise” in the TV ads, and by other fronts for the insurance industry, that:

  • We’d lose our cherished right to select our own doctor.

  • The proposal would create an expensive bureaucracy.

  • Decisions about our health care would be made by others than doctors, and people couldn’t get the care they needed.

  • Seniors would lose some of their Medicare benefits.

  • The plan would place a burden on employers.

  • There was no crisis because the rise in health care costs had slowed down.

Even by 1996 the following assessment could be made:

  • We were rapidly losing our medical choices. Independent medical practices were being bought up by medical corporations that talk about customers rather than patients. Family doctors were being replaced by impersonal “clinics.” And there was a growth of health maintenance organizations (HMOs), where choice of doctors is narrowed.

  • The expensive bureaucracy had arrived—not government but private. The number of health administrators in hospitals multiplied nearly 700% from 129,000 in 1968 to 1,000,000. They grew from 18% to 27% of the health care work force, while doctors and nurses declined from 51% to 43%, according to a study in the American Journal of Public Health. The administrative costs of insurance companies were eating up 20% of all our health care spending.

  • The private insurance company bureaucracy has tightened its grip on medical decisions.

  • Seniors continued to have their Medicare benefits reduced as they were forced to pay more out of pocket for coinsurance and deductibles.

  • Employers who provide medical coverage saw premiums rise rapidly, and many have been switching to HMOs which restrict choice of doctors. The trend also continues for employers to fill more jobs on a “part-time” no-benefits basis.

  • Health care costs resumed their rapid rise as soon as the reform proposals had been killed, and continued to outpace the consumer price index. A sharp rise in drug prices is evident to all who buy medication.

  • Any family’s health insurance was still in jeopardy whenever corporate downsizing forces a job change.

Subsequent half-measures to let job-changers retain (and pay unlimited premiums for) insurance, and a 1998 proposal by President Clinton to let people get into Medicare early by paying $300 to $400 per month, run into the difficulty that the unemployed seldom can find the money to pay such premiums.

As rising costs imperiled Medicare funding, the Congressmen who had helped kill health care reform proposed to make seniors pay more in deductibles, co-insurance, etc., as well as raise the starting age to 67, thus adding many more people to the rolls of the uninsured. If universal health care had been enacted, Medicare could have been gradually absorbed.

Of course, Congressmen and their families have health insurance and can get free VIP treatment at Walter Reed and Bethesda military hospitals. People on welfare and people in jail get free care, and hospital emergency rooms are swamped with routine cases whose expensive care is added to the bills of paying patients. Rising medical costs threaten budget crises for the federal and state governments, not to mention family budgets.
World champion of health care?

A favorite argument of opponents of reform was that the United States already had the best health care in the world, a claim that seemed somewhat spurious when David Rockefeller and Henry Kissinger used it to persuade President Carter that the Shah of Iran must be admitted to the U.S. for medical treatment (it didn’t save his life, but resulted in the staff of the American embassy in Iran being held hostage for the balance of Carter’s term of office). Although American medicine may be at the cutting edge for those who can afford expensive new treatments, innovations have also come from other nations, even the Soviet Communists who pioneered the techniques that permit the reattachment of severed limbs.

If it were true that Americans in general receive the best medical treatment in the world, our average life expectancy should be greater than that of Japan and other nations. Economist Lester Thurow noted: “America is well down in the charts when it comes to every measure of health—life expectancy, morbidity, infant mortality.”52,53 The World Health Organization (WHO) reported February 14, 1997, that life expectancies for men were 72 years in the United States but over 75 in Greece, Switzerland, Sweden, Israel, Australia, and Japan. Women’s life expectancies were 79 years in the U.S. but 82 in Australia, Canada, France, Japan, Spain, and Switzerland.160

Anthropology Professor Barry Bogin concluded from a 25-year special study that we may be able to “use the average [height] of any group of people as a barometer of the health of their society.” He noted that the average height of American men grew from 5’6” in 1850 (then the tallest in the world) to 5’8”, while Dutch men zoomed from 5’4” (shortest in Europe) to 5’10” (now the tallest in the world). His explanation:

“The Dutch decided to provide public health benefits to all the public, including the poor. In the United States, meanwhile, improved health is enjoyed most by those who can afford it. The poor often lack adequate housing, sanitation, and health care. The difference in our two societies can be seen at birth: in 1990 only 4% of Dutch babies were born at low birth weight, compared with 7% in the United States....”161


Can the U.S. afford universal health care?

Another fallacy advanced by opponents was that America could not afford universal health care, even though all the other advanced nations have it. Opponents cited the cost of covering those who can’t afford insurance, but taxpayers are already paying for the poor, criminals in jail, and politicians at all levels. They are also paying large amounts that are excluded or deducted from existing insurance payments.

Many of the “costs” of the proposed plan to cover presently uninsured people are not new costs but are already being paid by the public via Medicaid and hospital “overhead” for non-paying patients. The use of expensive emergency room facilities by the poor for ordinary illnesses, for example, is one of the most wasteful aspects of the present system.

Universal coverage of health care would permit Medicaid, which pays for medical needs of the poor (not to be confused with Medicare financed by Social Security funds), to be abolished. People helped off welfare by replacing Medicaid with health coverage at work could be paying taxes and health insurance premiums.

For all these reasons, it is possible that true reform might result in no extra cost when all factors are considered. The highest estimate I saw of additional cost was $100 billion, and this included hypothetical indirect costs to others than the federal government. Congress, which couldn’t agree on health care, did agree in quiet bipartisan cooperation to spend hundreds of billions of taxpayers’ money for S&L bailouts.162

Neither the Clinton administration nor the lobbyists against the Clinton plan gave any serious consideration to the simpler, less costly alternative: the single-payer system used in virtually all the civilized countries that have had national health systems for many years, which would have imperiled the profits of insurance companies. It was favored by some members of Congress and the American College of Surgeons, which said it would reduce bureaucracy more than any other health-reform proposal as well as preserve choice for patients and physicians.163 The 1,500 insurance companies whose administrative costs eat up 20% of all our health care spending would, of course, label a single-payer system “socialism.”

The public can take little comfort in the poetic justice inflicted on doctors by the insurance companies and HMOs after they helped kill universal health insurance. The doctors first, through the AMA, killed Truman’s national health plan. They fought hard but unsuccessfully against Medicare, then learned to use it to their advantage. Finally, they teamed up with the insurance companies and pharmaceutical industry to kill the 1994 proposals.

I can remember when doctors made house calls, family doctors could read an X-ray, hospitals never turned away a patient for financial reasons, and people entered the medical profession because they wanted to heal people. Of course, there are many doctors today whose primary motivation is service, and others who are at least partly motivated to relieve suffering, but the prospect of making big money enters into the choice of occupation too much today.

In the 1970s, when I worked in financial communications, I remember well that medical companies became hot issues in the stock market, such as chains of proprietary (profit-making) hospitals, nursing homes, medical labs, and, of course, drug companies. At the same time, doctors were becoming so much more prosperous that they came to head the lists of prospects for anyone selling luxury homes, yachts, and investments in commercial real estate and other tax shelters. Is it any wonder medical costs have risen faster than the consumer price index?

Private insurance and Medicare have benefited doctors (and the hospitals they control) more than the public. Instead of the old situation where much charity work was done by doctors and hospitals, they now collect from Medicare and/or private insurance and often bill middle-class patients extra, while being paid by Medicaid for treating the poor.

The total public and private spending on health care, having grown twice as fast as the CPI from the mid-1980s to the mid-1990s, reached 12% of GNP without universal coverage, exceeding the share in other advanced nations that have single-payer national health systems. This suggests reform could be afforded better than inaction. The prospects for reform, however, are not good.



20. WHY UNEMPLOYMENT EXISTS
Is there something wrong with the economic measure that is most important to many people—the rate of unemployment? It would be reasonable for the public to think that the official rate counts all the jobless, but that doesn’t happen to be true. As the public learned about “downsizing” in the 1980s and 1990s with many thousands of employees being laid off by each company affected, it became hard to understand why the official unemployment figures didn’t show huge increases. The answer lies in the definition the government uses.

The narrowness of official figures is acknowledged by the U.S. Bureau of Labor Statistics (BLS) in its booklet, How the Government Measures Unemployment: “Unemployment statistics are intended to provide counts of unused, available [labor] resources. They are not measures of the number of persons who are suffering economic hardship.”

The BLS gets its statistics from a random survey of 60,000 households. Anyone who says he or she is working, or has worked at all—even one day—during the month, is counted as employed. Someone who works part time but wants to work full time is counted as employed. To be counted as unemployed one must have reported looking for work during the past month. Otherwise, that person is not counted as unemployed but is considered out of the labor force. An economics textbook by Stephen L. Slavin in 1991 estimated that only about half of all unemployed Americans were collecting unemployment insurance benefits.164

Economist Lester Thurow of MIT explained it in an article published in the March-April 1996 issue of The American Prospect. He estimated there were 5 million to 6 million jobless people not meeting the tests of the official definition for unemployment and 4.5 million part-time workers who would like full-time work. Adding these to the 7.5 million to 8 million officially unemployed workers, he counted 17 million to 18.5 million Americans looking for more work, or a real unemployment rate of almost 14%.

Thurow also counted 18 million contingent workers accounting for another 14% of the workforce: 8.1 million in temporary jobs, 2 million working “on call,” and 8.3 million “self-employed” with few clients but too much pride to admit being unemployed, most of them looking for more work and better jobs. In addition, he cited 5.8 million males 25 to 60 years of age (another 4% of the workforce) in the census statistics but not counted as either employed or unemployed, some being among the homeless. “In the aggregate,” he wrote, “about one-third of the American workforce is potentially looking for more work than they now have.”165
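Thurow’s percentages can be reconstructed from the counts he gives. The sketch below (Python) does that arithmetic; the labor-force denominator of roughly 128 million is an assumption chosen so the quoted percentages come out approximately right, not a figure taken from the article.

```python
# Reconstruction of Thurow's arithmetic from the counts quoted above.
# The labor-force denominator (~128 million) is an assumption; the
# component counts are the ones given in the text (midpoints of ranges).

labor_force = 128e6

officially_unemployed = 7.75e6   # midpoint of "7.5 million to 8 million"
missed_by_definition = 5.5e6     # midpoint of "5 million to 6 million"
involuntary_part_time = 4.5e6    # part-timers who want full-time work

looking = officially_unemployed + missed_by_definition + involuntary_part_time
contingent = 18e6                # temps, on-call, nominally self-employed
uncounted_men = 5.8e6            # men 25-60 counted neither way

print(f"Looking for more work: {looking/1e6:.1f}M "
      f"({looking/labor_force:.1%} of the labor force)")
print(f"Contingent workers:    {contingent/1e6:.1f}M "
      f"({contingent/labor_force:.1%})")
print(f"Uncounted men 25-60:   {uncounted_men/1e6:.1f}M "
      f"({uncounted_men/labor_force:.1%})")

total = looking + contingent + uncounted_men
print(f"Total: {total/1e6:.1f}M, or about {total/labor_force:.0%} of the "
      f"workforce potentially looking for more work")
```

The three shares sum to roughly one-third of the workforce, matching Thurow’s aggregate figure.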

The Organization for Economic Cooperation and Development (OECD) has determined a “coverage rate” of the unemployed in the U.S. and Europe by comparing, in each country, the number of unemployed people who receive benefits to the total number unemployed. The 1994 OECD Jobs Study found the coverage rate to be 98% in France, 89% in Sweden, and 93% in Germany, while the U.S., at 34%, was in the neighborhood of Greece (30%) and Portugal (36%).

Another difference among countries has been pointed out by Harvard economist Richard Freeman. Many people in America are in jail instead of being unemployed in the labor force. With imprisonment in the U.S. running roughly ten times the European rate, the number of U.S. men incarcerated in 1993 was almost 2% of the total number of men in the labor market, and another 2% of the nation’s full-time employment was made up of police, judges, prison guards, and related jobs for handling them at a cost of around $100 billion annually.166


The natural rate of unemployment or NAIRU

Often polls have shown that jobs are the main concern of the public. For example, the AP poll in mid-December 1995 found the public considered jobs and the economy (26%) to be the most important issue, followed by education (18%) and health care (16%). People are rightly suspicious of official unemployment figures and puzzled by statements by some economists that 6% is “normal.”

Anyone who thinks about it will realize that zero unemployment would be impractical. Most workers leaving one job, voluntarily or otherwise, will not immediately step into another one, unless they have lined it up ahead of time. The more specialized their skills and the more particular their requirements about location, work schedule, etc., the longer may be the time required to find employment opportunities that are a good match.

This searching time, and other frictions in the labor market, result in some percentage of unemployment that is irreducible without extreme measures that would result in inflation. This is sometimes called the “natural rate of unemployment,” but there is no general agreement on what the percentage is. Later it became more fashionable to use the acronym “NAIRU,” which stands for “nonaccelerating inflation rate of unemployment.”

George P. Brockway commented in a 1985 book: “People today argue over whether full employment is reached with 6% or more unemployed. Seldom is the figure any longer set as low as 4% (which is what economists used to have in mind).”167

Economist Robert Eisner described this hypothetical rate as “pernicious.” He said its devotees “may think that in our perfect market economy whatever is must be optimal and natural....But I will maintain that involuntary unemployment due to a lack of aggregate demand or purchasing power is a fundamental fact of our economy.”

Unemployment fell in February 1998 to the 24-year low of 4.6%, after the FRB passed up several opportunities to raise interest rates and inflation remained low. Before 1998 the last extended period of low unemployment was from 1965 through 1969, when it ranged from 3.5% to 4.5%. The unemployment rate rose to 8.5% in 1975 after the Arab oil embargo and remained above 5% for the next twenty years, reaching peaks of 9.7% in 1982 and 7.4% in 1992, and dropping to 5.4% in 1996. Within those annual average rates, the month of December 1982 had the highest unemployment since 1940 at 10.8%.


The issue became controversial within the Clinton administration over the question of welfare reform, intended to prevent people from making welfare a permanent way of life. Labor Secretary Robert Reich recalled the dilemma in the White House over what to do with welfare recipients who still can’t secure a real job after doing everything asked of them. The “bleeding-heart old liberals” would keep them on the welfare rolls, he wrote, while the “tough-love New Democrats” argued for a strict cut-off point.

Noting that most of the President’s economic advisors would accept eight million unemployed “in order to soothe the bond market and prevent even a tiny increase in inflation” while his “tough-love” welfare advisors assumed jobs would be available for all welfare recipients, Reich declared: “If at least eight million people have to be unemployed and actively seeking work in order to keep inflation at bay, the additional four million on welfare simply won’t get jobs.”168


Unemployment and laziness

A president of the American Economic Association, Franco Modigliani, declared in his presidential address that the natural-rate-of-unemployment hypothesis implied the sharp drop in employment of depressions and recessions was due to “epidemics of contagious laziness.”169 This remark parodied the attitudes of those who treat a considerable amount of unemployment as normal and who cling to the idea that market equilibrium assures jobs to those who really want them.

The more secure one’s job, the more likely one is to blame poverty on idleness. Such an attitude was characteristic of Victorian times and carried over into the 1930s. Republicans with jobs were bitter in their sneers at FDR, “That Man in the White House,” and the men he put to work in the WPA. In editorials the WPA workers were depicted as leaf-rakers, and in cartoons as leaning on their shovels. Being poor was treated as a sin.

“Oddly enough,” Robert C. Lieberman of Columbia University has observed, “all of this moral weakness vanished a decade later when the postwar boom produced an era of full employment. The indolent poor of the 1930s became the blue-collar middle class of the 1940s and 1950s. Evidently, they were all-too-willing to work hard for decent wages. What was missing in the 1930s, it turned out, were not virtues but jobs.”170

A classic example of misunderstanding the problem is Marie Antoinette’s exclamation, “Let them eat cake!” when she was told the poor of Paris had no bread. The modern day counterpart is heard from many self-described conservatives who proclaim, “Let them work!” as a solution for mothers on welfare and people on Social Security. It has little relation to the real world, as most job applicants have learned from experience.

Jobs are supposed to appear miraculously according to Say’s Law, a pillar of classical economics attributed to French economist Jean-Baptiste Say (1767-1832), sometimes stated as “supply creates its own demand.” That is, the income generated from any level of production would finance demand equal to the supply resulting from the production. Therefore, a “universal glut”—that is, a depression—would be impossible. However plausible the theoretical logic of Say’s Law, the worldwide depression of the 1930s proved it wrong in practice.

Some of the most sensible explanatory writings during that Great Depression were by Stuart Chase of the Twentieth Century Fund. In his book, The Economy of Abundance (1934), he wrote that his title referred to “a condition never obtaining anywhere until within the last few years” which he felt occurred about 1900 and defined as “an economic condition where an abundance of material goods can be produced for the entire population of a given community.”171

Chase asked, rhetorically, “Why cannot markets expand, and so keep capitalism afloat indefinitely?” His answer was that capitalism supplies goods “only if enough money is forthcoming...to cover all costs of production including interest, plus a margin of profit....The ten million unemployed in this country today [January, 1934] would gladly take a volume of goods which would make factory wheels hum. The factory wheels are silent because the unemployed have no money.”

Chase went on to observe that production could keep on rolling if somehow people could be provided with cash, but that is inflation (“more feared—see almost any editorial in 1933—than loss of markets”) if people are equipped with money outside the rules of the game. The gist of his ten “rules of the game” is that private bankers control the supply of money, manufacturing it by issuing business loans and crediting checking accounts.

“Private bankers cry to high heaven,” Chase observed, “when the government proposes to create some money of its own against, let us say, public works. Why is this more reprehensible than creating money against a shoddily built apartment house which may never be rented?” In the rules of the game, the bulk of “unearned income” is not spent but reinvested, which naturally requires finding something profitable to invest in. To produce consumer goods, investment must first be made in capital goods. If the capital goods sector has developed its plants and processes to a point where no further profitable opportunities are offered, savings will not flow into it. “Capitalism officially ends when the flywheel—the production of capital goods—ceases permanently to turn over at its accustomed compound interest rate.”172


The Keynesian revolution

Despite Stuart Chase and some others, the idea that government could do anything about unemployment (and business cycles in general) did not catch on until the publication of a landmark book, The General Theory of Employment, Interest, and Money, by British economist John Maynard Keynes in 1936. Before that the prevailing belief was that the cycles of boom and bust were inevitable, and that anything government might do would be harmful rather than helpful to the necessary adjustment of the economy.

Conventional wisdom held that business cycles must run their course, but Keynesian policies inspired governments around the world to work for full employment. The first sentence in Keynes’ final chapter stated: “The outstanding faults of the economic society in which we live are its failure to provide for full employment and its arbitrary and inequitable distribution of wealth and incomes.”173

Keynes founded economic principles that have been credited with ensuring that the Great Depression of the 1930s was the last of its kind. Keynes’ approach called first for stimulating a slow economy by government outlays and tax reductions that would cause a deficit, and second for offsetting that deficit by reduced outlays and/or higher taxes during a boom to pay off debt and restrain inflation. The application of these methods is called “fiscal policy.” Keynes’ answer to monetarists, who prefer “monetary policy” and claim the economy can be stimulated by reducing interest rates, was that their method was like trying to push a rope.

In the U.S. some steps were taken by government to combat the depression even before publication of the Keynes masterpiece in 1936. Unlike President Herbert Hoover, who said “prosperity is just around the corner” and waited for the economy to heal itself under classical theory, President Franklin D. Roosevelt took bold actions.

Although the Supreme Court thwarted various of his attempts by ruling them unconstitutional, FDR maneuvered to put many of the unemployed back to work. His policies resulted in building thousands of schools, libraries, hospitals, post offices, public housing units, etc., electrification of farms, highway construction, improvement of public lands, and production of artistic, historical, and literary works, all through government programs that enabled millions of men and women to do useful work.

“The extent of these contributions is obscured,” wrote George P. Brockway (1985), “by the statistical quirk whereby those who worked for the WPA, CCC, NYA, and the rest of the so-called alphabet soup are evidently counted as unemployed.” He added that the cost of the program was not substantially greater than the cost of inaction. “The budget deficit in 1932, the last Hoover year, was $2.7 billion, while in 1940, the last pre-war year, it was $3.1 billion.”

Brockway quoted Keynes, “Pyramid-building, earthquakes, even wars may serve to increase wealth, if the education of our statesmen on the principles of classical economics stands in the way of anything better....It would, indeed, be more sensible to build homes and the like.”174 After FDR, other administrations used Keynesian fiscal policies to stimulate production and employment, somewhat enthusiastically under Democratic presidents and congresses and more reluctantly under Republicans.


Carter’s bad luck

The last such effort on a major scale was in the administration of Jimmy Carter, who recalled in his memoirs: “Joblessness was our most pressing economic problem. More than eight million Americans were unemployed and the creation of jobs was a top priority for me....By the end of four years about 10 million new full-time jobs had been created, less than 10% of which involved employment in government....Although the budget costs of these [job training and public service] programs were substantial, the net cost...was quite small because people who worked stopped receiving welfare and unemployment-compensation payments.”175

Despite the job creation cited by Carter, which brought the official unemployment rate down from 7.7% under Ford in 1976 to 5.8% in 1979, the rate was up again to 7.1% in 1980, the year Carter lost his reelection bid and was replaced by Reagan. Carter’s defeat was partly due to the hostage crisis in Iran and the failure of either the military rescue mission or negotiation to secure their release, but that was not all. It was his further bad luck that OPEC, which had caused worldwide inflation and recession by quadrupling the price of oil in 1973, sent another shock in 1979 for a repeat performance that caused rapid inflation (up 11.3% in 1979 and 13.5% in 1980). The monetarists blamed the inflation not on OPEC but on Keynesian economics.

This was the last time Keynesian fiscal policy was used consciously to stimulate the economy, although the deficit spending of the 1980s, largely for the Cold War, had an expansionary effect, while political rhetoric was claiming reduced spending.



21. DOWNSIZING AND DOWNGRADING
Not only do official statistics present too rosy a picture of unemployment; other recent problems have been underplayed as well. Workers’ problems since the mid-1970s have included deterioration in working conditions, especially longer hours, lower wages, and loss of fringe benefits, while jobs have become more insecure. At the same time labor unions have declined in membership and have been forced to make unusual concessions to employers. Attacks on the unions were aided by the Taft-Hartley Act of 1947, which took away some of the power given to labor unions by Roosevelt’s National Labor Relations Act. Waves of strikes that seriously inconvenienced the general public, as well as the penetration of some unions by mob racketeers, had built up sentiment against the unions.

Such strength as the union movement had in 1981, when Reagan took office, was seriously undermined by his treatment of the air traffic controllers and their union when they struck over work pressures they considered a threat to air safety. He fired them all and banned them forever from working for any agency of the federal government. That was an example, of course, that private employers were happy to follow, and it was an action that intimidated labor unions, especially those whose members were government employees.

Economist Lester Thurow declared: “President Reagan’s firing of all of America’s unionized air traffic controllers legitimized a deliberate strategy of de-unionization. In the private sector, consultants were hired who specialized in getting rid of unions, decertification elections were forced, and legal requirements to respect union rights were simply ignored—firms simply paid the small fines that labor law violations brought and continued to violate the law. The strategy succeeded in shrinking union membership to slightly more than 10% of the private workforce (15% of the total workforce).”176

The memoirs of Secretary of Labor Robert Reich contain this note, dated Feb. 13, 1993: “The AFL-CIO is dying a quiet death and has been doing so for years. In the 1950s, about 35% of American workers in the private sector belonged to a union. Now membership is down to about 11%, and every year the percentage drops a bit further....”177

Unions have continued to weaken, seldom getting support from the National Labor Relations Board (NLRB), and facing employers’ threats to close plants unless workers accept their demands for lower pay, longer hours, etc. These threats were not empty. Many companies moved their production to low-wage, non-union plants overseas, sometimes with the help of federal subsidies. Labor unions can no longer be considered a powerful force in national life.


Strange disappearance of the affluent society

Anyone who is old enough can remember that a popular topic of discussion in the 1960s and 1970s was the affluent society, and the national problem of how people could make good use of their newly found leisure time. A few years later that idea took on a bizarre ring, as Americans found they were working longer hours for less pay than many Europeans.

John Kenneth Galbraith gave the title, The Affluent Society, to his 1958 book, since revised and reissued several times. The beginning of the 20th century having been picked by Stuart Chase in 1934 as the time when it became possible to produce enough material goods for all, Galbraith saw the “affluent society” as the next step, where maximizing production was no longer the major goal. “In a society of high and increasing affluence,” he wrote, “individuals...will work fewer hours or days in the week. Or they will work less hard. Or...it may be that fewer people will work all the time.”

He pointed out that the small, idle leisure class of earlier times had been replaced by a much larger “New Class” consisting of workers such as business executives and scientists who would be insulted by the suggestion that their principal motivation in life is pay received. “No aristocrat ever contemplated the loss of feudal privileges with more sorrow than a member of this class would regard his descent into ordinary labor where the reward was only the pay,” he wrote.

He remarked on the growth of the New Class in the U.S. from not more than a few thousand individuals in the 1850s to millions whose primary identification is with their job rather than the income it returns. He noted that the average work week had declined from an estimated nearly 70 hours in 1850 to a normal 40-hour week a century and a quarter later.178

The trend celebrated by Galbraith has been reversed in the final quarter of the 20th century. Instead of working fewer hours or days in the week or less hard, as he predicted, some people are working overtime, some are working several jobs, and some are working temporary and part-time jobs without benefits because they have to, while others who want to work are denied the opportunity, often with the cruel excuse, “You are overqualified.”

Ironically, these harmful results have been accelerated by U.S. policies: tax laws have rewarded corporations for moving operations outside the country, foreign aid has encouraged other countries to compete with U.S. industries, and international agencies such as the World Bank and the International Monetary Fund (IMF) to which the U.S. contributes have offered financial incentives for less developed countries to shift from self-sufficient farming and local industries toward factory production for export.

American corporations have also shortsightedly contributed to U.S. economic decline by selling or revealing advanced technology to foreign competitors. In three years 1986-88 alone U.S. companies sold roughly $5.6 billion of technology to Japanese corporations. During the 1980s U.S. corporations sold more than $225 billion of their technology to foreign competitors.

A 1990 book by Florida and Kenney stated: “A recent survey of leading electronics corporations by Ernst & Young [reported] 72% of companies with revenues in excess of $300 million and...61%... between $100 and $300 million have manufacturing plants located offshore....This reality remains hidden from many Americans, because so many of the final products bear American names....But...most of the jobs and manufacturing wealth is created outside the US....

“We have fallen so far off the cutting edge of semiconductor facility construction that an increasing share of new American semiconductor fabrication plants, including IBM’s new advanced chip facility in East Fishkill, are being built by Japanese companies....” The authors quoted James Koford of LSI Logic: “...We sell our innovations and get a one-shot infusion of capital, not a continuous product stream....”

The use of foreign contractors, in addition to outright sales of technology, also aids foreign competition. Subcontractors learn from blueprints, product specifications, machinery, and even engineers supplied by the American firms for setup and quality control. Florida and Kenney declare that “U.S. high-technology firms...are now being forced to establish manufacturing partnerships with Japanese corporations to gain access to state-of-the-art Japanese production technology and management techniques....”

In 1988 the top three companies obtaining U.S. patents were all Japanese. The only American companies in the top 10 were General Electric and IBM.179 Haynes Johnson (1991) quoted an explanation by Howard I. Podell, a registered patent agent and successful inventor from Tucson, Arizona: “Companies these days are run by business school graduates who are profit-oriented, not product-oriented....U.S. inventors have had to go abroad [as he had done] to patent their products.”180


Lack of unions lures industry

As companies seek to cut costs, they use plant moves or the threat of such moves to thwart labor union efforts for higher wages or better conditions. A prime motive for owners to move most of the New England textile plants to Southern states was avoidance of unions. The same motive is involved in moving many of those same plants outside the U.S. to countries where governments and the police are unfriendly to unions.

High-technology workers now face the same threat. When Atari’s California plant with some 2,500 workers was on the verge of unionization, the company moved its production to Taiwan and Hong Kong. Although a National Labor Relations Board suit eventually brought an out-of-court settlement, the plant and the jobs were gone. This story has occurred over and over again in various industries.181

Breaking the unions helps corporations become more competitive on the global scene. So does escaping from health, safety, and environmental regulation. World business leaders, in a March 1996 survey, rated the U.S. economy as the most competitive among industrialized nations, immediately followed by Singapore and Japan. Other countries in the top ten include Malaysia and Hong Kong.

When business leaders say “most competitive” they mean low wages, few worker benefits, and deregulation. Manufacturing labor costs per hour in 1994 averaged $17.10 in the U.S., $27.31 in Germany, and $21.42 in Japan, according to the Bureau of Labor Statistics. Americans put in more working hours during an average year (1,847) than workers in Britain (1,622 hours), France (1,619), Sweden (1,569) and Germany (1,419). In no country other than the U.S. do CEOs of corporations make 150 times the income of workers on the shop floor.182

22. OLD THEORIES IN NEW CLOTHING


Monetarists and neoclassical economists delightedly (and prematurely) declared the end of Keynesianism when economists of the dominant Keynesian school found it hard to explain the simultaneous combination of inflation and unemployment in the late 1970s. According to a theory developed by British economist A. W. Phillips, the rates of unemployment and of inflation were supposed to move in opposite directions, and the data for the years of the 1960s could be fitted very neatly to a curve (the Phillips curve) showing this inverse relationship. When this broke down in the 1970s, the unhappy combination of unemployment and inflation was dubbed “stagflation.”

Keynesian principles, which had prevailed for about 50 years, had rescued the world from the boom-and-bust business cycles that peaked in the 1929 stock market crash and the 1930s Great Depression. With the arrival of “stagflation” in the 1970s, rival economic theories emerged. The news media reported these ideas as new, seldom acknowledging the fact that they were merely retreads of the disproven theories from the era of Presidents Harding, Coolidge, and Hoover.

The election of President Ronald Reagan in 1980 provided a splendid opportunity for the anti-Keynesian economists. Chief among them was Milton Friedman of the University of Chicago, where a large body of professors and their graduate students exerted an enormous influence on other economists and government officials around the world. This movement was not wholly a spontaneous scholarly effort. Financial support from business interests to universities and research foundations encouraged studies justifying corporate freedom versus government action.

In his 1997 book, Everything for Sale, Kuttner discussed why “press accounts of economic issues repeat, mindlessly, truisms about the superiority of laissez-faire”—the classical Chicago School doctrine. “Much of the responsibility,” he opined, “rests with the economics profession. Even among the most heterodox economists, especially those wishing to retain their standing in the neoclassical church, there remains an almost intuitive reverence for markets and a skepticism of state intervention.”

Discussing extensions of the neoclassical market model to legal and political procedures (in the Law and Economics movement and the Public Choice doctrine), Kuttner wondered why theories “so extreme and tautological” were taken seriously in the academic world. He concluded that perhaps most importantly they “are very reinforcing of the laissez-faire ideal and thus very congenial to society’s most powerful,” noting that “conservative foundations have spent tens of millions of dollars subsidizing research by sympathetic academicians with the premise that their work will help propagate this faith.”183

Such foundations were joined by corporations in underwriting all-expenses-paid institutes and seminars at resort locations where some 600 federal judges have been exposed to the Law and Economics arguments, possibly violating the Judicial Code of Conduct prohibition of judges accepting gifts. They were encouraged to favor common law over enacted laws and administrative regulations. Later, the movement reversed position to support legislative limits on damages awarded in courts. Conservative foundations have also spent millions of dollars, according to Kuttner, endowing chairs to propagate these views, “and law schools, bending the usual rules that appointment decisions are not influenced by benefactors, have gratefully accepted the money.”184

Friedman’s theories seduced Margaret Thatcher in the U.K. and then Ronald Reagan in the U.S. The “new” Reaganomics was really a revival of old pre-Keynesian theories. In the 1980 Republican primary campaign, George Bush denounced Reagan’s proposals as “voodoo economics.” When he was offered the vice-presidential spot on the Reagan ticket, however, his attitude changed, and thereafter he praised what he had first condemned.

The reactionary economic movement disguised the old discredited classical economics as new with such terms as “neoclassical,” “monetarist,” and “supply-side.” An innovation to some degree was the “rational expectations” theory, which was sound in predicting that investors would act on their beliefs about what government would do in fiscal and monetary policies, but went too far in claiming this made it useless for the government to do anything about the economy.


Vindication of Keynes

The flaws that Keynes had found in classical economic theory did not magically disappear, nor did his principles fail to operate in the 1980s, when monetarists declared Keynesian economics obsolete. Volcker and the FRB continued to tighten the screws and brought down the rate of inflation by 1982, but unemployment was at its worst since 1940 and inflation-adjusted GNP actually declined. This was monetarist policy, of course, but Keynesians never doubted that tight money could stall the economy. They just didn’t believe that relaxing it would jump-start the economy.

The recovery that began from the depths of 1982, proudly hailed by Republicans as the longest-lasting recovery in history until then, was fueled by government spending (and purportedly by tax reduction) in accordance with Keynesian fiscal policy. The increases in military spending greatly offset the trumpeted reductions in social spending.

Even the Phillips curve took on new life. When the unemployment and inflation rates for 1985-96 are plotted on a Phillips graph, they follow the shape of the expected inverse curve fairly well. The significance of this pattern might be suspect, given the incompleteness of the official unemployment rate, but if that rate tends to vary during the period measured in line with changes in the total jobless rate, it could serve as a rough proxy for the latter. The stagflation phenomenon, in retrospect, seems limited to the years immediately following the OPEC shocks of 1973 and 1979.
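The comparison described above can be reproduced with a scatter of annual unemployment against inflation and a simple inverse-curve fit. The sketch below (Python with NumPy) illustrates the method only: the twelve data points are made-up placeholders that happen to follow an inverse pattern, not the official BLS and CPI series, which would have to be substituted before drawing any conclusion.

```python
# Sketch of a Phillips-curve check: regress inflation on 1/unemployment
# to fit the inverse curve  pi = a + b/u.  A clearly positive b (with a
# respectable R^2) is the inverse relationship the Phillips curve predicts.
# The data points below are illustrative placeholders, NOT official series.
import numpy as np

unemployment = np.array([4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5])   # percent
inflation = np.array([5.2, 4.4, 3.9, 3.4, 3.1, 2.9, 2.6])      # percent

# Ordinary least squares on the transformed regressor 1/u.
X = np.column_stack([np.ones_like(unemployment), 1.0 / unemployment])
coef, *_ = np.linalg.lstsq(X, inflation, rcond=None)
a, b = coef

pred = X @ coef
ss_res = ((inflation - pred) ** 2).sum()
ss_tot = ((inflation - inflation.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot

print(f"Fitted curve: inflation = {a:.2f} + {b:.2f}/unemployment  (R^2 = {r2:.2f})")
```

With the real 1985-96 annual rates plugged in, the sign and size of b (and the fit’s R²) would show how well the data trace the expected inverse curve.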

Wherever the “new” economic theories from the past were tried, as in Britain, America, and Chile, the rich became richer at the expense of everyone else, unemployment spread, and government debt skyrocketed. Yet the proponents continued to argue that everyone benefits from reducing upper-bracket taxes and deregulating corporations.
The economic miracle in Chile

American business magazines and news services were ecstatic in praising what they called “Chile’s economic miracle” under the guidance of Milton Friedman and his associates from the University of Chicago for about 15 years until 1990, when the military dictatorship was replaced by Patricio Aylwin, the first democratically elected president in 17 years.

There had been a coup in 1973 in which the Chilean military, with the help of ITT and the CIA, overthrew the democratically elected government of President Salvador Allende Gossens, a socialist, assassinating him and thousands of his followers. After nearly two years, Friedman’s disciples succeeded in selling the military regime on their doctrine and received extraordinary powers to impose their will on Chile’s economy.

Under military dictator General Augusto Pinochet, the “Chicago Boys” produced impressive macro-economic statistics at horrendous human and environmental cost, according to a 1995 book by Joseph Collins and John Lear. By 1990, Chile had relatively low inflation, strong economic growth, high levels of foreign investment, and an export boom, all of which had been extravagantly acclaimed in the press. As good as these results sound, however, Chile’s “miracles” are actually recoveries from severe recessions in 1975 and 1982.

The Chicago “reforms” included deregulation of industry, tariff reduction, and clearing the way for foreign investment. They also auctioned off government-owned enterprises at a fraction of their value, ended price control of basic necessities, and privatized many important government services. More accurately described as disaster than miracle were the rise in poverty from 20% to 41% between 1970 and 1990, the rise in inadequate housing from 27% to 40% between 1972 and 1988, and the growth of foreign debt from $5 billion to $21 billion, one of the world’s highest per capita.

Contradicting their own free-market principles, the “Chicago boys” and Pinochet socialized $16 billion in bad debt, most of it borrowed by private industry, and kept the armed forces in government health and pension programs while civilians were left to the mercy of private providers. They also balanced the budget by selling off government assets to multinationals and to relatives and cronies of the Pinochet regime at about half their value. Corporations bought outstanding Chilean loans for 30% of their face value from international banks and were able to apply 100% to the purchase of the state enterprises.
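A rough sketch of the arithmetic implied by that debt-for-asset mechanism follows; the $100 million face value is a made-up round number, while the 30-cents-on-the-dollar purchase price, the 100% crediting, and the roughly half-of-value sale prices come from the account above.

```python
# Rough arithmetic of the debt-for-asset swap described above.
# The $100M face value is an assumed round number; the 30% purchase
# price, full-face crediting, and ~half-of-value asset sales come from
# the Collins and Lear account cited in the text.

face_value = 100.0                      # $ millions of Chilean debt (assumed)
cash_paid = 0.30 * face_value           # bought from foreign banks at 30% of face
credit_received = 1.00 * face_value     # applied at 100% of face toward purchases
asset_value_obtained = credit_received * 2.0   # assets sold at about half value

print(f"Cash outlay:             ${cash_paid:.0f}M")
print(f"Purchasing credit:       ${credit_received:.0f}M")
print(f"Assets acquired, worth:  ${asset_value_obtained:.0f}M "
      f"(about {asset_value_obtained / cash_paid:.1f}x the cash paid)")
```

Under those stated terms, each dollar of cash bought several dollars’ worth of state assets, which is why the auctions attracted well-connected buyers.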

The telephone and utility monopolies were sold free of any regulation, and electricity and telephone rates outstripped inflation by 45% and 64% respectively between 1981 and 1985. Pinochet sold off government saw mill operations and permitted export of low-value raw logs and wood chips. Private conglomerates were allowed to devastate extensive reforestation projects of Monterey pine that the pre-Pinochet Chilean government had been growing for 16 to 20 years.

Even in the best years of the new policies, unemployment was 18%. The Labor Code of 1979 strengthened rights of employers against workers. In the 1982 recession some employers declared bankruptcy, laid off senior workers, and rehired them at entry-level wages, while many employers stopped contributing to pension and health programs after they were privatized.

The 1980s increased the share of national income of the top 10% of Chileans from 37% to 47%, and reduced that of the middle class from 23% to 18%. Collins and Lear declared: “The Chicago Boys’ policies were a declaration of total class war that only appear to be a miracle to the ruling elite or to the ignorant.”185

