CLASSIC CASES IN MEDICAL ETHICS
by Gregory E. Pence, pp. 10-25
PART TWO: ETHICAL THEORIES AND MEDICAL ETHICS:
A HISTORICAL OVERVIEW
The Greeks and the Virtues
The teaching of the major ancient Greek philosophers – Socrates, Plato, and Aristotle – as well as the general culture of fifth-century (B.C.E.) Athens – advocated virtue ethics, the ethical theory that emphasizes acquiring good traits of character. Virtue theory applied to medicine emphasizes creating physicians with such traits.
Our English word ethics derives from the Greek ethos, meaning “disposition” or “character.” Ethos was an inseparable part of the Greek phrase ethike aretai (literally “skills of character”). The Greek word arete means at once “excellence,” “good,” and “skill.” Our modern “ethics” builds on, but differs from, ethike aretai because two millennia of later theories of ethics built other meanings onto the original concept.
From at least as early as the time of Homer (sometime from the eighth to the sixth century B.C.E.), pre-Socratic Greek ethics emphasized ethike arete in performing a role well. That is to say, the scope of ethical inquiry was limited to the roles one fulfilled. If one wanted to know about ethics, one asked about the traits of a good soldier, physician, mother, or ruler. For example, one would ask, “What is the goal of being a soldier?” Answer: “To defend one’s country.” Then one asks, “What excellences are needed to defend one’s country?” Answer: “Physical strength, courage, skill in using weapons, organization in fighting in groups, temperance, and cunning.”
Such ethics were teleological. In other words, they assumed that things developed towards a natural goal. In Greek medicine, if we want to know what makes a good physician, we need to know the purpose of medicine. That purpose is to heal the sick. What virtues are needed to do so? Answer: compassion, knowledge of healing, and skill in human relations.
Role-defined ethics remain powerful today and are the basis on which more universal principles build. For example, medical students first try to live by virtues of that role.
Socrates, Plato, and Aristotle, in a combined move of ethical genius, attempted to transcend role-defined ethics and to argue that there were distinctive ethike aretai of a good person. What are they? In their view, they were the cardinal (primary) virtues of courage, temperance, wisdom, and justice (in dealing with people). These are the distinctive excellences necessary to function best in human society.
The implication of this view for medical ethics is that moral inquiry must not only ask, “What virtues should a good physician possess?” but also, “What virtues should a good person possess who happens to be a physician?” The narrow question is, “What should a good physician do?” The broader question is, “What should a good person do?”
Not all physicians in ancient times agreed about the role of a good physician, and here looms one of the great divides in medical ethics. Hippocrates and his brethren adopted not only a patient-centered ethics but also a sanctity-of-all-life worldview, holding that physicians should neither perform abortions nor assist in euthanasia of any kind. But most ancient Greek physicians took a naturalistic approach that was a precursor to the scientific worldview. In other words, they advocated forming conclusions based on what one could see and feel. These physicians did not practice medicine based on assumptions about gods and goddesses or about an afterlife, so they were more oriented to helping patients in the here-and-now. Accordingly, they often helped terminally ill patients to die. Most such Greek physicians adopted a quality-of-life view, believing that it was futile to maintain a life of pain and suffering that had little chance of amelioration. It is unclear whether their aid was role-defined, or whether it stemmed from compassion. In either case, the majority of naturalistic physicians used their factual knowledge
and technical skills for very different evaluative ends than their Hippocratic counterparts.
Christian Ethics, Christian Virtues
By the fourth century C.E., Christianity had added its theological virtues of faith,
hope, and charity to the list of human virtues. The paradigmatic virtue of compassion (charity) that many today associate with a good physician comes in part from Christianity’s emphasis on helping others. The etymological root of “compassion” means “to suffer with,” as Jesus of Nazareth is held by Christians to have suffered with, and for, humans on the cross.
Here we have two differences of emphasis that later came to be fused. Where
naturalistic physicians emphasized technical competence in curing disease, religious physicians emphasized compassion in being with patients. When the limits of technical competence had been reached – as they were often reached very soon during these centuries – compassion became the supreme virtue. Both traditions contributed to today’s definition of good physicians: Every patient wants a physician who is both knowledgeable and merciful.
Virtue ethics in medicine also underlies the apprentice system of medical education, in which young medical students gradually assume more responsibility by assisting older physicians in treating patients. The attending physician teaches the resident, who teaches the intern, who teaches the third-year student. What is taught, theoretically, is not only how to perform a procedure but also how to be compassionate, wise, courageous, and patient-centered.
What would virtue ethics say about a particular issue in medical ethics? The general answer is that with every new case, the physician-in-training should imitate the reasoning and empathy of good physicians. Thus, confronted with a 14-year-old patient who refuses to eat after being partially paralyzed in an auto accident, most experienced physicians are likely to say, “Let’s work with him until he’s of legal age, then he can decide for himself. By that time, he’ll probably find a reason to live.”
It should be emphasized that Socratic virtues also celebrated an elitist, anti-democratic ethics that scorned the ordinary person and his worth. The Greeks believed themselves superior to all the peoples they had conquered. Aristotle’s student, Alexander the Great, attempted to instill Greek values, culture, and language in everyone, and he had no tolerance for the cultures of other, “inferior” peoples. The Greek ethics that Alexander inherited was perfectionistic, aristocratic, and meritocratic. In this sense, the quality-of-life attitude of ancient Greek physicians was elitist and perfectionistic, whereas the sanctity-of-life ethic of Hippocratic physicians was much less so.
In contrast to Greek elitism, the three great religions of the West emphasize duties to the poor and sick: The rabbinic ethics of Bar Hillel stress acts that help one’s fellow man; Jesus says that as you treat the poor, so you treat Him; and Mohammed made the zakat, the tax on property for the poor, one of the pillars of Islam. So for a Jew, Christian, or Moslem, a good physician is first a Jew, Christian, or Moslem, and second a physician.
As such, a good Christian physician must care for the poor as part of his duties as a physician. To put this point in more religious terms, the physician’s license, knowledge, and wisdom are not a proprietary right to make money but an instrument of a higher calling from God. In the movie Chariots of Fire, the Presbyterian Olympic runner says, “I run not for me but to glorify the Lord,” and for this reason refuses to compete on the Sabbath. Similarly, to use a medical degree only to make money is to abase a degree given in trust for a higher cause.
One area in which the contrast between religious and nonreligious ethics in medicine becomes salient is in thinking about genetics. Greek ethics advocate eugenics (“good birth”). Plato advocated mystery-shrouded mating festivals where those men judged to be “most perfect” would impregnate similar females. For Plato, breeding would be arranged to perfect humanity, not by choice or for love. Just as the Greeks improved the stock of their animals by selective breeding, so Plato wanted to improve humans. Just as the young Greek gentleman should try to perfect his body and life as a work of art, so human society should try to perfect itself by creating better children.
In contrast, the three western religious traditions have preached for centuries that the goal of human life has been either to create a God-based society on earth or to save the most souls for the afterlife. Accordingly, western religions have resisted attempts to tamper with the genes of humans, asserting that humans were created in the image of God and denying that humans should try to perfect themselves through genetics. (In modern times, however, some liberal believers have argued that eliminating genetic disease is not sinful.)
Applying virtue ethics to medical ethics has several limitations. One is that it has little to say about how to make particular ethical decisions, aside from the injunction to imitate good physicians. Another limitation is that the more role-defined ethics becomes, the less it meets universal standards. Finally, both religious and nonreligious theories of the virtues tend to emphasize the status quo over fundamental social change. One outcome is that physicians adopting a traditional role tend to be paternalistic, treating patients as children and overruling their decisions.
Natural Law Theory
It has become a truism that when the Romans conquered Greece (in the second century B.C.E.), they themselves were conquered by many aspects of Greek culture. The Stoic philosophers of Roman times elevated one aspect of the Greek worldview to a higher level. Rules for human beings, the Stoics argued, were so embedded in the texture of the world that they were “law” for humans. These came to be known as “natural laws.” They were apprehended by unaided reason, in other words, without Scripture or divine revelation.
Behind the notion of a natural law, of course, is that of a hidden law-giver. In the thirteenth century, Thomas Aquinas synthesized many aspects of Aristotelianism with what had become orthodox teachings of the Christian church. Aquinas made explicit the connection between God and natural laws of the world: A rational god made the world work rationally and gave humans reason to discover his rational, natural laws. Studying ethical theory was a rational process of discovery about the world that revealed rules about how humans should act. Correct descriptions of the world would yield correct prescriptions about how to act. To act rationally was to act morally, which in turn was to act in accordance with natural law.
One thing that these rules commanded was to go against one’s natural feelings. St. Augustine taught in the fourth century C.E. that human nature was contaminated by sin and, as such, human feelings were mired in lust, sloth, avarice, and the other deadly sins. In stunning contrast to modern times, Aquinas held that thinking about ethics was emphatically not about examining one’s feelings. Instead, it was a matter of following rules laid down by God and his agents, the clergy and theologians of the Church.
An example of natural law theory in medical ethics concerns homosexuality. Aquinas believed that God made two sexes for procreation and that it was natural and rational for a man and woman to mate to have children. On the other hand, for two people of the same gender to have sex (or form a lifelong union) was contrary to natural law, and hence, immoral.
One problem with natural law theory is seen in the above example in that what is considered “against natural law” may vary over the centuries. Many rational people today do not consider homosexuality to be unnatural, especially because it has been practiced since the beginning of human history and because some great cultures, such as the ancient Greeks, celebrated it as ideal.
As another example of the problems of natural law theory, consider sex in marriage. Augustine held that the only permissible justification for sexual relations between a man and his wife was to produce children. Modern Catholic teaching is very different, and regards loving sexual relations between man and wife as natural and good, even when there is no desire to have children. Indeed, the Catholic Church today holds in vitro fertilization to be immoral precisely because no act of loving sex is involved between man and woman.
Natural law theory bequeathed to medical ethics the famous doctrine of double effect. This doctrine held that if an action had two effects, one good and the other evil, the action was morally permitted: (1) if the action was good in itself or not evil, (2) if the good followed as immediately from the cause as did the evil effect, (3) if only the good effect was intended, and (4) if there was as important a reason for performing the action as for allowing the evil effect. For example, exceptions could be made to the rule banning abortions in cases of an ectopic pregnancy (an embryo growing in a fallopian tube) and a cancerous uterus (where uterus and fetus had to be removed together). In both cases, this doctrine would allow abortions if the direct intention was to save the life of the mother. Similarly, the doctrine of double effect would not allow physicians to assist in executions, since it would not allow a direct intention to assist in the taking of a life, although it might allow a physician to be present to ease the suffering of a prisoner in the event of a botched execution.
Also derived from the natural law tradition is the principle of totality, which covers what kinds of changes may be made to the human body: Changes are permitted only to ensure the proper functioning of the total body. The underlying idea is that one’s body is not something that one owns, but that one holds in trust for God: “The body is the temple of the Lord.” So a gangrenous leg may be amputated or a cancerous breast removed, because the fundamental health of the body is at risk from these threats. According to this principle, we are given our bodies as they are for a reason and we should not change our bodies for frivolous reasons. Thus
the principle of totality rules out all forms of sterilization to prevent pregnancy – vasectomy, tubal ligation, and hysterectomy – because producing pregnancy is a natural function of the bodies of men and women. The principle also forbids cosmetic surgery solely to change one’s appearance, such as breast reduction, breast augmentation, rhinoplasty, and liposuction.
This principle is more deeply embedded in our thinking than we may at first think. When a news photograph in 1996 showed a mouse whose genetic system had been altered to grow a human ear on its back, many people felt disgust at seeing this mouse-with-human-ear. This disgust arose from a sense that the creation of this being had violated the bodily integrity of both humans and mice.
Social Contract Theories
Social contract theory, or contractarianism, is essentially secular, independent of
belief in God. Contractarians assume that people are fundamentally self-interested and that moral rules have evolved for humans to get along with one another. It is rational for humans to agree to such rules because otherwise, everyone will pick up the sword and be worse off.
Social contract theory does not separate ethics from politics. Indeed, hypothetical political bargaining is viewed as the foundation of the kind of behavior that is allowed as ethical. (Hypothetical because contractarians do not believe people ever actually came together to make the basic social contract.) Plato described one early kind of hypothetical social contract in The Republic, but the philosopher who really gave this theory weight was the Englishman Thomas Hobbes (1588-1679).
Hobbes believed that the most detestable condition for humans was the state of nature, a premoral agglomeration of self-interested individuals for whom life was (he said, famously) “solitary, poor, nasty, brutish, and short.” By the use of their reason, people realize that each is better off in a society of moral and legal rules backed by the force of opinion and law. They therefore form a social contract to create “society” to better themselves.
Contractarianism can support both minimal and maximal government. To oversimplify, let us contrast two extreme champions of contractarianism: Libertarians and Rawlsians.
Libertarians favor government for defense and for very limited public works, perhaps not even including national parks or a public interstate road system (we could have private, toll roads). They disfavor government programs such as Medicare, Medicaid, disability insurance, food stamps, and welfare. Libertarians oppose forced taxation by the government, especially when it redistributes property and income from rich to poor. They champion the property rights of the status quo, but tend to be silent about how those enjoying the status quo acquired their property. Libertarian philosophers such as Harvard’s Robert Nozick see forced taxation as equivalent to forced labor, that is, to slavery.
Accordingly, Libertarians oppose mandatory F.I.C.A. taxes on all workers’ pay for Medicare and for the Hospital Insurance Trust Fund. Even though federal programs such as Medicare have made American physicians rich, libertarian physicians would rather have no government control over their business. Presumably, in a libertarian society, physicians would be reimbursed only in cash.
Critics say that in such a system, fewer hospitals would be built, elderly patients would frequently forgo procedures for lack of money (as never happens under Medicare), and physicians would earn far less money. It is also true that in such a system physicians would be controlled by no federal regulations.
Rawlsians are named for John Rawls, a Harvard colleague of Nozick. Rawls believes that the social contract should have moral restraints imposed on it. The most important restraint is what Rawls called the “veil of ignorance,” meaning that in the hypothetical social contract, no one would know his or her age, gender, race, health, number of children, income, wealth, or other arbitrary personal information. Rawls’ theory is contractarian in that it assumes that people are self-interested and are forced to form a social contract to choose the basic institutions of their society; on the other hand, it is Kantian (as we shall see in the next section) in that it imposes impartiality on the choosers.
Rawls argues, controversially, that the only rational way to choose under the veil of ignorance is as if one might be the least well-off person in society (because a person doesn’t know anything personal under the veil, he doesn’t know what place in society he occupies). This justifies the choice of his famous difference principle: Choosers should opt for institutions creating equality unless a difference favors the least well-off group. Everyone should be trained in medicine unless training only a few is better for the least well-off. The choice of the difference principle, as the archprinciple of this theory of justice, can be seen as the imposition of the golden rule on the choice of the structure of society.
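Rawls’s rule of choice under the veil of ignorance is often formalized in decision theory as a maximin rule. A minimal sketch, where $u_i(I)$ (notation assumed here, not Rawls’s own) stands for the welfare of person $i$ under a basic structure of institutions $I$:

```latex
% Rawlsian maximin: among feasible institutional arrangements I,
% choose the one that maximizes the welfare of the worst-off person.
I^{*} = \arg\max_{I} \; \min_{i} \, u_i(I)
```

On this reading, the difference principle permits an inequality only if the arrangement containing it raises the value of $\min_i u_i(I)$ above what any more equal arrangement would achieve.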
Rawlsian justice entails that every citizen should have equal access to medical care unless unequal access favored the poor (an unlikely prospect!). Rawlsian justice attempts to reduce the natural inequalities of fate; hence, it is especially important that children and those with genetic disease have good medical care. Let us consider these two classes combined: children with genetic disease. Their care takes up a large share of resources in children’s hospitals, and costs for their care may be deliberately excluded in for-profit insurance plans. Nevertheless, for Rawls, such children deserve good medical care as a matter of justice.
Indeed, as genetics reveals new insights every year, we stand now under a real, not hypothetical, genetic veil of ignorance about our future illnesses and those of our children and grandchildren. The coming decade will identify much more precisely who is susceptible to genetic disease and who is not. In the future, it may be much more difficult for those with familial lines of genetic disease to purchase private medical insurance. Some of the people now attacking national medical plans may find themselves at risk.
Libertarians favor private medical insurance plans in which the healthy do not subsidize the unhealthy. Rawlsians see “healthy” and “unhealthy” as arbitrary distinctions, due more to genetics and fate than individual merit. Libertarians would allow for-profit companies to practice experience rating, whereby citizens with preexisting illness may be excluded (and genetic disease is increasingly being defined in this way). Rawlsians favor community rating, whereby risk and premium rates are spread over all members of a large community, such as a state or nation (for example, a federal single-payer system).
Kantian Ethics
John Rawls is a modern Kantian using a social contract methodology. Immanuel Kant (1724-1804) published during the Enlightenment (that is, about the time of the American Revolution), and believed in the power of humans to use reason to solve their problems.
Kant was raised by conservative Protestant parents and was strongly oriented to conservative religious ethics until he studied science at his university, where-upon he became skeptical of his former beliefs. He continued to believe that many of the basic values and attitudes of Christian ethics were correct, but then he had a problem of how to justify those values. His solution was to base those values on abstract reason rather than on metaphysical beliefs about God or an afterlife.
The distinctive elements of Kantian ethics are these:
a. Ethics Is Not a Matter of Consequences but of Duty. Why an act is done is more important than its good or bad results. Specifically, an act must be done from the right motive, and the right motive is the desire to do one’s moral duty. In its emphasis on motives and not consequences, Kant’s ethics are Christian.
Kant’s ethics are an ethics of duty (also called deontological, from deontos, duty)
because they emphasize not having the right desires or feelings, but acting correctly according to obligation. Only acts done from duty, and not, say, from compassion, are praiseworthy. For Kant the correct motive for treating a patient well is not because a physician feels like doing so, but because it is the right thing to do. When we act morally, Kant says, reason tells feelings what to do. Contrary to popular culture, we should not consult our feelings about what to do but reflect upon what is our duty.
Kant says the only thing valuable in the world is a good will, the trait of character indicating a willingness to choose the right act simply because it’s right. But how do we know what is right? What is our duty? Kant gives two formulations.
b. A Right Act Has a Maxim that Is Universalizable. An act is right if one can will its “maxim” or rule to be acted on by all others. “Lie to get out of keeping a promise” cannot be so willed because if everyone acted this way, promise-keeping would mean nothing.
c. A Right Act Always Treats Other Humans as Ends-in-themselves, Never as a Mere Means. To treat another person as an “end in himself” is to treat him as having absolute, infinite moral worth, not relative worth. His welfare cannot be sacrificed to the good of others or to my own desires. So patients cannot unwittingly be used as guinea pigs in dangerous medical experiments to advance knowledge.
Consider the case of a pulmonary resident who discovers that he missed a small lesion three months previously on the x-ray of a 48-year-old patient. The patient now has stage four, untreatable cancer. The patient says, “I guess that cancer just grew out of nowhere because it wasn’t there three months ago.” Should the resident tell the patient the truth? A consequentialist might argue that he should not, because it could do no good for the patient.
But for Kant, the answer is clear: The patient must be told the truth. Why? The only universalizable rule is “Always tell patients the truth.” Such a rule is the basis of trust and of treating patients as “ends in themselves.” If the physician were a patient, he would want to know the truth. The resident may feel that he shouldn’t reveal the truth, but his reason will tell him what his duty is.
d. People Are Only Free When They Act Rationally. Kant would agree that much of how we act is governed by our emotions and other, nonrational parts of upbringing. But controversially, Kant denies that we are truly acting morally when we do the right thing because we are accustomed to it, because it feels right, or because our society favors the act. The only time a person can act morally is when she exercises her rational, free will to understand why certain rules are right and then chooses to bind her actions to those rules. Kant calls the capacity to act this way autonomy. For him, it gives humans higher worth than animals.
It follows for Kant that very few people act morally. Kant accepts that fact. It was also true that in early Christianity, very few people were thought to be capable of salvation. The purity of Kant’s view entails a moral elitism for the few who can successfully follow Kantian ethics.
e. Problems in Kantian Ethics. Kantian ethics has several problems. First, Kant is regarded as the supreme rationalist in ethics because he claimed that anyone who disagreed with his view was guilty of a logical contradiction. But the utilitarian lifeboat commander, when he will not let everyone board to save those in the boat, does not contradict himself (he can will the maxim, “All those in control of lifeboats should maximize survivors, even if it means denying access to some in the water”).
Kant is generally regarded as failing in his Enlightenment project. His critic and contemporary, the Scottish skeptic David Hume, came close to arguing that ethics is really emotivism. Charles Darwin and the father of psychoanalysis, Sigmund Freud, later agreed with Hume that reason is the tip of the moral iceberg because much of ethical life is emotional and not changeable by reason. Emotivism and Kant’s rationalism are the two extreme views on the issue of the place of reason in ethics.
Other problems of Kantian ethics remain. For one thing, it fails to tell us how to resolve conflicts between competing, universalizable maxims. Its best answer is to try to universalize whatever ad-hoc solution to the conflict seems appropriate. But then our sense of what is appropriate, not our ability to universalize without contradiction, is the test of an act’s morality. For another thing, it seems ridiculous to imply that consequences never count morally. Many critics believe that Kantians indirectly appeal to consequences in thinking about what to universalize. Finally, the ideal of treating each person as if he had infinite value is not always practical: It does not tell us how to deliberate about trade-offs when, by definition, some humans will die in triage situations and cannot be treated as “ends in themselves.”
f. Kantians Reply. Nevertheless, Kant provides useful insights to medical ethics. He would favor using a lottery to distribute a lifesaving but expensive new drug that most patients will be unable to obtain. He would argue that the captain of the lifeboat should draw straws to decide who gets to stay in the boat. His emphasis on people as “ends in themselves” explains the outrage that people have when learning of scandals involving medical experimentation, such as research done by Nazi physicians. Finally, perhaps Kant’s most important legacy to modern medical ethics is his emphasis on the “autonomous will” of the free, rational individual as the seat of moral value. Autonomy explains why informed consent is necessary to legitimate participation in an experiment. When combined with the emphasis on personal liberty in our democracies, Kant’s emphasis on autonomy sets the stage for modern medical ethics.
Utilitarianism
Utilitarianism originated in late-eighteenth- and early-nineteenth-century England as a secular replacement for Christian ethics. Jeremy Bentham (1748-1832) and John Stuart Mill (1806-1873) were its two chief theorists. The essential idea of utilitarianism is that right acts should produce the greatest amount of good for the greatest number of people, which is called “utility.”
The Puritans in England and America wanted to organize society so that everyone had to obey their rules, but utilitarians saw morality as a human construct that should minimize the harms humans do to each other and maximize group welfare. For Christians, Jews, or Muslims, morality is inconceivable without God’s existence, but not so for utilitarians.
Likened to the counterculture movement of students in the 1960s and 1970s, utilitarianism was a reform movement intended to humanize outmoded institutions. Developed by social reformers Jeremy Bentham and James Mill (the father of John Stuart Mill), it focused on large, practical changes that could benefit the vast majority of people who were not aristocrats.
Utilitarianism did not urge people to turn the other cheek and hope for justice in another life, nor did it exalt those virtues so cherished by England’s aristocracy: stylish dress and manners, personal honor, literacy, scientific and artistic accomplishment, and patriotism. The foundation for reform came in 1832 in eliminating pocket boroughs, each under the control of one great landlord, and in extending the vote to the 20 percent of the adult male population who had some property (property-less males and women still had no vote). Utilitarian reformers also campaigned against slavery in the British empire and the intolerable factory conditions made famous by Charles Dickens in novels such as Hard Times. (Their Factory Act forbade employment of children under age nine in cotton mills and declared that 13-year-olds could work no more than 12 hours a day.) Similar bills were passed to make mining and industrial machinery less lethal to workers.
They also attacked the penal system, repealed the Corn Laws, ended debtors’ prison, opposed capital punishment for petty thefts, and advocated the vote for women. They urged public hospitals for the poor, proper sewage disposal, and the penny post so that everyone could send and get mail, and created a central board of health, so that municipalities could create facilities for clean water, waste disposal, and sewers.
Utilitarianism’s essence can be summed up in four basic tenets:
1. Consequentialism: Consequences count, not motives or intentions.
2. The maximization principle: The number of people affected by consequences
matters; the more people, the more important the effect.
3. A theory of value (or of “good”): Good consequences are defined by pleasure
(hedonic utilitarianism) or what people prefer (preference utilitarianism).
4. A scope-of-morality premise: Each being’s happiness is to count as one and no more than one.
For utilitarians, right acts produce the (2) greatest amount of (3) good (1) consequences for the (2) greatest number of (4) beings.
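The four tenets are sometimes compressed into a single maximization formula. A sketch, where $u_i(a)$ (assumed notation) is the utility that act $a$ produces for being $i$, and the sum ranges over all $N$ beings affected, human or animal:

```latex
% Classical utilitarian act evaluation:
% the right act maximizes total utility across all affected beings.
a^{*} = \arg\max_{a} \sum_{i=1}^{N} u_i(a)
```

Tenet (1) appears in evaluating acts by their outcomes $u_i(a)$, tenets (2) and (4) in summing over every affected being, and tenet (3) in how $u_i$ itself is defined (pleasure or preference satisfaction).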
Each of these tenets can be controversial. Bentham emphasized that the meaning of the fourth tenet was whether a being could suffer, not whether it was human or animal. As such, utilitarianism includes animals in its calculations of the greatest number.
To the modern utilitarian Peter Singer, author of the famous Animal Liberation, utilitarianism was ahead of its time in not differentiating between the sufferings of humans and those of animals. Utilitarianism also seems to imply that every being’s happiness on the planet matters, not just the beings of one’s own society; as Singer says, morality doesn’t stop at the borders of one’s country.
Virtue ethicists and Kantians regard a person’s motives as a sign of his character. John Stuart Mill says that the drowning man doesn’t care why the lifeguard is swimming out to sea to rescue him, just that the lifeguard is coming. Utilitarians think motives only count insofar as they tend to produce the greatest good.
In medicine, it makes a difference whether a physician listens because she really cares about patients or because she’s found that having satisfied patients is an effective way to maximize income. A utilitarian might argue that if the physician’s techniques are good enough, whether she really cares about her patients matters very little; in either case, the behavior produces good consequences to real people.
Utilitarianism is also a theory of value (that is, a theory about what is a harmful consequence and what is a good one). The simplest theory of value is hedonic utilitarianism, which equates a good consequence with pleasure, and harm with pain. Negative utilitarianism focuses on relieving the greatest misery for the greatest number, as in famine relief. Positive utilitarianism focuses on benefiting humanity. Utilitarian theorists debate whether some things are intrinsically valuable, such as pride and honor, or whether they are good only because they create good feelings in people over the long run. Another view is called preference utilitarianism, and its adherents believe that utility is maximized by furthering the actual preferences that people have. Finally, pluralistic utilitarians hold that many different things or states are valuable.
The maximization tenet can get utilitarians into trouble. Wouldn’t utilitarianism be willing to violate the traditional sanctity-of-life principle to save many people? Here, utilitarians bite the bullet. They think that the German officers who tried to kill Hitler in 1944 at the Wolf’s Lair were justified. They think that on the expedition to the South Pole, commander Robert Scott should have allowed his crew member with the gangrenous leg to die, rather than slowing down the whole party by carrying the injured man, which resulted in the deaths of all. They think that if an FBI sniper saw a terrorist about to detonate a bomb in a skyscraper full of innocent people, the sniper should shoot the terrorist.
These are the easy cases. The hard ones come in population policy. If more happiness is better than less, why shouldn’t we create the maximal number of people on the planet? So long as each new life has more happiness than misery, and so long as everyone else’s life has at least the same, shouldn’t we produce more? This “total view” of utilitarianism leads to what philosopher Derek Parfit calls “the Repugnant Conclusion,” because we think average happiness matters more than sheer numbers. But it is difficult to see why utilitarianism entails maximizing average happiness rather than the total good, so it may be stuck with this counterintuitive implication.
More specifically to medical ethics, wouldn’t utilitarianism permit the sacrifice of an innocent, healthy person to transfer his organs to four patients who needed them to live? Aren’t four people alive better than one? If consequences and number of lives define morality, what’s morally wrong with doing so? Yet it certainly seems morally wrong to chop up an innocent patient this way.
One traditional reply among utilitarians is to distinguish between act and rule utilitarianism. Rule utilitarians believe that normal moral rules, such as “First, do no harm” in medicine, maximize utility over the decades. Act utilitarians advocate judging each act’s utility. Some act utilitarians think rule utilitarianism has a dilemma: If there are exceptions, then you ultimately have act utilitarianism (since you never know in advance whether a particular situation needs to be judged as an exception); if there are no exceptions, then you are close to a Kantian and only a nominal utilitarian. If “First, do no harm” has no exceptions in medical ethics, it may explain why it is wrong to chop up an innocent person to transplant his organs to four others.
In medicine, the two areas where utilitarianism applies most powerfully are public health and triage situations. It is likely that improvements in public health have helped more people live longer (created more “utility”) than all the drugs and surgeries ever invented. The English physician John Snow might have agreed: In 1849 he advocated clean water to prevent cholera epidemics, which were spread by contaminated water. (It took over 40 years and many more cholera epidemics for Snow’s ideas to prevail.) It doesn’t matter why Snow improved the water supply, only that he did and that many millions of people now live decades longer.
Triage involves the apportionment of scarce resources during emergencies when circumstances preordain that not all victims will live. Because consequences count, utilitarianism says a physician should not treat each patient equally but should focus only on those whom he can actually benefit. Rigorous application of this principle gives utilitarianism its famous hard edge: A physician should abandon those who will die even if he helps and, just as ruthlessly, abandon those who will live without his help. He should help only those who waver between life and death and for whom he can make the difference. The goal is to save the maximal number of lives.
This point illustrates an ambiguity in sanctity-of-life ethics. Traditionally, sanctity-of-life ethics such as Kant’s emphasize the absolute value of each individual, implying that the physician should at least comfort those who are beyond his help. But utilitarian triage ethics maximizes the value of life by saving the maximal number of lives.
Principles and Medical Ethics
One modern method of analysis is to analyze a dilemma or case of medical ethics in terms of four powerful principles: autonomy, beneficence, nonmaleficence, and justice. According to advocates of this method, deciding what is the right thing to do in a particular case involves applying and balancing all four principles, which were chosen as a distillation of the ethical theories described above.
What does each of these principles mean? Autonomy refers to the right to make decisions about one’s own life and body without coercion by others. This principle celebrates the value that democracies place on allowing individuals to make their own decisions about whom to marry, whether to have children, how many children to have, what kind of career to pursue, and what kind of life they want to live. Insofar as is possible in a democracy, and to the extent that their decisions do not harm others, individuals should be left alone to make fundamental medical decisions that affect their own bodies and lives.
John Stuart Mill was a political theorist as well as an ethical theorist. In his most famous work of politics, On Liberty (1859), he defends “one very simple principle,” his so-called harm principle: that “the only purpose for which power can rightfully be exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. ...Over himself, over his own body and mind, the individual is sovereign.”
Such political individualism corresponds to personal autonomy in ethics. Since the beginnings of modern medical ethics in the early 1960s, autonomy has meant the patient’s right to make her own decisions about her body, including dying and reproduction.
The ethics of autonomy evolved as a rejection of paternalistic ethics. During the patients’ rights movement in the early 1960s in America, paternalistic physicians were scorned as sexist octogenarians who would impose their rigid traditions on a more enlightened, freethinking, younger generation. Both secular and religious versions of virtue ethics tend to be paternalistic, especially when they emphasize the physician’s greater wisdom and when they teach young physicians to follow the lead of older physicians in ignoring the wishes of patients. These traditional, somewhat rigid, secular and religious roles of good physicians contrast starkly with the dominant value of more universal, modern theories of ethics, including the principle of individual autonomy.
In the first two decades of bioethics (1962-1982), autonomy was considered by many bioethicists to be the supreme value above all others, grounding the right of competent adults to end their lives when they choose and to decline to participate in dangerous experiments. Since then, bioethicists have realized that other values are also important and must be weighed against autonomy in deciding particular cases.
Beneficence, “doing good to others,” is clearly tied to the Judaeo-Christian-Muslim virtue of compassion and helping others. The application of the principle of beneficence comes to the fore in efforts to distinguish therapeutic from nontherapeutic experiments on patients. If a physician means to help diabetic patients, an experiment on diabetic patients (with their consent) is justified by this principle. If the experiment is nontherapeutic, some other justification is required.
Beneficence can be seen both as a principle and a virtue for physicians. Physicians receive special powers, income, and prestige from society. In return they are asked to dedicate their careers to helping others. Medical training requires this trait as demands on a student increase between premedical years and residency. Self-sacrifice is part of medicine. Ideally, physicians should want to help others, but if the internal desire is lacking, they should still help others from a sense of duty. The principle of beneficence spells out this duty.
Beneficence may sometimes come into conflict with autonomy (as, indeed, any of these principles may conflict with each of the others in a particular case). Consider the involuntary psychiatric commitment of schizophrenic, homeless people. Is it better to let such people wander the cold streets of a big city, or to incarcerate and medicate them against their will? Should we let them “die with their rights on” or inject them with sedatives and antipsychotic drugs “for their own good”? Maybe we should do nothing at all and not risk making them worse off. After all,
who are we to say that it is “beneficent” to do so? Maybe homeless schizophrenics want to stay as they are. How beneficence and autonomy are balanced in particular cases is not easy to understand. (Indeed, since John Stuart Mill advocated both utilitarianism and the value of autonomy, critics have wondered whether his views were actually consistent.)
Nonmaleficence, “not harming others,” echoes an ancient maxim of professional medical ethics, “First, do no harm.” Above all, this maxim implies that if a physician is not technically competent to do something, he shouldn’t do it. So medical students should not harm patients by practicing on them (unless the patients consent): Patients are there to be helped, not to help students learn. At the very least, patients should not leave an encounter with a physician worse off than they were before. This crucial principle of medical ethics prohibits corruption, incompetence, and dangerous, nontherapeutic experiments.
The principle of nonmaleficence also accords with Mill’s harm principle and contractarianism: Both of these are minimalist moralities implying that the state and society should not attempt to shape all citizens’ lives for the goals of one worldview. In a fundamental sense, the first obligation we have is to leave one another alone, especially those who do not want our help, advice, or even concern. That means, above all else, not harming others by unsolicited intrusions.
The last principle, justice, has both a social and political interpretation. Socially, it means treating similar kinds of people similarly (this is the so-called “formal element” of the larger principle). A just physician treats each patient the same, regardless of his insurance coverage.
Politically, the principle amounts to distributive justice, and thus in medicine, to the allocation of scarce medical resources. Because there are many theories of justice, this principle is not self-evident. For example, Rawls’s theory of justice demands that medicine serve the worst-off people. But another view equates justice with simple egalitarianism: Medicine is just if it treats each patient equally. Of course, that goal would not be easy to achieve either, and doing so would go a long way towards realizing Rawls’s ideal. At the very least, it would mean a guarantee of equal access to medical care for every citizen, such that insurance coverage would not be a factor (as it is now) in the selection of which patient receives an organ transplant. Finally, justice can be interpreted in a libertarian sense of treating anyone with the ability to pay the same. In this sense, it means not treating people who cannot pay.
It is obvious that interpretation of the principle of justice is difficult, especially when an interpretation of this principle must be used with the three other principles in a particular case. However, in its most minimal sense, justice requires physicians to treat patients impartially, without bias on account of gender, race, sexuality, or wealth. Even in this minimal sense, justice requires a high standard of behavior among physicians.
Feminist Ethics: The Ethics of Care
In the early 1970s a modern version of feminism shook American medicine to its foundations and buttressed its sister movement, the patients’ rights movement. Both movements attempted to take patients’ decisions about their bodies and lives away from physicians – especially male physicians – and give women and patients control.
The landmark book was Our Bodies, Ourselves, by a group of women patients in Boston who had access to one of the grandest – some would say, most self-satisfied – medical centers in the world, Harvard. Because they couldn’t get the information they wanted in down-to-earth, patient-friendly language, they published a “how-to” manual covering everything from breast cancer to abortions. Successive editions sold millions upon millions of copies and gave rise to the areas of publishing now called “alternative medicine” and “self-help”.
During the 1980s, feminist philosophers began to question whether many ways of knowing were the only ways or merely male ways. Contractarianism, Kantianism, and utilitarianism all looked like male theories: too abstract, too intellectual, and largely false to the ordinary experience of many women. What was missing was emphasis on values such as cooperation, nurturing, and bonding.
Harvard education professor Carol Gilligan showed that many women analyzed ethical dilemmas differently from men. Subsequently, feminist theorists articulated theories of ethics whose central notions were not rights or universalization but caring, trust, and relationships. This so-called “ethics of care” may be considered a branch of virtue ethics that promotes the “female” virtues of caring, nurturing, trust, intimate friendship, and love. Even among feminist theorists, this statement is controversial because some theorists believe that such virtues are not
inherent in women by nature but exist only because they are encouraged in most women by traditional, sexist gender roles.
One might view the ethics of care as a corrective to the previous emphasis in ethical theory on abstract, semilegalistic concepts. Alternately, one might consider the ethics of care as reflecting a modern turning inward to the family and to those around one, fighting battles close at hand and letting far-off concerns such as world hunger take care of themselves. Finally, one might view this approach as taking a more modest, minimalist approach to morality – a kind of “within-my-circle-of-relationships” approach – in which moral concerns usually arise among those one knows.
Perhaps the ethics of care is best seen as an antidote to moral views that are cast only in terms of rights, utility, and duty. It is not yet a complete ethical theory, for it does not tell us how to treat people we do not know or care about. This is an important criticism in medical ethics because much of medicine is about treating strangers, at least when patients first meet a physician. It may be retorted that good physicians should care for all their patients, but the meaning of “care” gets too diluted when someone claims to care about everyone they meet. Nor does this theory yet tell us how to resolve conflicts among those we care about, such as when a female physician is torn between checking on a patient and being with her daughter at the birth of her first grandchild. This theory, however, is still very young and, in coming decades, may have more to offer.
Case-Based Reasoning
Many physicians and medical ethicists do not find any of the theories described above very useful in their practice of medicine. To force the complexities of many medical cases into a preconceived, abstract framework is often to be guilty of oversimplification, and when that happens, the truth is rarely discovered.
In the past decade a new approach has been articulated that bases moral reasoning on paradigms or model cases. These paradigmatic cases serve as a basis from which a person can generalize to other, similar cases; for example, both Karen Quinlan and Nancy Cruzan were young women who went into lifelong comas called “persistent vegetative states” after, respectively, a drug overdose in 1975 and an automobile accident in 1983. In both cases, parents decided after many months that their daughter’s biography was over and wanted to end the mere life of the remaining body. Karen Quinlan’s case focused on removal of a respirator; Nancy Cruzan’s on removal of a feeding tube. Both cases resulted in landmark legal decisions in, respectively, 1976 and 1990.
Advocates of case-based reasoning believe that study of these two famous cases can teach us a lot about how ethics in medicine has actually worked over the last two decades. Paradigms are bedrock cases from which we generalize in ever-expanding circles of similarity. By understanding and analyzing arguments on both sides – about killing and letting die, ordinary versus extraordinary treatment, forgoing versus withdrawing treatment, standards of brain death, and models of proxy consent for making decisions about incompetent patients – we can hope to increase our understanding of related issues in medical ethics.