Reputation (or peer nomination) is, in fact, one means by which some researchers identify the experts in a field. And in our everyday thinking, we may do so as well. For example, in deciding which physician to see in the United States, you may look at Castle Connolly Medical Ltd.’s selection of “top doctors”, which is based on peer nominations;35 in finding an American lawyer, you might search the “Super Lawyers” page, which selects lawyers in part based on reputation among peers;36 and in philosophy, though individual philosophers are not ranked, some look to what is called the Philosophical Gourmet Report, which ranks philosophy departments in terms of the reputation of their faculty members.37 But how accurate are reports of an expert’s reputation, such as those based on culling peer nominations, in identifying experts?
Clearly there are some problems with this approach. Ericsson and Lehmann (1996), for example, have argued that peer-nominated expert stock futures traders and psychotherapists sometimes perform no better than so-called novices at these endeavors. And research by Shanteau (1988) suggests that peers might be unduly influenced by others’ “outward signs of extreme self-confidence” (p. 211). Moreover, research by Elstein and colleagues (1978) indicates that diagnostic skills were no better in a group of physicians who were identified by peers as outstanding than in a group of undistinguished physicians.
One question to ask, of course, about studies that aim to show that peer nominations are not accurate indicators of expertise is this: What standards are being used to determine whether peer nominations are an accurate way of identifying experts? Moreover, what is being judged when it is found that so-called experts perform no better than novices in a certain task? Sometimes the standard is performance on some ‘representative task’ that is designed to be executed in a laboratory setting. But how are we to judge whether performance on this task is more indicative of expertise than peer judgments? Perhaps peer nominations are reasonably questioned when there is little reason to think that peers have insight into the skill level of others in their area. In medicine, for example, doctors typically aren’t patients of very many other doctors, and thus there is little reason to think that they would be able to make accurate judgments of their peers’ diagnostic abilities. Elstein and colleagues (1990) suggested as much in a retrospective analysis of their research into diagnostic performance. In other disciplines, this might not be such a problem. In philosophy, for example, we are more or less all each other’s patients, inasmuch as we read the work of quite a number of other philosophers, and thus we have more opportunity to judge our peers’ work than do doctors.
Is there a way to inform peers of the relevant data when they are making nominations, as well as to identify what this data is? Perhaps there is, yet for my purposes, I would still not want to rely on peer nominations as a sole criterion for identifying experts, since such nominations serve to pick out the cream of what is usually an already-defined expert crop. I, on the other hand, am interested in a conception of expertise that allows me to say that, for example, there are many expert philosophers who are not members of the fifty philosophical institutions identified by the Gourmet Report. Though, of course, if just-do-it does not apply to experts in general, it follows that it does not apply to the best of experts either.
Another reason, however, why I am not inclined to identify experts in terms of peer nominations is that reputation lingers; thus, certain individuals may still have a reputation for being experts even though they no longer care to improve. Yet, for the purpose of criticizing the just-do-it principle, the desire to improve is an integral component of expertise. I do not doubt that there may be some who, having reached the top of their profession, decide to rest on their laurels after years of tireless effort. Such individuals may very well just go through the motions without thinking or effort. And because of their years of training, when they do so, they may still perform very well. Whether such individuals should rightly be called experts is not, as I have said, a question I can answer. However, I do aim ultimately to arrive at a conception of ‘expert’ that makes sense of my claim that just-do-it is a myth; of course, if this conception captures a good chunk of what we often mean by “expert,” all the better.