Reproducibly Superior Performance

In response to perceived problems with prior definitions of expertise based on peer nominations and domain-related experience, Ericsson has come to understand being an expert in terms of reproducibly superior performance: experts are those who exhibit reproducibly superior performance in their area of expertise (1991, 2006, 2007, 2008). For example, he tells us that “chess masters will almost always win chess games against recreational chess players in chess tournaments, medical specialists are far more likely to diagnose a disease correctly than advanced medical students, and professional musicians can perform pieces of music in a manner that is unattainable for less skilled musicians” (2006, p. 3). Experts, on this understanding of the term, are, in their domain of expertise, a cut above the rest of us, and consistently so.

Yet what is superior performance? Ericsson is not looking for a God’s eye point of view, but rather for a relative one: as he puts it in one place, an expert’s performance is “at least two standard deviations above the mean level in the population” (Ericsson and Charness, 1994: p. 731). On this definition, if your skill is two standard deviations above the mean (that is, better than approximately 97.725% of the population at a task), you are an expert at it. One might quibble that any criterion that draws a sharp line is ultimately unlikely to be satisfactory. A difference in skill between the 97.72nd and 97.73rd percentile should not make or break an expert. So the line needs to be fuzzy: it is not that one turns into an expert at the moment one’s abilities reach two standard deviations above the mean, but rather that expertise occurs when one’s abilities are around two standard deviations above the mean. Assuming that performance is measurable with quantitative data, this might seem to be a simple fix. But I think it is not a complete one.
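To make the arithmetic behind that cutoff concrete, here is a minimal sketch in Python, using only the standard library; the particular z-values compared are mine, chosen to illustrate the fuzziness point, and are not drawn from Ericsson’s texts:

    from statistics import NormalDist

    # For a normally distributed skill measure, the share of the population
    # performing below a point z standard deviations above the mean is the
    # standard normal CDF evaluated at z.
    std_normal = NormalDist(mu=0.0, sigma=1.0)

    print(f"share below +2.00 SD: {std_normal.cdf(2.00):.5%}")  # ~97.72499%
    print(f"share below +2.01 SD: {std_normal.cdf(2.01):.5%}")  # ~97.77844%

    # The two cutoffs differ by roughly a twentieth of a percentage point,
    # which is why a sharp line at exactly two standard deviations seems
    # arbitrary: no real difference in skill separates its two sides.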

In defining expertise relative to the ability of others, we need to specify the population of comparison. Clearly, we do not want to end up saying that Olympic marathon runners of the distant past were not experts because many of today’s serious amateur runners are comparatively faster. So it seems that the comparison class should be contemporaries. But does this mean that the comparison class should be the entire living population? It might not be very difficult to be in roughly the top percentile in an activity in which few perform. For example, since the vast majority of the world’s population does not ice skate at all, simply having tried to ice skate for a few hours might place an individual in the 99th percentile. And the student of poultry sexing, after one good lesson, easily rises into the top percentile in poultry sexing ability, simply because having any knowledge about this at all is so rare. Perhaps one way around this would be to raise the bar for ice skating and other uncommon skills, and lower it for more widely practiced skills, such as running. However, we then need a prior criterion of expertise that tells us how high the bar should be for each activity.

Perhaps, instead of making expertise relative to the entire population, the comparison class should be only those who have engaged in the activity. So if we’re interested in identifying the expert ice skaters, we look for the top percentile among ice skaters. This would lead to better results for skills in which ability is normally distributed, but perhaps ability in some skills does not lie along a bell curve. For example, perhaps some extremely creative individual invents a highly original and highly complicated game, NewGame, and teaches it to a few hundred people in a weekend seminar. The inventor of NewGame is amazingly good at it and can consistently beat her students, all of whom have roughly the same level of ability at playing, as determined by their scores in a fifty-game tournament at the end of the weekend, in which many games end in ties and a few players win a couple more games than the rest. Here we have the inventor of the game, who knows how to play it extremely well, and a few students who, in comparison to the inventor, are terrible at it, all counting as expert players, given a definition of expert skill as skill at or above the 97.7th percentile of the population of people with that skill. Yet since the ability difference between the teacher and the students in the ‘expert’ pool is astronomical, while the ability differences among all the students are minor, this would seem an unwelcome conclusion.39
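To see how a percentile cutoff misfires on such a lopsided distribution, here is a minimal simulation sketch in Python; the ratings, population size, and noise level are hypothetical numbers of my own, not data about any real game:

    import random

    random.seed(0)

    # Hypothetical NewGame skill ratings: one inventor far above a tight
    # cluster of 300 students whose abilities differ only by a little noise.
    ratings = [1000.0] + [100.0 + random.gauss(0.0, 2.0) for _ in range(300)]

    # Apply the 97.7th-percentile cutoff to this population of players.
    cutoff = sorted(ratings)[int(0.977 * len(ratings))]
    experts = [r for r in ratings if r >= cutoff]

    print(f"cutoff rating: {cutoff:.1f}")
    print(f"number of 'experts': {len(experts)}")

On any run, the handful of students who clear the cutoff are separated from their classmates by a couple of points of noise, while the gap between them and the inventor is enormous, which is exactly the unwelcome result just described.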

A related problem arises in situations where everyone who does a certain activity is, we would naturally want to say, very good at it. For example, perhaps because period instruments are so expensive, it might be that nearly everyone who makes the investment is determined to work hard at it and thus develops a high degree of skill. If our comparison class is the general population, we do end up with all of these great period instrumentalists residing in the uppermost percentile of ability. However, if our comparison class is those who play these instruments, this standard would necessarily count the vast majority of such musicians as non-experts. And, just as students in an honors class hate being graded on a curve, such musicians would likely balk at curving the concept of expertise in this way as well.

Remember, I have a specific goal in mind in addressing the question of what it means to be an expert, and that is not to find the “true meaning” of the term “expert,” if there be such a thing, but rather to arrive at a stipulative definition of the term, one which will capture the group of individuals for whom I claim that the just-do-it principle does not apply. Accordingly, the claim that expert ability is ability (around) two standard deviations above the mean does not serve this purpose, since it might lead us to count someone with quite ordinary everyday abilities as an expert. For example, for certain activities, people differ little in ability. Perhaps shirt buttoning is like this (at least among cultures where button-down shirts are a common mode of attire). Perhaps if tested on some sort of time and accuracy trial (how many buttons can you fully fasten in some number of minutes?) we would find a normal distribution with some people emerging in the top percentile, yet, for the purposes of arguing against the just-do-it principle, as a principle about expert action, I would not want to count such individuals as necessarily experts.40 Furthermore, on this criterion, we may be led to count people who have extreme natural abilities as experts. For example, there are savants who, with apparently no practice or training, can tell you what day of the week any calendar date lands on. Sometimes such abilities might even manifest themselves after a head injury; a child with “acquired savant syndrome” may have developed the ability to perform calendar calculations only after being knocked unconscious with a baseball. Such individuals are certainly in the top percentile of the general population in terms of their skill in calendar calculations. And in some contexts it makes sense to call such individuals experts. However, since I do not want to say that the abilities of the savant are necessarily effortful, I shall exclude them from the category. (If you think that the notion of being an expert should rightly cover savants, you will simply need to understand savants as being exceptions to my claim that the just-do-it principle is a myth.)

There is also the question of how to determine whether someone falls into the top percentile of ability. In certain realms, such as the world of tournament chess, there are clear standards as to what counts as superior performance: if one player is able to consistently beat another, the one counts as superior to the other. But occasionally researchers on chess expertise take a chess player’s ability to choose the better move (based on a computer analysis) from a difficult position as indicative of greater skill. And in typing, speed and accuracy have traditionally been identified as the criteria by which we can judge expertise. However, other areas are much more subjective. There is no direct test of skill that can distinguish the expert abstract expressionist painter from the novice. And there is no set of questions one can ask of individuals with philosophy training such that, if answered correctly, some are revealed as experts. It is often more how they say it than what they say that matters, yet the how is less objective than the what (a fact that drives college assessment committees crazy).

Ericsson mentions that, beyond Olympic standards for athletic events, “more recently, there have emerged competitions in music, dance and chess that have objective performance measures to identify the winners” (2008: p. 989). And I could add that there is even such a thing as a philosophy slam competition;41 my ten-year-old son entered one. Such competitions are not only recent inventions: during the Roman emperor Nero’s time (AD 67), competitive poetry reading was on the list of Olympic events. Yet it is an open question how well such competitions do at identifying experts.42 The violinist Arnold Steinhardt, who himself won the Leventritt International Violin Competition, sees music competitions like this:
You were a nag in a horse race with a number on your back. There’s nothing wrong with a real horse race—the first one across the finish line wins—but how does one judge a musical entrant in a competition? By how fast he plays, how few mistakes he makes? How does one grade beauty, after all? … The winners often triumph because of what they didn’t do: they didn’t play out of tune, they didn’t play wrong notes, they didn’t scratch, they didn’t do anything offensive. Contestants who commit these sins are quickly voted out, but they may be the ones to turn a beautiful phrase and play with great abandon, the ones who reach out to the listener’s heart and mind. (1998: p. 37)
Even in chess there is some question about the accuracy of the rating system, and in typing there is still a question of how much weight should be placed on accuracy versus time.

Finally, one might ask whether laboratory performance is indicative of the level of skill an individual has attained. A researcher might calibrate subjects’ levels of golf expertise by looking at their ability to sink putts in the lab. But laboratory settings are not real-life tournament settings, and it seems possible that there could be someone who does spectacularly in the lab yet performs abysmally during actual games, perhaps because of some sort of performance anxiety. Is this person still an expert?43 I cannot presume to answer this question, but it is interesting to note that in some areas we seem to allow a bit more leeway than in others. If all that matters is performance in the concert hall or during a tennis tournament, for example, then an individual with extreme performance anxiety, even if she performs superbly in rehearsal and in the lab, would not count as an expert. But this might not be the right result. Certainly, in some areas of expertise, performance in the field is all that matters: for example, if a surgeon frequently gets so nervous in the operating room that her hands shake and she cannot perform her job well, she is not, I think we would want to say, an expert surgeon. With musicians, however, we allow more leeway, and might allow someone such as Barbra Streisand, who has severe performance anxiety, to count as an expert based on her recordings.44




