As a philosophical theory of the nature of mental states, functionalism entails multiple realizability (MR): the thesis that the very same mental or cognitive properties or states can appear in, be implemented by, or take the form of significantly different physical structures (different with respect to how the physical sciences would characterize those structures and their properties). Although Simon’s comments often seem oriented toward methods of investigation or kinds of explanation, he appears to be committed to metaphysical functionalism, and to the associated MR thesis, as well.
Newell, Shaw, and Simon express the view in this way:
We do not believe that this functional equivalence between brains and computers implies any structural equivalence at a more minute anatomic level (for example, equivalence of neurons with circuits). Discovering what neural mechanisms realize these information processing functions in the human brain is a task for another level of theory construction. Our theory is a theory of the information processes involved in problem solving and not a theory of neuronal or electronic mechanisms for information processing. (1964, 352 – quoted at Gardner 1987, 148; cf. Newell, Shaw, and Simon 1958a, 51)
And, from a more recent paper by Vera and Simon (a paper about situated cognition, no less!): “And, in any event, their physical nature is irrelevant to their role in behavior. The way in which symbols are represented in the brain is not known; presumably, they are patterns of neuronal arrangements of some kind” (1993, 9).
Moreover, when Simon discusses the functional equivalence of computer and human, he says that “both computer and brain, when engaged in thought, are adaptive systems, seeking to mold themselves to the shape of the task environment” (1996, 83). Taking such remarks at face value, Simon seems to be talking about the very things and properties in the world, as they are, not merely, for example, how they are best described for some practical purpose.
At the same time, we should recognize that Simon is often interested in matters methodological or epistemological, in how one should go about investigating intelligent systems or explaining the output of intelligent systems. He frequently talks about our interests, and about discovery, theory construction, and description. Even the equivalence referred to above could just as well be equivalence vis-à-vis our epistemic interests rather than equivalence in what exists independently of those interests.
Frequently these metaphysical and epistemic messages run parallel to each other (perhaps because Simon has both in mind, and the metaphysical nature of the properties and patterns in question determines the important epistemic and methodological facts). For instance, he says that
many of the phenomena of visualization do not depend in any detailed way [note the use of ‘detailed’] upon underlying neurology but can be explained and predicted on the basis of quite general and abstract features of the organization of memory. (1996, 73–74)
Although one might wonder about the nature of the dependence being discussed, the passage’s emphasis on the predictive and explanatory value of postulated features lends itself naturally to a primarily epistemological reading.
This is not an exercise in hair-splitting. Failing to distinguish the metaphysical from the epistemic or methodological dimensions of computational functionalism can easily obscure the relation between computational functionalism as a theory of mind and computational functionalism as it has sometimes been practiced. The former makes plenty of room for embodied cognitive science, even if some computational functionalists have not, in practice, focused on bodily contributions to cognition.
II.B.4 Adaptive rationality
The idea of adaptive rationality plays a central role in Sciences of the Artificial. After describing what they take to be the small number of parameters constraining human thought – including, for example, that human short-term memory can hold approximately seven chunks – Newell and Simon say:
[T]he system might be almost anything as long as it meets these few structural and parametral specifications. The detail is elusive because the system is adaptive. For a system to be adaptive means that it is capable of grappling with whatever task environment confronts it…its behavior is determined by the demands of that task environment rather than by its own internal characteristics (1971, 149; and see Simon 1996, 11–12)
On this view, we can, for many purposes, treat the internal workings of cognitive systems as black boxes: except for a small number of parameters determined by their material constitution, human cognitive systems – in fact, any intelligent systems – are organized, in one way or another, so as to adapt to their task environments. As intelligent systems, they thus exhibit similar behavior across a wide range of circumstances (where goals are shared).