Orthodox cognitive science is associated with (although perhaps not fully exhausted by – see note 1) the views that (a) intelligence has its basis in computational processing and (b) there is a distinctively functional level of description, explanation, or reality apposite to the scientific study of cognition or the philosophical understanding of its nature. Many of Simon’s programmatic statements about mind, cognition, and cognitive science seem to reflect these core elements of computational functionalism.
II.B.1 Physical symbol systems
Simon endorses the Physical Symbol System Hypothesis (PSS) about intelligence (Newell and Simon 1976, 116), which appears to manifest an uncompromising computationalism. According to the PSS, only physical symbol systems are intelligent, and whatever such systems do (when it is sufficiently complex or organized) is an exercise of intelligence. What is a physical symbol system?
A physical symbol system holds a set of entities, called symbols…[It] possesses a number of simple processes that operate upon symbol structures – processes that create, modify, copy, and destroy symbols…Symbols may also designate processes that the symbol system can interpret and execute. Hence the programs that govern the behavior of a symbol system can be stored…in the system’s own memory, and executed when activated. (1996, 22, and see ibid., 19)
In a nutshell, intelligence is the activity of all-purpose, stored-program computers.
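The quoted characterization can be put in concrete form with a toy sketch: symbol structures live in a memory, primitive processes create, modify, copy, and destroy them, and a "program" is itself a symbol structure stored in that memory which the system interprets and executes. (The class and method names here are illustrative assumptions, not Newell and Simon's.)

```python
# A toy physical symbol system: symbol structures live in a memory,
# primitive processes create/modify/copy/destroy them, and a stored
# program is itself a symbol structure the system interprets.
# (Illustrative sketch only; the vocabulary is not Newell and Simon's.)

class SymbolSystem:
    def __init__(self):
        self.memory = {}  # name -> symbol structure (here, lists of tokens)

    # The primitive processes mentioned in the quotation.
    def create(self, name, structure):
        self.memory[name] = list(structure)

    def modify(self, name, index, token):
        self.memory[name][index] = token

    def copy(self, src, dst):
        self.memory[dst] = list(self.memory[src])

    def destroy(self, name):
        del self.memory[name]

    # Symbols may designate processes: a stored program is a symbol
    # structure whose tokens name operations the system then executes.
    def interpret(self, program_name):
        for op, *args in self.memory[program_name]:
            getattr(self, op)(*args)

s = SymbolSystem()
s.create("greeting", ["hello", "world"])
# Store a program in the system's own memory, then execute it.
s.create("prog", [("copy", "greeting", "echo"),
                  ("modify", "echo", 1, "simon")])
s.interpret("prog")
print(s.memory["echo"])  # -> ['hello', 'simon']
```

The point of the sketch is the last feature of the quotation: because programs are stored as symbol structures in the system's own memory, the same machinery is all-purpose in the stored-program sense.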
II.B.2 Computational models of problem solving
There is no doubt that Simon intends such claims to cover human intelligence: as much as Simon’s description of physical symbol systems might put the reader in mind of artificial computer processing, these operations “appear to be shared by the human central nervous system” (ibid., 19).
Moreover, it was clear from early on (Newell, Shaw, and Simon 1958b) that Simon took the computational modeling of human intelligence to explain how humans solve problems. Simon and colleagues were out to demystify human thought; by appealing to information processing models, they meant to “explain how human problem solving takes place: what processes are used, and what mechanisms perform these processes” (ibid., 151). In describing their work on programs that, for example, prove logical theorems, play chess, and compose music, Simon and colleagues referred to their “success already achieved in synthesizing mechanisms that solve difficult problems in the same manner as humans” (1958a, 3).
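The flavor of problem solving at issue can be gestured at with a minimal means-ends sketch in the spirit of Simon and colleagues' programs: repeatedly compare the current state with the goal state and apply an operator relevant to some outstanding difference. (The greedy loop and the toy domain below are my illustrative assumptions, not their code.)

```python
# A minimal means-ends-analysis sketch: reduce the difference between
# the current state and the goal by applying relevant operators.
# States are sets of achieved conditions; each operator adds conditions.
# (Toy domain and operator names are illustrative, not Simon's.)

def means_ends(state, goal, operators):
    """Greedy difference reduction; returns the sequence of operators applied."""
    plan = []
    while state != goal:
        diff = goal - state  # differences still to be removed
        # choose an operator relevant to some outstanding difference
        applicable = [(name, adds) for name, adds in operators.items()
                      if adds & diff]
        if not applicable:
            return None  # no operator reduces the remaining difference
        name, adds = applicable[0]
        state = state | adds
        plan.append(name)
    return plan

ops = {"get-ingredients": {"have-ingredients"},
       "bake": {"cake-baked"}}
plan = means_ends(frozenset(), frozenset({"have-ingredients", "cake-baked"}), ops)
print(plan)  # -> ['get-ingredients', 'bake']
```

Even this toy makes the explanatory ambition visible: the model specifies what processes are used (difference detection, operator selection) and what mechanism performs them (the loop above), which is just the form of explanation the quoted passage demands.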
Functionalism in philosophy of mind is, in a nutshell, the view that cognition is as cognition does, the idea that the nature of mental states and cognitive processes is determined by, and exhausted by, the role they play in causal networks, not a matter of their being constituted by some particular kind of materials. Simon unequivocally endorses a functionalist understanding of computing: “A computer is an organization of elementary functional components in which, to a high approximation, only the function performed by those components is relevant to the behavior of the whole system” (1996, 17–18). This theme continues over the pages that follow, with repeated comparisons to the human case: “For if it is the organization of components, and not physical properties, that largely determines behavior, and if computers are organized somewhat in the image of man, then the computer becomes an obvious device for exploring the consequences of alternative organizational assumptions for human behavior” (ibid., 21).
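Simon's claim that "only the function performed by those components is relevant to the behavior of the whole system" can be illustrated by multiple realizability in miniature: two components built from entirely different "materials" but performing the same function leave the whole system's behavior unchanged. (The example below is hypothetical, not drawn from Simon's text.)

```python
# Multiple realizability in miniature: two differently implemented
# components with the same function yield identical system behavior.
# (Hypothetical illustration, not from Simon's text.)

def adder_lookup(a, b):
    # realized as a lookup table over a small domain
    table = {(x, y): x + y for x in range(10) for y in range(10)}
    return table[(a, b)]

def adder_arithmetic(a, b):
    # realized by the host machine's built-in arithmetic
    return a + b

def system(adder):
    # the whole system's behavior depends only on the adder's function,
    # not on how the adder is built
    return [adder(i, i) for i in range(5)]

print(system(adder_lookup) == system(adder_arithmetic))  # -> True
```

Swapping realizations behind a fixed functional role is, on the functionalist picture, exactly what licenses using computers "organized somewhat in the image of man" to explore human cognition.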
In Simon’s hands, the description of the functional states at issue – at least at the levels of programming relevant to cognitive scientific enquiry – lines up with our common-sense way of describing the domains in question (they are semantically transparent, in the sense of Clark 1989, 2001). This is reflected partly in Simon’s use of subject protocols to inform his computational modeling of human cognition (Simon and Newell 1971, 150, 152; Newell, Shaw, and Simon 1958b, 156), not merely as a stream of verbal data that must be accounted for (by, for example, our best models of speech production), but as reports the contents of which provide a reasonably reliable guide to the processes operative in problem-solving (cf. Newell, Shaw, and Simon 1958a, 40 n1). This is worthy of note because, in the philosophical imagination, functionalism has often been understood as a claim about the kind of mental states one can specify in everyday language and the appearance and operation of which, in oneself, can be tracked by introspection. This has tended to obscure the live possibility that one can construct computational models of at least fragments of human cognition (a) that deal in representations of features or properties that have no everyday expression in natural language and to which we have no conscious access and (b) that may well tell the entire story about the fragment of cognition in question. In other words, Simon’s work with protocols and his account of, for instance, the processes of means-end reasoning and theorem-proving – accounts that seem to involve representations that match naturally with the fruits of introspection – reinforce the image of Simon as a coarse-grained computational functionalist who makes no room for subtle, subconscious body-based processing. (Cf. 
Simon’s discussion of memory for chess positions [1996, 72–73]; Simon generates his description of the relevant cognitive process by auto-report and couches his account of the process in everyday chess-playing terms.) In contrast, many embodiment-oriented models deal in features and processes that are difficult to express in natural language and are better captured mathematically or in a computational formalism that has no natural, everyday expression (cf. Clark’s characterization of connectionist networks as detecting and processing microfeatures). A computational functionalism of the latter, fine-grained sort is computational functionalism nevertheless.