Natural and Artificial Cognition: On the Proper Place of Reason


2. Artificial intelligence and deduction

John McCarthy coined the term ‘Artificial Intelligence’ (AI) at the Dartmouth conference in 1956. The research field denoted by the term was understood in a manner informed by a vision of human cognition very similar to Dennett’s. The Strong AI Hypothesis is the conjecture that artificial agents can be designed to function in a manner equivalent, in some yet to be adequately defined sense, to human cognition, and the hypothesis has been hotly debated. There has been far less debate about the supposed features of human cognition itself, which was assumed to proceed by deductive inference.

Admittedly, humans also make use of a more informal process, a kind of holistic pattern-matching often called analogy. However, the use of analogy has proved to be difficult to explicate, and difficult to simulate by computer programs. Creditable attempts at explication have been made (Hofstadter et al. 1995; Holyoak and Thagard 1995), but it is fair to say that very little in the way of general agreement has been reached in the AI community (Hofstadter 1995; Forbus et al. 1998).

Deduction is a more attractive target for explication than analogy is. Deductive reasoning is a step-by-step process of moving from sentence to sentence as sanctioned by inference rules. For example, a human agent whose database of beliefs contains both the sentence “John is snoring” and the sentence “If John is snoring then John is asleep” would be justified in adding to the database the belief “John is asleep”, because the rule of inference modus ponens legitimises the detachment of a consequent ψ provided the conditional sentence “If φ then ψ” and the antecedent φ are both available as premisses. Logic was seen for most of the 19th and 20th centuries as the study of rules such as modus ponens, and researchers who focused on deduction could draw on this logical heritage. In one of the high points of the 1956 Dartmouth conference, Simon and Newell took the first step towards lasting fame when they presented a program called Logic Theorist, which “was able to prove theorems in Whitehead and Russell’s Principia Mathematica, a feat of intelligence by anybody’s standards … They alone had managed to do what everyone at Dartmouth had faith was possible but had been unable to accomplish: they had made a machine that could think.” (McCorduck 1979, p. 104)
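To make the step-by-step character of rule application concrete, here is a minimal sketch in Python; the sentence encoding and the function name are illustrative assumptions, not anything the text prescribes:

```python
# A minimal sketch of modus ponens over a database of beliefs.
# Encoding (an assumption for illustration): atomic sentences are strings,
# and the conditional "If A then C" is represented as the tuple ("if", A, C).

def modus_ponens(beliefs):
    """Return every consequent detachable from the current beliefs."""
    derived = set()
    for sentence in beliefs:
        if isinstance(sentence, tuple) and sentence[0] == "if":
            _, antecedent, consequent = sentence
            if antecedent in beliefs:       # the antecedent is also believed,
                derived.add(consequent)     # so the consequent may be detached
    return derived

beliefs = {"John is snoring",
           ("if", "John is snoring", "John is asleep")}
print(modus_ponens(beliefs))  # {'John is asleep'}
```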

The favourable judgment accorded the Logic Theorist reflects a belief that while analogy is all very well as a source of ideas, creativity, and imagination, rational decision-making in humans relies on deductive inference. Since deductive inference involves the syntactic manipulation of symbolic representations according to rules that operate on the shapes of symbols rather than on what the symbols may stand for, programs which simulate human reasoning by rule-based symbol-manipulation seem obviously on the right track. Let us refer to the assumption that rational decision-making does or should rest primarily on deductive inference as ‘cognition as conscious deduction’. This assumption has a venerable history. During the seventeenth century Leibniz advocated the development of an all-encompassing deductive theory consisting of a characteristica universalis (a universal scientific language), a calculus ratiocinator (a comprehensive set of rules whereby the symbolic expressions of the language could be transformed), and an ars combinatoria (a comprehensive set of definition-templates for the formation of new concepts). Disputes would then be settled by men of good will taking up their pens and saying “Calculemus”, “Let us calculate” (Heidema 1979). The best of human cognition was, in this vision, far removed from the blood and guts of emotional human beings, as one might expect in a philosophical landscape shaped largely by Descartes’ separation of mind (res cogitans) from body (res extensa).

Cognition as conscious deduction gives priority to a linguistic process by which sentences are transformed according to inference rules. The application of the rules is consciously trackable, controllable by acts of will, and effortful: deduction characteristically demands attentional resources, and when these are withdrawn reasoning deviates from the logical ideal, committing formal or informal fallacies or violating the laws of probability (Kahneman, Slovic, and Tversky 1982).

In partial harmony with the view of human cognition as conscious deduction is the approach called Symbolic AI and exemplified by Logic Theorist. A respected paradigm, Symbolic AI is founded on the assumption that intelligent behaviour can be achieved by agents who are provided only with declarative information (that is, sentences of some knowledge representation language) together with an inference engine (an algorithm for applying rules of inference to those sentences). Symbolic AI is also known by the name of its founding conjecture, the physical symbol system hypothesis, classically expounded by Newell and Simon (1976); a modern exposition is given in the textbook by Genesereth and Nilsson (1987). Symbolic AI has influenced the way in which researchers conceive of the generalisation from single agents to multi-agent systems (Konolige 1986). Symbolic AI simplifies cognition as conscious deduction by dropping the requirement of consciousness, and advocates of Symbolic AI have tried to explain consciousness away (Dennett 1991).
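The paradigm’s two ingredients, declarative sentences plus an inference engine, can be sketched by iterating the modus ponens rule above until no new sentences appear; the encoding is the same illustrative assumption as before:

```python
# An inference engine in the Symbolic AI sense: repeatedly apply a rule of
# inference to a set of declarative sentences until the set is closed.

def modus_ponens(beliefs):                  # as in the earlier sketch
    return {s[2] for s in beliefs
            if isinstance(s, tuple) and s[0] == "if" and s[1] in beliefs}

def forward_chain(beliefs, rule=modus_ponens):
    """Grow the belief set until the rule yields nothing new (a fixpoint)."""
    beliefs = set(beliefs)
    while True:
        new = rule(beliefs) - beliefs
        if not new:                         # deductively closed under the rule
            return beliefs
        beliefs |= new

closed = forward_chain({"John is snoring",
                        ("if", "John is snoring", "John is asleep"),
                        ("if", "John is asleep", "John is dreaming")})
# closed now also contains "John is asleep" and, by chaining, "John is dreaming"
```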

Symbolic AI has not been free of criticism. Experimentally, it has become clear that deduction without consciousness lacks the notion of relevance that accompanies attention, and automated reasoning systems exhibit a resultant, often crippling, inefficiency. The more successful automated reasoners, such as OTTER, cope with the generation of irrelevant inferences by inviting human agents to act as partners and to guide the inferential process, for example by imposing different weights on different symbols and thereby different priorities in the selection of sentences to be transformed (Wos and Pieper 1999). This restores consciousness, via the human agent, to the deductive process, and raises the question of whether a useful notion of relevance can be gained without such a full return to consciousness and its associated mechanism of attention.
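The effect of such weighting can be illustrated generically. The following sketch, in the spirit of a given-clause loop, uses an invented clause encoding and weight table; it is not OTTER’s actual input syntax:

```python
import heapq

# Weight-guided selection: the reasoner repeatedly picks the "lightest"
# available clause, where a human partner has assigned extra weight to
# symbols judged irrelevant. Encoding and weights are illustrative only.

def clause_weight(clause, symbol_weights):
    """Sum per-symbol weights; symbols not listed default to weight 1."""
    return sum(symbol_weights.get(sym, 1) for sym in clause)

def selection_order(clauses, symbol_weights):
    """Yield clauses lightest-first, the order a weighted prover explores them."""
    heap = [(clause_weight(c, symbol_weights), i, c)
            for i, c in enumerate(clauses)]
    heapq.heapify(heap)
    while heap:
        _, _, clause = heapq.heappop(heap)
        yield clause

weights = {"g": 10}                         # the human penalises symbol 'g'
clauses = [("p", "g"), ("p", "q"), ("q", "r")]
print(list(selection_order(clauses, weights)))
# [('p', 'q'), ('q', 'r'), ('p', 'g')] -- clauses mentioning 'g' wait their turn
```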

In a more theoretical vein, the Chinese Room thought experiment (Searle 1990) examines a scenario in which a man who knows no Chinese is enclosed in a room whose walls carry rules, written in English (the man’s native language), for transforming Chinese symbols. If the man were, with the aid of these rules, to respond to messages in Chinese by sending out Chinese symbols that made perfect sense to a Chinese-speaking recipient, would we consider the processing of the symbols to be intelligent behaviour? For Searle and many others it seems clear that the man’s uncomprehending execution of the transformations licensed by the rules would not constitute intelligent behaviour but, at best, an artful simulation of it. Another theoretical criticism (Penrose 1989, 1995, 1997) applies Gödel’s incompleteness theorem to argue that human understanding cannot fully be captured by deduction. Gödel’s theorem may be understood as exhibiting a number-theoretic sentence which is demonstrably true of the natural numbers but cannot be deduced, by the standard rules of inference, from the standard sort of axioms. Both Searle’s Chinese Room and Penrose’s application of Gödel’s incompleteness theorem are aimed at demolishing Symbolic AI, and while neither may have accomplished this aim conclusively, both have generated a great deal of discussion (Searle and commentators 1980, 1992; Preston and Bishop 2002; LaForte et al. 1998).
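For reference, the theorem Penrose invokes can be stated schematically; this is a standard formulation, not a quotation from Penrose:

```latex
% Gödel's first incompleteness theorem, schematically: for any consistent,
% recursively axiomatisable theory T extending elementary arithmetic, the
% Gödel sentence G_T is unprovable in T and yet true in the standard model
% of the natural numbers.
\[
  T \nvdash G_T \qquad \text{although} \qquad \mathbb{N} \models G_T .
\]
```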

The experience gained from automated reasoners like OTTER, from thought experiments like the Chinese Room, and from Gödel’s theorem can all be seen as non-psychological evidence that there is more to cognition than inference rules can capture. In the sections to follow we examine research in psychology that bears upon the assumption that cognition is conscious deduction. First we review evidence that the decision-making which governs our lives cannot all be consciously guided. Then we discuss, from a neuropsychological perspective, the role of non-symbolic representations. Finally we describe the mechanism by which both relevance and analogy enter into cognition, a mechanism whose action we call intuition. Having completed the psychological exploration, we muse on some of the implications for Logic and AI.




