Until recently the consensus was that Logic was about deductive inference. Despite the formalisation of truth and the invention of model theory by Alfred Tarski, great importance was attached to ‘completeness proofs’. A completeness proof is a demonstration that the notion of semantic consequence (i.e. this will be true whenever that is true) can be simulated by some algorithm based on syntactic inference rules. Much fuss was made about different proof architectures, supporting different algorithms — Hilbert-style, Gentzen-style, natural deduction, semantic tableaux, resolution. And then something interesting began to happen in Artificial Intelligence. John McCarthy and others began to formalise common-sense reasoning.
Common-sense reasoning is what we use when we infer that, since something is a bird, it can fly. The inference is defeasible, since there are circumstances in which the conclusion would be false, for instance when the bird is a penguin, or is dead, or has had its wings clipped, or has got its feet stuck in wet cement. What sanctions such a defeasible inference is not a traditional rule of inference, for traditional rules of inference are obliged always to produce conclusions which will be true if the premises are true. Defeasible inferences are supported by heuristics, also called default rules. Informally, a default rule contains information to the effect that something is normally thus and so: for example, if something is a bird, then it is normally able to fly. Thus a default rule differs from a universal generalisation (say, ‘All birds are able to fly’) by permitting the existence of exceptions.
It should be noted that default rules need not be, and usually are not, probabilistic or statistical in nature. When a human driver enters an intersection because she has defeasibly inferred, from the fact that the traffic light for the cross traffic has turned red, that the cross traffic will stop, that inference is not usually supported by tables of relative frequencies from which earnest calculations are made. In general, humans do not use numerical calculations to support their common-sense inferences. When a human fielder moves to catch a cricket ball hit high into the air by an enterprising batsman, the trajectory of the ball is not plotted by solving differential equations; instead the fielder uses a simple heuristic: keeping constant the angle between the line of gaze to the ball and the horizon (McLeod and Dienes 1996). The heuristic can be shown numerically to lead to the ball eventually being caught in front of the eyes, but of course this mathematical demonstration has nothing to do with the common-sense inference of the fielder.
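That numerical demonstration can itself be sketched in a few lines. All quantities below (launch speed, launch angle, the fielder's starting distance) are illustrative assumptions, not data from the cited study: a fielder who simply repositions so that the ball stays at a fixed angle above the horizon ends up at the landing point, with no differential equations solved along the way.

```python
import math

# Illustrative numbers: a ball hit at 30 m/s at 60 degrees elevation,
# with the fielder standing 40 m from the batsman.
g = 9.81
v, elev = 30.0, math.radians(60)
vx, vy = v * math.cos(elev), v * math.sin(elev)
fielder = 40.0
dt = 0.001

# Let the ball get airborne, then fix the gaze angle above the horizon.
t = 0.5
ball_x, ball_y = vx * t, vy * t - 0.5 * g * t * t
gaze = math.atan2(ball_y, fielder - ball_x)

while True:
    t += dt
    ball_x = vx * t
    ball_y = vy * t - 0.5 * g * t * t
    if ball_y <= 0:
        break
    # Step to the spot from which the ball sits at the same gaze angle.
    fielder = ball_x + ball_y / math.tan(gaze)

landing = vx * (2 * vy / g)
print(abs(fielder - landing) < 0.5)  # True: the fielder converges on the landing point
```

The geometry does the work: while the gaze angle is held fixed, the fielder's distance from the ball shrinks in proportion to the ball's height, so as the ball descends to eye level the two positions coincide.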
The formal representation of default rules in Logic, and the definition of the consequence relations which connect premises to the conclusions that may defeasibly be inferred according to the default rules, has been accomplished in various ways, of which the most successful and impressive are the nonmonotonic logics having a preferential semantics. Such logics originated in McCarthy’s idea of circumscription (McCarthy 1980, 1986). In circumscription, a predicate, ‘is abnormal’, was allowed to have different extensions in different models, allowing those models in which the extension was minimal to be differentiated as the most normal (or least abnormal) models. In effect, models of a given premise (e.g. Tweety is a bird) are ordered in a way that reflects the relevant default rule, and a conclusion (e.g. Tweety can fly) is justified if it is true in all the minimal (i.e. most normal) models. An elegant generalisation of circumscription was achieved by Yoav Shoham and subsequently refined by Kraus, Lehmann, and Magidor (Shoham 1987; Kraus, Lehmann, and Magidor 1990; Lehmann and Magidor 1992). The general approach is called preferential semantics because default rules are represented by preference orderings of the states of the world, and the consequence relation of defeasible entailment is defined by requiring that α defeasibly entails β if and only if β is true in all the models of α that are most preferred according to the relevant ordering.
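The minimal-model definition can be made concrete in a short sketch. The atoms, the hand-built plausibility ranking, and the Tweety-style defaults below are all assumptions for illustration; in a real system such as rational closure the ranking would be computed from the default rules rather than written by hand.

```python
from itertools import product

# Worlds are valuations of three atoms; a hand-built ranking plays the
# role of the preference ordering (lower rank = more normal world).
atoms = ("bird", "penguin", "flies")
worlds = [dict(zip(atoms, v)) for v in product([False, True], repeat=3)]

def rank(w):
    # Most normal: birds fly, penguins are flightless birds.
    if ((not w["bird"] or w["flies"])
            and (not w["penguin"] or (w["bird"] and not w["flies"]))):
        return 0
    # Next: only the penguin defaults hold (an exceptional, flightless bird).
    if not w["penguin"] or (w["bird"] and not w["flies"]):
        return 1
    return 2  # flying or birdless penguins: least normal

def entails(premise, conclusion):
    """premise defeasibly entails conclusion iff the conclusion holds in
    every minimal (most preferred) model of the premise."""
    models = [w for w in worlds if premise(w)]
    best = min(rank(w) for w in models)
    return all(conclusion(w) for w in models if rank(w) == best)

print(entails(lambda w: w["bird"], lambda w: w["flies"]))         # True
print(entails(lambda w: w["penguin"], lambda w: w["flies"]))      # False
print(entails(lambda w: w["penguin"], lambda w: not w["flies"]))  # True
```

The sketch also exhibits nonmonotonicity: ‘bird’ defeasibly entails ‘flies’, yet strengthening the premise to ‘penguin’ yields the opposite conclusion, something no classical consequence relation permits.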
Because every default rule is represented by a different preference ordering on states of the world, there is no way, even in principle, to simulate the defeasible consequence relation by an algorithm based on syntactic inference rules. The shape of a symbol has simply no connection with the notion of normality or preference that may apply. The logics which formalise common-sense inferences have thus been liberated from, or deprived of (according to one’s taste), the assumption that Logic is about deductive inference and completeness proofs. No longer are premise and conclusion connected, even in principle, by a sequence of steps each representing the transformation wrought by a syntactic rule. No longer can the logician monitor the movement from premise to conclusion as it inches along the sequence of intermediate steps. Instead the defeasible consequence relation contains a number of one-step transitions from premise to conclusion, and each of these single-step transitions is supported by a preference, or disposition, realised as an ordering on states of the world. Thus nonmonotonic logics based on preferential semantics formalise intuition, not conscious deduction!
The development of such nonmonotonic logics has occurred without much attempt to base them on a systematic psychological foundation. But at the same time, a psychological foundation has begun to emerge as a by-product of neurobiological and psychological research. It is possible that, as we learn more about the dispositional representations via which somatic markers exercise their influence, the kinds of ordering relations used in nonmonotonic logics may change. One may anticipate that the semantics of nonmonotonic logics will be elaborated so as to provide for explicit representations of the momentary thought-action repertoires of the agent as these are mediated by whatever preference (= dispositional) ordering has been activated. At present, nonmonotonic logics are provided with default rules a priori; at some point one would like to see the development of a mechanism by which such rules are formed on the basis of the agent’s interactions with its environment.
Nonmonotonic logics with preferential semantics are closely related to AGM belief change theory (Alchourrón, Gärdenfors, and Makinson 1985), although the development was independent. It has been shown that every AGM belief revision operation can be carried out by simply taking the defeasible consequences according to an appropriate preference ordering (Meyer 1999, p. 75). Not only does this provide additional evidence that preferential semantics is the right way to formalise defeasible inference, it also provides a potentially fruitful perspective on a long-standing problem in belief change theory. AGM theory explains how an agent who is in a given ‘epistemic state’ should revise her beliefs upon receipt of new information. But what should the new epistemic state of the agent be? The new state is required if the agent is to be capable of iterated revision rather than being restricted to one-shot revision. Since epistemic states correspond to preference orderings, which represent default rules, it follows that the problem of iterated belief revision is just a different form of the problem of how default rules may be formed.
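The correspondence, and the gap it exposes, can be illustrated with a toy sketch. The atoms and the ranking below are hypothetical: revision by new information amounts to collecting the most preferred models of that information, exactly as in defeasible consequence, and the sketch makes visible that the operation returns beliefs but no new ordering, which is precisely the iterated-revision problem.

```python
from itertools import product

# An epistemic state modelled as a plausibility ranking over worlds
# (hypothetical three-atom example; lower rank = more plausible).
atoms = ("a", "b", "c")
worlds = [dict(zip(atoms, v)) for v in product([False, True], repeat=3)]
rank = {tuple(w.values()): sum(w.values()) for w in worlds}  # prefer fewer truths

def revise(new_info):
    """AGM-style revision as defeasible consequence: the revised belief
    set is whatever holds in the most plausible worlds satisfying the
    new information."""
    candidates = [w for w in worlds if new_info(w)]
    best = min(rank[tuple(w.values())] for w in candidates)
    return [w for w in candidates if rank[tuple(w.values())] == best]

# Revising by 'a or b' yields the two most plausible such worlds...
result = revise(lambda w: w["a"] or w["b"])
print(len(result))  # 2
# ...but revise returns beliefs, not a new ranking, so a second revision
# has no updated ordering to draw on: the iterated-revision problem.
```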
Logic-based AI has become something very different from Symbolic AI, since the emphasis in Logic has moved away from deductive inference and towards semantics in general, preference orderings in particular. Thus logic-based AI is compatible with the role embodiment increasingly plays in AI. Once again, there is tremendous potential for a psychologically realistic foundation to play a role. Ever-increasing attention is paid in robotics to recognising, simulating, and even incorporating emotions (or emotion-analogs, if the reader prefers) (Picard 1998). But existing attempts lack the unification of emotions, drives, and motivations that can be achieved when a primordial body map is used as the basis for notions of affect, somatic markers, and momentary thought-action repertoires. A description of how artificial agents equipped with an architecture for somatic markers would differ from existing software and hardware realisations is given in Heidema and Labuschagne (2004). Such ‘emancipated’ agents would be able to work co-operatively in multi-agent societies rather than psychopathically seeking to maximise their own advantage, since the architecture required for somatic markers would allow social emotions such as sympathy and shame to modulate behaviour. And most remarkably, such emancipated agents would display intuition. Arguably, this constitutes one meaningful criterion for the achievement of Strong AI.