A meta-level argumentation framework for representing and reasoning about disagreement
Mandy Haggith, PhD thesis, The University of Edinburgh




2.2 Argumentation
The second body of literature relevant to the study of disagreements concerns the way in which we produce arguments for our various points of view. The construction and criticism of arguments is the fundamental activity of debate and is thus crucial to any computer-based debating system.
2.2.1 Argumentation in philosophy
2.2.1.1 Traditions of argumentation
Greek beginnings
Any survey of the argumentation literature must begin with Aristotle (see, for example, [Aristotle 84] and [Evans 77]), the first great logician, who systematically laid out the forms of valid argument in the shape of the syllogistic figures. Aristotle's work is relevant here for two reasons: not only did it concern the structure of arguments, but the spirit of his enterprise was closer to the spirit of this one than much of twentieth century logic. Aristotle was concerned with argument structures, not as a purely formal and abstract study, but because he sought to strengthen and facilitate the practice of debate in Greek society. To Aristotle, the syllogistic forms were not idealisations, nor rules for manipulating symbols; they were clarifications of the real, practical arguments used in everyday fora of criminal justice, land allocation, political decision making and theological debate.
Thus the early days of logic provided structures for assessing the arguments and disagreements of the real world. Only comparatively recently has logic become a formal study in its own right.
The formalist tradition
This practical emphasis in Aristotle's work has been largely neglected by later generations of formal logicians, in particular since the 'mathematisation' of logic by Boole and of linguistic argumentation by Frege. Frege's Begriffsschrift (concept-script) was an explicit attempt to subsume all sound reasoning under a function-theoretic calculus [Frege 50]. By comparison, natural language appeared defective, as the following commentary on Frege's work makes clear:

'Natural language, [Frege] thought, is rife with vagueness, ambiguity, lack of logical perspicuity, and, indeed, logical incoherence. To a large degree he identified as "logical defects" in a language those features of it which fail to correspond with the articulations of his concept-script. The logical powers of concept-script in the presentation of arguments so far outstripped anything hitherto available that Frege unwittingly employed his invention as a yardstick against which to measure natural language.' [Baker & Hacker 84]


The same view still appears to be held by many artificial intelligence researchers today, and is reflected by the efforts of the natural language processing research community to capture the structure and semantics of natural language in formal logics which enable language to be processed (both understood and generated) by computer programs.
The elegance of Frege's formal systems led increasingly to a perception of natural language as inadequate for the expression of conclusive arguments and to the Russellian cult of the 'logically perfect language', of which Russell said 'a language of that sort will be completely analytic, and will show at a glance the logical structure of the facts asserted or denied' [Russell 56]. The implication is that the practical business of wrangling over disagreements lies outwith the boundaries of logic, because the very existence of disputes is rooted in the imperfections and vagaries of the natural language used to express them. It is thus unsurprising to read Russell's dismissal of Aristotle's conception of logic:
'I conclude that the Aristotelian doctrines ... are wholly false with the exception of the formal theory of the syllogism, which is unimportant. Any person in the present day who wishes to learn logic will be wasting his time if he reads Aristotle or any of his disciples.' [Russell 46]
The interpretivist tradition
Fortunately, the story does not end there. Throughout Western philosophy there has been an interest in debate or dialectic, and a dialectic tradition of philosophical discussion championed by philosophers such as Kant and Hegel. Kant was the first great meta-philosopher. At a time (the eighteenth century) when all great philosophers produced monumental theories to explain the way the world is, Kant created his own treatise on 'Pure Reason' [Kant 33]. But as well as being a conventional philosopher in this sense, he was also a professor and historian of philosophy, and thus also wrote about how others explain the world, and how those explanations relate to each other via the process of dialectic or argumentation. Kant aimed at untangling the huge philosophical controversy between the Rationalist tradition (headed by Descartes), which believed in the primacy of reason in our search for explanations of the world, and the Empiricist tradition (led by Locke and Hume), which believed that we must base our understanding of the world on our senses and experience. Kant's lasting contribution to this debate was to articulate the two sides of the argument, and although he also proposed an alternative way of explaining the world, this is less important here than the fact that he talked about the debate itself.
Hegel directly continued this enterprise of meta-philosophy, denying the existence of any ‘truth of the matter’, asserting instead that all truth is subjective, and relative to the time, place and culture in which we find ourselves. To Hegel, truth flows and changes through history by dialectic (see [Rosen 82]), which is a movement from a claim (or thesis) via its rebuttal (or antithesis) to a new position (or synthesis). He was the first interpretivist, claiming that no fact holds merely in itself but rather its truth must be determined relative to some context. Later interpretivists such as Heidegger go further, claiming that not only the truth, but also the meaning of statements is relative to a historical or personal context within which it must be interpreted before it can be understood.
Thus in much twentieth century philosophy, the role of disagreement is important, and the occurrence of different points of view is considered a part of the human condition, necessarily the case due to the variety of cultures and historical perspectives in human society. Argumentation and dialectic are central to our reasoning faculties. But in formal logic and mathematical philosophy as articulated by Russell, disagreements and inconsistencies are deeply problematic. The work of Wittgenstein spans these two schools of thought. His early work in the Tractatus Logico-Philosophicus [Wittgenstein 22] demonstrates his formalist philosophical beginnings, whilst his later work, particularly Philosophical Investigations [Wittgenstein 58], represents a radical shift towards an interpretivist stance with its emphasis on the intrinsic importance of the immediate context (or language game) to the meaning of everything we say.
The explosion of the study of knowledge, reasoning, logic and artificial intelligence in the second half of the twentieth century is rooted in the dynamism and controversy resulting from these two competing approaches - interpretivism and formalism.
2.2.1.2 Toulmin’s practical reasoning
In the 1950s there was a backlash against the formalist monopoly of logic, led by the philosopher of science, Stephen Toulmin. Toulmin’s central concern is the needs of professionals in fields such as law and the natural sciences to assess the conclusiveness of arguments in their domain and to carry out rigorous debate. Toulmin believes that the mathematical trends of formal logic have short-changed lawyers, scientists and certainly ordinary people, and in his book The Uses of Argument [Toulmin 58] he reasserts the practical nature of the logical enterprise.
His principal focus of attack is the Russellian claim that the only valid arguments are analytic ones, which when expressed in a 'logically perfect language' can be seen at a glance (or by a computer) to be sound. Instead he asserts the need for logic to encompass non-analytic, or substantive, arguments which can significantly add to our knowledge of the world. An important effect of the artificial constraint of analyticity of arguments in logic, he says, is that it has caused apparently insurmountable problems in the field of epistemology, or the study of knowledge. Those things which we claim to know, yet can justify only by substantive, non-analytic arguments (for example, results of induction in science or the reports of eye-witnesses in court) appear epistemologically dubious when our yardstick for accepting them is the analytic arguments of formal logic. Toulmin's claim is that this should cast doubt not on the substantive arguments but on the claims of formal logicians to be accounting for sound reasoning. 'The only real way out of these epistemological difficulties is ... giving up the analytic ideal.'
Toulmin's work is important to this thesis partly because his vision of a new field of logic is remarkably close to the reality of what is now studied in artificial intelligence. This new field has the following three requirements:
'(i) the need for a rapprochement between logic and epistemology, which will become not two subjects but one only;

(ii) the importance in logic of the comparative method - treating arguments in all fields as of equal interest and propriety, and so comparing and contrasting their structures without any suggestion that arguments in one field are "superior" to those in another; and

(iii) the reintroduction of historical, empirical and even - in a sense - anthropological considerations into the subject.' [Toulmin 58]
The importance of logic in AI and the claim that it is 'experimental epistemology' are indicators that AI is meeting the first need. The explicit and expanding level of interest in practical applications of AI, for example the representation of arguments from medicine, law, chemistry, geology and resource management in knowledge based systems, is an indication that the second requirement is also being met. The third claim, of the need for empirical, historical or anthropological study, is being vindicated by recent interest in the situatedness of intelligent behaviour and the cultural aspects of communication.

Having established the relevance of Toulmin’s work to AI, it is worth looking more closely at his analysis of the structure of arguments, and his criticisms of the approaches of mathematical logic. Like Wittgenstein, Toulmin grounds his investigation in our everyday language, and thus many of his most telling points derive from close scrutiny of the idiomatic way in which we express ourselves. It is this sort of scrutiny which leads him in his essay ‘The Layout of Arguments’ [Toulmin 58] to challenge some of the unjustified simplifications he believes pervade logicians’ work.


He starts with a metaphor: 'An argument is like an organism. It has both a gross anatomical structure, and a finer, as-it-were physiological one'. Toulmin wishes to challenge some of the assumptions made at the 'physiological level', in particular the notion of logical form and the traditional way in which we carve up and label the parts of arguments. The result is an alternative model of the structure of an argument to the 'premiss set leading to conclusion' model which pervades the study of logic.
Take the syllogism:
(1) Wendy was born in the Falkland Islands.

(2) All people born in the Falkland Islands are British Citizens.

(3) Therefore, Wendy is a British Citizen.
Traditionally, (3) would be called the conclusion, (1) the minor premiss (being about a specific named individual) and (2) the major premiss (being a general rule). Although Toulmin does not press the point, most formal logics (and rule-based systems) retain this structure, and provide general inference rules (for example, elimination of the universal quantifier, followed by modus ponens) which enable this argument to be classed as valid by virtue of its form. The syntactic structure of the argument is emphasised, and the fact that it is about the Falkland Islands, Wendy and notions of citizenship is sidelined as irrelevant to its validity.
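For concreteness, here is a toy sketch (my own illustration in Python, not from the thesis; the predicate names are invented) of how a rule-based system endorses this argument by form alone:

```python
# Minimal forward chaining: the syllogism is classed as valid purely by its
# syntactic shape; nothing about Wendy or the Falklands plays any role.

facts = {("born_in_falklands", "wendy")}           # minor premiss
rules = {"born_in_falklands": "british_citizen"}   # major premiss, as a rule

def forward_chain(facts, rules):
    """Apply universal elimination + modus ponens until nothing new appears."""
    derived = set(facts)
    while True:
        new = {(rules[p], x) for (p, x) in derived if p in rules} - derived
        if not new:
            return derived
        derived |= new

print(forward_chain(facts, rules))
# {('born_in_falklands', 'wendy'), ('british_citizen', 'wendy')}
```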
Toulmin introduces a more complex representation of this sort of argument by which he points out distinctions between different argument-types, which are lost under the syllogistic or classical interpretation. The following terminology is introduced:
A claim is the conclusion of the argument, the statement for which justification is required, in this case ‘Wendy is a British Citizen.’
A datum is a statement of fact offered in evidence for some claim, in this case, ‘Wendy was born in the Falkland Islands’.
A warrant is a general rule or principle which is used to support the step from a datum to a claim; it is a justification that such a step is legitimate - in this case 'All people born in the Falkland Islands are British citizens'.
A backing is a support for a warrant, for example, a reference to some Act of Parliament which states the citizenship of Falkland Islanders. A crucial factor for Toulmin in distinguishing warrants from backings is that backings vary widely depending on the field of discourse - so general warrants about legal matters should be expected to have backings of a very different sort to the backings for a general warrant in the physical sciences.
A rebuttal condition is a statement of the condition under which we would not expect a general warrant to hold, for example ‘unless Wendy is a naturalised citizen of another country’.
A qualifier is a modal expression describing the applicability of the warrant, for example ‘necessarily’, ‘presumably’, or ‘possibly’.
Thus Toulmin would lay the argument out as follows:

Wendy was born in the -----------------> so, presumably, Wendy is a British citizen
Falkland Islands              |                              |
                              |                              |
                            Since                          Unless
                     anybody born in the             she is a naturalised
                     Falkland Islands is a           citizen of another country
                     British citizen
                              |
                        On account of
                              |
                     Statute number ....

This fits a general template as follows:

DATUM -----------------> so, QUALIFIER, CLAIM
             |                      |
             |                      |
           Since                  Unless
          WARRANT                REBUTTAL
             |
       On account of
             |
          BACKING
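The template translates naturally into a record structure. The following sketch is my own illustration (not part of Toulmin's or the thesis's apparatus) of the six components as a Python data type:

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    """One micro-argument in Toulmin's layout."""
    datum: str                 # statement of fact offered in evidence
    claim: str                 # conclusion for which justification is required
    warrant: str               # general rule licensing the step from datum to claim
    backing: str               # field-dependent support for the warrant
    qualifier: str             # modal strength of the step, e.g. 'presumably'
    rebuttals: list = field(default_factory=list)  # conditions defeating the warrant

wendy = ToulminArgument(
    datum="Wendy was born in the Falkland Islands",
    claim="Wendy is a British citizen",
    warrant="Anybody born in the Falkland Islands is a British citizen",
    backing="Statute number ....",
    qualifier="presumably",
    rebuttals=["she is a naturalised citizen of another country"],
)
```

Unlike a bare 'premiss set and conclusion' pair, such a structure keeps the warrant, its backing and its rebuttal conditions distinct - which is precisely the distinction the discussion below turns on.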
This layout serves an important purpose: it enables Toulmin to demonstrate how the distinction between warrant and backing reveals subtle differences between arguments which would be represented identically in classical or syllogistic logic (or indeed, in rule-based systems). He is particularly suspicious of the way universal quantification conflates two different sorts of generality - those resulting from past experience or empirical study ('according to a survey, 95% of Falkland Islanders are British citizens') as opposed to claims about appropriate deduction ('You can safely assume anyone born in the Falklands is entitled to British citizenship'). The former is a backing, whereas the latter is a warrant. To Toulmin it is vital to make this distinction because it will depend on the circumstances of a particular argument whether the former could be accepted as an appropriate backing for the warrant 'All people born in the Falkland Islands are British Citizens'. In a court of law, for example, it is unlikely to stand up; in a demographic study it might suffice. Most importantly, in these two cases we would apply different criteria in assessing its veracity. (As an aside, it is also worth noting how closely the notions of warrants and rebuttal conditions correspond to the notions within AI of defaults and exceptions.)
In summary, then, Toulmin was led to doubt whether ‘the traditional pattern for analysing micro-arguments - ‘Minor Premiss, Major Premiss, so Conclusion’ - was complex enough to reflect all the distinctions forced upon us in the actual practice of argument-assessment’ and to conclude that many fundamental problems in the logical tradition (for example the problem of attaining knowledge from non-analytic arguments) result from ‘this vast initial over-simplification’.
Toulmin’s position has been presented at length here for various reasons. Firstly, it has influenced several researchers in AI recently, as will be seen below. Secondly, Toulmin’s work is rarely articulated in any detail and thus is not as well known as it should be, perhaps due to its radical stance (the same can be said of Wittgenstein). Thirdly, this thesis contains analysis of higher-order argumentation structures which capture some of Toulmin’s requirements, for details of which see section 5.3.
2.2.1.3 Quine’s Rationalism
Toulmin would perhaps be pleased by Quine and Ullian's attempt [Quine & Ullian 78] to bring logical analysis to the notice of people involved in real world argumentation. Their book, however, is a rather unsettling mixture of dogmatic rejection of 'anti-scientific' projects such as theology, coupled with wise advice to be open to arguments and changes to our belief systems. They provide detailed advice, in the form of six 'virtues' which we should strive towards in our analysis of hypotheses about the world. These six virtues have been cited by researchers into the coherence approach to belief revision (see section 2.1.3) as support for various characterisations of epistemic entrenchment. They are:
1. Conservativity: aiming to make minimal changes to the overall belief set.

2. Modesty: hypotheses should be the minimal required to explain observations.

3. Simplicity: hypotheses which are simply statable should be preferred over complex ones.

4. Generality: hypotheses which explain many observations should be preferred over those which explain only particular observations.

5. Refutability: there should be ways of subsequently proving a hypothesis to be false.

6. Precision: hypotheses which provide predictions which can be quantitatively tested should be preferred over qualitative generalities.


There are obvious tensions between these virtues, as shown, for example, by Einstein's theory of relativity, which, though simple and general, was neither modest nor conservative at the time, as it involved revising many basic physical beliefs about the nature of time and space, the ether and the applicability of Newtonian mechanics. Quine and Ullian relate situations in which their first two virtues can be overridden to Kuhn's notion of scientific revolution or paradigm shift [Kuhn 62].
In the final chapter of their book, Quine and Ullian provide some strategies for arguments. For example, 'To convince someone of something we work back to beliefs he already holds and argue from them as premises'. However, 'often there is also a negative element to contend with: actual disbelief of some of the needed premises', ie: disagreement. In this case there are two strategies. We can either attempt to overwhelm our listener by 'adduc[ing] such abundant considerations in favor of our thesis that we end up convincing the man in spite of his conflicting belief.' This strategy relates to the generation of arguments, corroborations and enlargements in the terminology of this thesis. The second strategy is to undermine our listener's arguments. 'We must directly challenge his conflicting belief... If he meets the challenge by mustering an argument in defense of that belief, then we attack the weakest of the supporting beliefs on which he rests that argument'. This corresponds directly to Toulmin's notion of undercutting an argument. An alternative characterisation of undermining, in terms of the more indirect attack on elaborations of a position, is given in section 5.3.
An interesting acknowledgement of the dynamism of debate is worth including here: 'What may occasionally happen is that our challenge to the conflicting belief is met by so able a defense that we find ourselves persuaded. In this event we are led to give up the very belief that we originally sought to propagate.' It is important to note that this account leaves open some crucial questions, such as how we assess the weakness of supports (in order to aim our challenges where they are likely to succeed) and what factors make a defense strong enough to convince us to change our mind. These are now central questions in belief revision research.
To conclude this section on philosophical approaches to argument, here is Quine and Ullian's inspired 'gardening metaphor' for debate:
To maintain our beliefs properly even for home consumption we must attend closely to how they are supported. A healthy garden of beliefs requires well-nourished roots and tireless pruning. When we want to get a belief of ours to flourish in someone else's garden, the question of support is doubled: we have to consider first what support sufficed for it at home and then how much of the same is ready for it in the new setting.

2.2.2 Argumentation in AI
Given the centrality of reasoning in artificial intelligence research, it is not surprising that argumentation has received some attention in AI as a reasoning technique.
The first area of relevant work concerns the maintenance of consistency amongst large sets of interrelated beliefs or propositions. The solutions to this problem presented in [Doyle 79] and [de Kleer 86], ie: truth or reason maintenance systems, and also belief revision systems, bear a strong similarity to the work presented in this thesis. They involve reasoning, in the face of inconsistency, about the chains of justifications used to support propositions in a network. They differ from the work here in that their aim is always to restore consistency, so the existence of disagreements or inconsistencies is only ever fleeting, and always resolved away. To do this, all propositions (with their justifications) are explicitly tagged as either believed or not (IN or OUT is the usual terminology, ie: in, or out of, the current set of believed, or well-supported, propositions). Two inconsistent propositions cannot both simultaneously be IN, and sophisticated truth maintenance algorithms have been devised to resolve such conflict situations. This effectively restricts the network of propositions to a representation of a single consistent viewpoint on the world, albeit one which knows about other possible consistent views which could be, but which are not currently, believed. For more discussion of such systems see section 2.1.3.
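The IN/OUT labelling idea can be made concrete with a small sketch. The following is my own simplified illustration of a justification network in Python, not Doyle's actual algorithm (which also handles OUT-lists and dependency-directed backtracking):

```python
# Toy justification network: a node is IN if it is a premise, or if at least
# one of its justifications has all of its supporting nodes IN.
# A deliberate simplification of Doyle-style JTMS, for illustration only.

premises = {"p", "q"}

# conclusion -> list of alternative support sets (each set is one justification)
justifications = {
    "r": [{"p", "q"}],
    "s": [{"r"}, {"t"}],   # s has two independent justifications
}

def label(premises, justifications):
    """Compute the set of IN nodes by propagating support to a fixpoint."""
    IN = set(premises)
    changed = True
    while changed:
        changed = False
        for node, supports in justifications.items():
            if node not in IN and any(s <= IN for s in supports):
                IN.add(node)
                changed = True
    return IN

print(sorted(label(premises, justifications)))   # ['p', 'q', 'r', 's']
```

If the premise 'p' is retracted, relabelling leaves 'r' and 's' OUT again; a real reason maintenance system does this incrementally rather than recomputing the whole network from scratch.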
Another important concern of AI research is the problem of uncertainty. Since the earliest expert systems, such as MYCIN [Buchanan & Shortliffe 84], were developed, it has been realised that ways are needed to reason with information which is uncertain, or with statements which are believed only to a certain degree, and that conclusions drawn on the basis of such reasoning should be qualified appropriately with some degree of belief or uncertainty. Most of the mainstream approaches to this problem have used the mathematical tools of probability theory, such as Bayes' theorem, Dempster-Shafer theory and so on, sometimes in modified or simplified form (eg: the manipulation algorithms for uncertainty handling in MYCIN). These require the level of belief in statements to be quantified, as some kind of probability, and algorithms to be devised to manipulate these quantities as reasoning is carried out, enabling the uncertainty levels of premises and inference rules to be propagated so that conclusions reflect this uncertainty. Where multiple conclusions can be drawn, the numbers allow comparisons between them as to which is most reliable. Such quantitative approaches to uncertainty will not concern us further here. Although they are useful in certain contexts, it has been widely argued (see next section) that such quantitative approaches to reasoning about uncertainty are limited, and that richer, symbolic and qualitative representation tools can provide a more plausible account of how we actually reason in the face of uncertainty.
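To give a flavour of the quantitative style being set aside, here is a small sketch of MYCIN-style certainty-factor arithmetic. The combination formulae are the standard published ones; the surrounding scaffolding and the numbers are illustrative inventions:

```python
def combine(cf1: float, cf2: float) -> float:
    """Combine two certainty factors (in [-1, 1]) for the same hypothesis."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

def propagate(rule_cf: float, evidence_cf: float) -> float:
    """Attenuate a rule's certainty factor by the certainty of its evidence."""
    return rule_cf * max(0.0, evidence_cf)

# Two independent rules each lend partial support to the same conclusion:
cf = combine(propagate(0.8, 0.9), propagate(0.6, 0.7))
print(round(cf, 3))   # 0.838
```

Note that the resulting number says nothing about why either rule supports the conclusion; it is exactly this loss of argumentative structure that the symbolic approaches discussed in the next section try to avoid.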
