2.2.2.1 Cohen’s Model of Endorsements
An important argument against the adequacy of probabilistic approaches to uncertainty, together with a qualitative technique for representing uncertainty, is provided by Cohen’s work on endorsements [Cohen 85]. My main interest here in Cohen’s technique is that it involves meta-level reasoning. He argues that it is not sufficient merely to propagate numerical uncertainty factors whilst reasoning. Instead he advocates reasoning about the nature of the uncertainty in order to facilitate reasoning with the underlying uncertain statements. To do this, the uncertainty in a statement or inference rule is not given a numerical degree; instead it is described by further statements (which Cohen calls endorsements) giving reasons for belief or disbelief. Endorsements include qualitative descriptions of the reliability of the information used to support a belief; they state to what degree and in what contexts the uncertainty is important; and they provide techniques for reducing or resolving it.
Cohen’s endorsements can thus be viewed as a representation which enables argumentation amongst a set of possibilities. Indeed Cohen freely describes the reasons for being more or less certain of a statement (ie: its endorsements) as arguments for and against those beliefs, and he uses examples of arguments to motivate his model and to provide him with examples of particular endorsements.
For example, he analyses in some detail the anthropological arguments presented by Walker and Leakey in an article in Scientific American, addressing various hypotheses about the number of different species represented in fossil remains found in East Turkana in Kenya. Three types of skulls were found, described as robust, gracile and erectus. The hypotheses are that these represent either one, two or three distinct species. They provide evidence relevant to the decision and they combine subsets of the evidence into arguments for and against the various hypotheses.
Cohen analyses these arguments according to how they contribute to the task of weighing up the evidence for and against each hypothesis, which he claims is a process like account-keeping.

‘The two steps in analyzing arguments in the model of endorsement are to decide which column of the ledger-book an argument belongs in and to do the “accounting” of arguments.’


To facilitate this he represents the arguments as conditionals in predicate logic, reasons with them using a backward chaining rule interpreter, and attaches them as endorsements to the statement representing their conclusion.
Arguments are only one aspect of endorsements: they provide good reason for believing a conclusion. Cohen’s endorsements also include statements of the reliability of information (eg: ‘Premise is highly believable’, ‘data reliability = poor’) and of relationships between arguments (eg: ‘Corroboration between this argument and the previous one’), which are used by his system (SOLOMON) to rank conclusions by comparing their endorsements. This is the main way in which Cohen’s system differs from truth maintenance systems (in which the kind of support for a proposition is irrelevant). In Cohen’s work, the nature of the support (or endorsement) is crucial for this ranking process.
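To make the flavour of this representation concrete, the following sketch shows endorsements attached to competing conclusions about the East Turkana fossils, and the conclusions ranked by comparing their endorsements. It is my own illustration, not SOLOMON’s actual machinery: the endorsement vocabulary and the simple counting scheme are invented for the example.

    # A sketch of endorsement-style uncertainty handling: qualitative
    # reasons for (dis)belief are attached to conclusions and compared,
    # instead of propagating numerical certainty factors.
    POSITIVE = {"premise is highly believable",
                "corroboration with another argument"}
    NEGATIVE = {"data reliability = poor",
                "premise contradicted elsewhere"}

    class Conclusion:
        def __init__(self, statement):
            self.statement = statement
            self.endorsements = []   # qualitative reasons for (dis)belief

        def endorse(self, reason):
            self.endorsements.append(reason)

    def rank(conclusions):
        """Order conclusions by comparing their endorsements."""
        def score(c):
            pos = sum(1 for e in c.endorsements if e in POSITIVE)
            neg = sum(1 for e in c.endorsements if e in NEGATIVE)
            return pos - neg
        return sorted(conclusions, key=score, reverse=True)

    one_species = Conclusion("the fossils represent one species")
    one_species.endorse("data reliability = poor")

    two_species = Conclusion("the fossils represent two species")
    two_species.endorse("premise is highly believable")
    two_species.endorse("corroboration with another argument")

    for c in rank([one_species, two_species]):
        print(c.statement, "-", c.endorsements)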
A key difference between Cohen’s model of endorsements and FORA is that in FORA arguments are not used for deciding the level of confidence we can have in their conclusions. Another is that in FORA the representation of arguments is more abstract: it does not involve representing them as conditionals in first order predicate logic, nor does it restrict reasoning to backward chaining or any particular object-level inference mechanism. A clearer distinction is made here between object-level decisions such as these and the meta-level representation of arguments. As a result, notions such as corroboration between arguments, which Cohen represents as just another form of endorsement, are defined formally and in general at the meta-level and can thus be recognised automatically by the system, rather than having to be tagged by hand onto conclusions in the rather inelegant way used in Cohen’s model (‘Corroboration between this argument and the previous one’ is added as an endorsement to a new conclusion identical to one drawn already using a different argument, and ‘Corroboration between this argument and the subsequent one’ is added to the first conclusion). It is thus possible that FORA could provide some useful extra facilities in endorsement-style uncertainty handling; however, this has not been attempted in the current work.
To summarise, Cohen contends that faced with uncertain conclusions we do not merely compare our levels of uncertainty in them, but that we jump to the meta-level and reason about the uncertainty itself, in particular looking for ways in which we could seek more information to strengthen or weaken our belief in one or other conclusion. Cohen’s work is similar to the work here in that it rejects the notion that inconsistencies should be immediately resolved (by numerical comparison of probabilities, for example), and advocates using them as an opportunity for further reasoning about the point at issue. His work is also interesting because he recognises that our levels of belief are not absolute and that reasons for believing a statement will be convincing in one situation but not necessarily in another. An argument for a belief is thus not intrinsically good or bad; it is more or less persuasive depending on the context it is used in, who it is used by, and so forth.
2.2.2.2 Argumentation at the Imperial Cancer Research Fund
At the Imperial Cancer Research Fund (ICRF) in London, there is a long-standing interest in using AI techniques to support the assessment of cancer risks, the diagnosis of cancer in patients and clinical treatment decision making. All three of these tasks involve reasoning in the face of uncertain information. The ICRF’s research has produced a body of work which, like Cohen’s, moves away from traditional probability-based uncertainty handling, and instead attempts to bring rigour to the process of reasoning with qualitative or semi-quantitative measures of uncertainty [Fox and Krause 92]. One thread which runs through this research is the use of argumentation as a way of weighing up conflicting or inconclusive evidence about alternative diagnoses, risk assessments or treatment regimes (eg: [Fox and Clark 91]). They have successfully demonstrated that it is advantageous to move away from classical decision theory (such as [Lindley 85]), in which decisions are made by weighing up quantitative probability and utility functions, towards a qualitative, argument-based approach to decision making as advocated by Toulmin [Fox et al 92]. Some of their more recent research concerns the use of multiple cooperating agents to represent the various points of view which need to be taken into account when making rigorous medical decisions [Fox et al 94], [Huang et al 94], [RED 94].
The most significant result of their research, from the point of view of this thesis, is the Logic of Argumentation, LA [Krause et al 95a], which is based on a subset of intuitionistic logic involving only ‘conjunction’ and ‘implication’, known as ‘minimal logic’. LA is an extension of this logic in which propositions are labelled with formal representations of arguments for them. An argument for a proposition is a lambda-calculus structure representing the form of a proof of the proposition; however, arguments can be constructed in this logic which may not in fact be logically sound (eg: they may be based on invalid assumptions). Thus these argument representations look like proofs, but may not actually have the force of logical proofs. Negation (¬P) is interpreted as P → ⊥, where ⊥ means ‘contradiction’. In most interesting cases it is possible to construct arguments for a proposition and also for the negation of the proposition from two different consistent sub-theories in the logic. Arguments are rejected if they are constructed from inconsistent sub-theories, which ensures that it is not possible to carry out the classical pathological derivation of anything from a contradiction (ex falso quodlibet).
So, to put it simply, all formulae in LA have two parts and are written arg : formula, where formula is an object-level proposition in minimal logic, and arg is a meta-level expression which describes how formula is justified, by providing an abstract representation of its proof in minimal logic.
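A small worked example of this labelling discipline may help. It is my own illustration in the style of LA, not drawn from [Krause et al 95a]: from an argument a for P and an argument f for the implication P → Q, implication elimination builds a composite argument for Q, with the lambda-term recording the shape of the derivation.

    a : P            an argument a for the proposition P
    f : P → Q        an argument f for the implication P → Q
    (f a) : Q        applying f to a yields an argument (f a) for Q
    g : P → ⊥        an argument g for ¬P, since ¬P abbreviates P → ⊥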
LA is, formally, a close relative of FORA. Chapter 6 in particular shows how FORA argument structures can be viewed as abstractions of proof-steps in an object level logic.
There are two principal differences. Firstly, in this thesis, the meta-level argument representations are more abstract than those used in LA and are not tied to any particular object-level language or proof system. The main claim of this thesis is that this is a useful feature of the work described here and that the greater level of abstraction enables a much higher degree of flexibility in manipulating and using arguments in various ways. In other words, I argue that it is useful to totally decouple the meta-level representation from its object-level counterpart, in order to achieve independence from any particular object-level representation language or inference mechanism.
The second difference between FORA and LA is that the purpose behind the development of LA is to enable arguments to be used as a qualitative way of reasoning about uncertainty. The reason for articulating arguments for and against a proposition is to enable decisions to be made as to whether it can be confidently concluded or should be rejected. The arguments are used to provide qualitative uncertainty measures [Elvang-Gøransson et al 93], [Krause et al 94], which enable apparent conflicts to be resolved. By contrast, the research involved in developing FORA has not been particularly concerned with issues of uncertainty; rather, the principal purpose behind its development has been to provide an account of reasoning about conflict and disagreement, and to allow exploration of multiple points of view. A central tenet is that no attempt is made to resolve conflicts or to ‘weigh up’ the arguments for competing opinions.
Recently the ICRF have promoted arguments to a more significant level, describing them in [Krause et al 95b] as first class objects which are themselves reasoned about, at the meta-level. The StAR program produces reports on the level of risk of chemical substances, by generating arguments for and against their being dangerous carcinogens. It does this using a rule-based system based on DEREK, an expert system for toxicity prediction, which reasons about the presence in a chemical substance of molecular structures known to indicate risks. These predictions are qualified by also including information about how the substance will be administered, and other factors about the patient, which may reduce or increase the level of risk. For example, a substance administered orally could be metabolised into a more dangerous substance, or alternatively it may move swiftly through the body and be excreted before causing dangerous side-effects. Thus StAR can combine arguments for carcinogenicity based on chemical analysis with other arguments or counter-arguments based on an understanding of human physiology. In order to produce useful risk reports, they address ways in which such collections of arguments can be evaluated and presented to a user. Their intention is ‘that the risk characterisation should be transparent to the recipient, and he or she acts as final arbiter’. For this to be feasible, the arguments presented to the user need to be both comprehensive and clear. The linguistic uncertainty descriptors mentioned earlier [Elvang-Gøransson et al 93] are used to support this.
This line of research indicates a real need for a more abstract approach to reasoning about arguments and independence from any one particular object-level representation language.
2.2.2.3 AI implementations of Toulmin’s practical reasoning
[Freeman & Farley 92] describes a formal theory of argumentation, closely based on Toulmin’s claim-warrant-backing structures, which they have used to implement an argumentation model [Freeman & Farley 93]. They draw the usual distinction between an argument as a structured entity, for explaining the grounds for a claim, and an argument as a dialectical process between two agents who disagree. Like the ICRF, they are largely concerned with representing and reasoning about the uncertainty of claims disputed in the argument process.
They produce a taxonomy of possible qualifiers (in Toulmin’s sense) which are closely related to the qualitative uncertainty factors derived by the ICRF. Their argument structures are based on Toulmin Argument Units (TAUs), their name for the argument representations described in section 2.2.1.2. Their contribution is to provide a recursive algorithm for the exhaustive generation of arguments for and against a claim. Further algorithms characterise a dialectical argumentation process as a series of moves, made in a game-theoretic, turn-taking manner.
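The following sketch conveys the flavour of such a generation algorithm. It is my own simplification, with warrants reduced to premise-conclusion pairs and qualifiers and backing omitted, so it should not be read as Freeman and Farley’s actual procedure; the knowledge base is the standard ‘Penguins-birds-flying’ example they also use.

    # Recursively enumerate argument trees for and against a claim.
    # A warrant is simplified to a (premises, conclusion) pair.
    WARRANTS = [
        (["penguin(x)"], "bird(x)"),
        (["bird(x)"], "flies(x)"),
        (["penguin(x)"], "not flies(x)"),
    ]
    FACTS = {"penguin(x)"}

    def arguments_for(claim, seen=frozenset()):
        """Yield argument trees supporting claim; a tree is
        (claim, subtrees) and facts are leaves."""
        if claim in FACTS:
            yield (claim, [])
        for premises, conclusion in WARRANTS:
            if conclusion == claim and claim not in seen:
                def combine(ps):
                    # all ways of supporting every premise in ps
                    if not ps:
                        yield []
                    else:
                        for head in arguments_for(ps[0], seen | {claim}):
                            for rest in combine(ps[1:]):
                                yield [head] + rest
                for subtrees in combine(premises):
                    yield (claim, subtrees)

    def negate(claim):
        return claim[4:] if claim.startswith("not ") else "not " + claim

    print("pro:", list(arguments_for("flies(x)")))
    print("con:", list(arguments_for(negate("flies(x)"))))

Run on this knowledge base, the algorithm finds one argument for flies(x) (via bird(x)) and one against it (directly from penguin(x)), exactly the conflicting pair a dialectical process would then dispute.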
Unlike the ICRF, but like much other AI work on argumentation, they do not address the issue of how to achieve the basic representation of argument steps which is required before these algorithms can be used: formalisation of the argument steps is taken as given. Their argument process is also strictly two-sided. Hence, as an implementation of Toulmin’s basic theory, this is useful research which successfully brings Toulmin into the AI arena, but it does not yet address the question of how to use argumentation to support the construction of knowledge based systems in controversial domains. They also do not address any examples of significant complexity, limiting themselves to standard (toy) examples such as Toulmin’s ‘British citizenship’ example, Poole’s ‘Republican-quaker-hawk-dove’ example, the standard default reasoning (‘Penguins-birds-flying’) example and Pearl’s ‘Rain-sprinkler-wet-grass’ causal reasoning example.
Another implementation of Toulmin-style argumentation is [Bench-Capon et al 91], which produces explanations of the reasoning carried out by logic programs, using Toulmin’s argument structures as the template for explanations. In this system the object-level reasoning is carried out by a logic program interpreter which requires that the logic program’s predicates be annotated with information describing which part of a Toulmin-style argument each represents (datum, qualifier, etc). This, naturally, facilitates the explanation of the proof. The description in chapter 6 of how object-level proofs can be parsed to provide meta-level arguments describing the reasoning carried out in the proof is a direct contribution to the problem tackled by Bench-Capon, and provides a more elegant method for carrying out this abstraction step without requiring modification of the object-level representation.
2.2.2.4 AI applied to legal argumentation
Given the importance of rigorous argumentation in courts of law, and the necessity of dealing with two inconsistent points of view (the prosecution and the defence) in the adversarial legal practice common to many countries, it comes as no surprise that the legal domain is a rich source of interesting argumentation-based AI applications. One of the most famous is the representation of the British Nationality Act as a logic program [Kowalski and Sergot 90], the purpose of which is to automatically construct an argument for or against a candidate’s eligibility for citizenship.
[Sartor 93] provides techniques whereby a logic program can assess arguments for and against a legal decision and, by providing orderings of the premises used by both sides, resolve the conflict between them. This work is particularly interesting from the point of view of this thesis as it provides rigorous formal definitions of the concepts of argument and counter-argument which can be stated using FORA’s language (see section 5.3). The argument-ordering devised by Sartor is similar to the qualitative uncertainty measures developed in [Elvang-Gøransson et al 93].
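The following sketch illustrates how such a premise ordering might be used to resolve a conflict. A common ‘weakest link’ comparison is assumed here; the priority values and the comparison scheme are my own illustration, not Sartor’s definitions.

    # Resolve a conflict between two arguments by ordering their
    # premises: an argument is only as strong as its weakest premise.
    priority = {"statute": 3, "regulation": 2, "custom": 1}

    def strength(premises):
        return min(priority[p] for p in premises)

    pro = ["statute", "regulation"]   # premises for the decision
    con = ["custom", "regulation"]    # premises of the counter-argument

    print("pro prevails" if strength(pro) > strength(con) else "con prevails")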
Another legal AI application which involves argumentation is [Yang et al 9?]. This work applies case-based reasoning techniques to the problem of formalising Scottish building regulations. Planning permission for buildings involves a combination of building rules (which are deliberately open-ended) and precedent, ie: reference to previous buildings which have been granted planning permission and which are deemed similar to the case in question. These precedents allow the rather vague rules to be interpreted in particular cases, through the construction of IBIS-like argument structures (see section 2.2.3.2) about the similarity or dissimilarity of the current case to previous cases, and about the relevance of various aspects of the regulations.
[Loui et al 93] is another example of work which seeks to allow legal reasoning to integrate both arguments based on general policies or rationales and arguments based on particular precedents. These are combined in a version of defeasible logic in order to provide syntactic methods for determining which of several arguments is the strongest. An interesting avenue this research has taken is to explore ways of drawing conclusions even in cases where there is no obvious winning argument. One such is the use of notional ‘resources’ which can be applied to arguments, to limit the time spent arguing for various options. [Loui 92] describes strategies for this, such as allocating resources to generating counter-arguments when a strong argument exists, so that after a finite amount of resource has been used up, it can be concluded that a position is upheld despite a maximal attempt to undermine it. A similar approach to providing heuristics for allocating resources to arguing is suggested in [Sillince 94]. This idea is reflected in the ‘arguer’ program described in chapter 4 of this thesis, which, given a strong argument for one position, suggests ways in which a user of the program can generate counter-arguments to it. Again, though, this thesis differs from Loui’s work in approach, by not aiming for computer-based conflict resolution, merely exploration, and by not being tied to a particular object-level logic.
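The resource-bounded strategy from [Loui 92] can be sketched as follows. This is my own simplification: the counter-argument generator is left as an abstract function rather than being grounded in Loui’s defeasible logic, and the budget is an invented parameter.

    # Spend a fixed budget of attempts trying to undermine a strong
    # argument; uphold it if the budget runs out with no counter-argument.
    def upheld(position, find_counter_argument, budget):
        for attempt in range(budget):
            counter = find_counter_argument(position, attempt)
            if counter is not None:
                return False, counter     # the position is undermined
        return True, None                 # maximal attempt failed: uphold

    status, counter = upheld("the lease should be granted",
                             lambda pos, i: None,   # every attempt fails here
                             budget=100)
    print("upheld" if status else "defeated by " + counter)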


2.2.2.5 AI applied to environmental argumentation
The environmental domain has only recently begun to gain the interest it deserves from AI researchers, and relatively few examples exist which attempt to use AI techniques to represent environmental or ecological arguments. One is [Robertson et al 91], in which ecological models are treated as arguments for particular patterns of relations occurring in an ecosystem. These arguments are represented as logic programs, which can be run as simulations. Some recent research [Robertson and Goldsborough 94] has taken a more direct approach to environmental argumentation, representing as logic programs the arguments for and against the selection of sites for wildlife reserves. Again, this is restricted to a particular object-level representation.
A much more informal approach to capturing the structure of agricultural extension documents has been developed in [Beck & Watson 92], but this has little to say about argument structures, being concerned more with the ‘molecular structure’ of sentences than their ‘ecological’ relationships with other statements.
[Haggith 95b] discusses the use of FORA for representing various arguments for and against the granting of oil production leases in the environmentally sensitive coastal zone off Alaska.

2.2.3 Less Formal Approaches to Argumentation
2.2.3.1 Linguistic Perspectives on Argumentation
Text Analysis
Linguists are interested in argumentation because a pattern of argument is often reflected in the structure of a piece of text. There is thus a large body of research which assesses how such structures affect the coherence of text. One particularly influential example is the Rhetorical Structure Theory (RST) of [Mann & Thompson 87a], [Mann et al 89]. This theory provides a set of structures which can be used to analyse (or ‘mark-up’) text. These structures are binary relations between pieces of text, or sets of pieces of text, so for example, one piece of text might be linked to another which provides background information. The first would be called the ‘nucleus’, the second would be called the ‘satellite’ and the link between them would be called ‘background’.
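The following sketch shows how such a mark-up might be represented. It is my own illustration: the example text is invented, and for simplicity the relation holds between single spans rather than sets of spans as RST also allows.

    # An RST-style mark-up: a named binary relation between a nucleus
    # span and a satellite span.
    from dataclasses import dataclass

    @dataclass
    class Span:
        text: str

    @dataclass
    class Relation:
        name: str        # eg: 'background', 'evidence', 'antithesis'
        nucleus: Span    # the more central piece of text
        satellite: Span  # the text standing in the named relation to it

    link = Relation(
        name="background",
        nucleus=Span("The committee rejected the proposal."),
        satellite=Span("The proposal had been submitted three times before."),
    )
    print(f"{link.name}: '{link.satellite.text}' -> '{link.nucleus.text}'")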
Although the idea of marking up textual structures to reveal the form of the underlying argument is interesting, and RST has inspired considerable research (particularly in the computational linguistics area, eg: [Knott & Dale 93], [Daradoumis 93]), it has some significant problems. In particular, it is not suitable as a representation framework for handling arguments from conflicting points of view, for the following reasons.
Firstly, in the various papers on RST a large set of binary relations is proposed. These relations are not given any formal interpretation, and it is assumed that people will use them in a uniform fashion. There is a laissez-faire attitude towards the generation of new relations, and as a result relations proliferate with no clear statement of their intended meaning. This problem is compounded by the lack of any indication of how Mann et al intend their theory to be used, other than as a way of organising linguistic analysis and spotting when texts do not ‘hang together’. In particular, there is no indication that their theory could be used for automated analysis, and its definitions are semantically unclear as a result. This makes it difficult to compare their structures with alternative ways of representing arguments, for example the method described in [Fisher 88].
The second problem is that this is a theory of ‘rhetoric’, which they claim is comprehensive enough to have been used in the complete analysis of over 400 sample texts, but the assumption is that these texts are monologues. The focus of this thesis is to support the analysis of debate, and thus the central need is for analysis tools which are not restricted to a single writer or voice. Some of their requirements for text coherence are thus inappropriately restrictive. Having said that, their relation definitions frequently refer to the reader of the text (in addition to the writer), and they are explicit that their theory of text is part of a theory of communication. The problem is that their theory restricts this communication to being one-directional, so there is total asymmetry between reader and writer. This is a significant problem, particularly given the central role of disagreement in argumentation; their theory cannot be expected to handle this notion effectively within such a constrained view of ‘rhetoric’. Even their notion of ‘antithesis’ (normally considered an aspect of dialectic) is treated in [Mann and Thompson 87b] as a style of monologue in which the writer attempts to counter a possible objection to their main claim.
More recently there has been some research done into extending RST to handle dialogues, for example, [Daradoumis 93a&b] and [Fawcett & Davies 92], but though these tackle the limitation of a ‘single voice’ in RST, the problem of informality remains.
A somewhat more formal approach to the representation of argument structures in rhetorical texts (newspaper editorials) is the OpEd system of [Alvarado 89] which is based on the notion of Argument Units representing patterns of support and attack relationships between beliefs. However, the underlying belief relationships suffer from the same proliferation of link-types as exemplified by RST.
