
4.3.4 Conclusions
The main conclusion drawn from the experiments was that people do disagree with each other and can provide complex arguments for their points of view, and that it is possible to record some of these arguments by means of simple interview and transcription processes. It is less useful to ask an informant to represent their knowledge directly in predicate logic (and this causes real problems for informants without logical training). Even with logically trained informants, the detail of how to represent the content of statements as predicate logic formulae can obscure the overall structure of their argument. The level at which argument structure can usefully be represented thus seems to be more abstract than FOPL.
Here are some guidelines for acquiring knowledge about controversial issues.
1. Provoke controversy by carefully choosing questions which encourage speculation.
2. Do not influence the informants to say things which they might not otherwise have said - it is particularly important to avoid asking leading questions which appear to have a ‘correct’ answer.
3. Encourage the informants not merely to state their opinion, but also to justify and explain it, and record those justifications as completely as possible. ‘Why?’ is a useful question to ask.
4. Record what informants have said as accurately and completely as possible, preferably using full transcription as this provides a resource that can be returned to later to trace the pattern of the informant’s argument.
5. Do not introduce unnecessary formal tools which may also give an erroneous impression of there being a ‘right’ way for the informant to contribute their views.

4.4 Mark-up using hypertext
A tool for helping in construction of FORA knowledge bases has been implemented in HyperCard, a hypertext tool for Macintosh computers. This tool supports the segmentation and encoding phases of transcript analysis, as described in the previous section. In other words it lets a user take a piece of text, select propositions from within it (segmentation) and state how they relate to other propositions in terms of the structure of the argument (encoding).
HyperCard works on the basis of small units of information called ‘cards’ (by analogy with the catalogue card systems in libraries) which are grouped into ‘stacks’ with similar structure or information content. Each card can include fields for text, buttons and graphics. Clicking the mouse-button over any of these items can cause something to happen, if the object clicked on has a program (called a ‘script’) associated with it.
The FORA mark-up tool consists of three stacks:
1. Mark-up, which contains the text to be segmented.
2. Propositions, which contains the segments of text, records the links between propositions or sets, and outputs the knowledge base in Prolog syntax.
3. Sets, which contains sets of propositions.
These are now described in turn, and followed by illustrative figures.
Stack 1. Mark-up:
This is the entry point and it consists of a stack of cards which contain the text to be segmented (See Figure 4.2). Any text file can be selected and loaded into the mark-up stack, by clicking the ‘read from file’ button. Several files of text can be included in the stack, and each card can have unique source information recorded in the top right hand text field. Long pieces of text will be spread out over several cards - arrows indicate how to ‘turn the page’ backwards and forwards through the text.
The text has two modes - read and segment. In read mode, clicking the mouse on text produces normal hypertext behaviour (i.e. if you click on part of a marked-up argument, you move to where it is represented). In segment mode, the user clicks the proposition button, then selects a piece of text with the mouse and makes it into a piece of hypertext. The selected text appears in an editable box so that the user can alter it (for example if, taken out of the flow of the passage, the text does not make complete sense standing alone because of anaphora such as 'it' which need to be replaced by noun phrases). This edited text becomes a proposition in FORA and is recorded along with its source information in the propositions stack, ready for linking with other propositions.
Stack 2. Propositions:
The propositions stack is where most of the important information and functionality of the tool is based. It contains the propositions (segments of text) selected by the user, and enables linking between them. Each card in the stack (see Figure 4.3) contains a single proposition and a record of all the links from it. These links are to either other propositions or sets. The link types are the four binary relations in the FORA language - disagree and equivalent (linking proposition to proposition), and justification and elaboration (linking proposition to set).
The user can look at the current relations by clicking on a relation button and the relations of that type from the current proposition appear in the big text field.
The user can also create a new relation linking from the current proposition by clicking the new relation button and the appropriate relation-type button. A message is then presented asking the user to move (by hypertext links, arrows etc) to the proposition or set they wish to link to, and then hit the ‘enter’ key which selects that point as the target of the link. They are then returned to the starting proposition and the new relation is recorded on its card. In the case of justifications and elaborations, the target is a set, so the user moves to the set stack and selects an existing set. They may need to create a new set, which they must do before being able to make the link. The details of how this is done are given in the section on the sets stack.
Finally, from the propositions stack the complete FORA knowledge base, consisting of propositions with sources and relations, can be generated in Prolog syntax and saved to a file readable by Prolog to be used by all the other tools which make up FORA. This is done by clicking the ‘Dump Code’ button and giving a file name for the knowledge base.
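For concreteness, the dumped knowledge base is simply a set of Prolog facts over five predicates: proposition/2 plus the four binary relations. The following is a minimal sketch of the schema with invented content (the real output for the aflatoxin debate appears in section 4.5):

% proposition(Text, Source) - a text segment tagged with its source.
proposition('The greenhouse effect is caused by human activity', kirk).

% justification(P, Set) and elaboration(P, Set) link a proposition
% to a set of propositions, represented as a Prolog list.
justification('The greenhouse effect is caused by human activity',
    ['Atmospheric carbon dioxide has risen since industrialisation']).

% disagree(P, Q) and equivalent(P, Q) link proposition to proposition.
disagree('The greenhouse effect is caused by human activity',
    'The greenhouse effect is a natural climatic fluctuation').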
Stack 3. Sets:
The sets stack cards each contain a set of propositions, and enable the user to carry out the usual set operations: union, intersection, addition of an element, deletion of an element, copying the set and deleting the whole set. This is done by clicking the appropriate button on the card (see Figure 4.4). Additions, unions and intersections are carried out in the same way as the linking of propositions. The user clicks a button at the starting point - say, 'add proposition' on a set containing one proposition - and is given a message to move to the chosen proposition (by hypertext links, arrows etc) and then hit the 'enter' key, which selects that point as the target of the link. The user is then returned to the set, which has the new proposition added to it. A new set can be created by taking the union or intersection of other sets, or by selecting a proposition in the propositions stack and making it into a new set by clicking the 'new set' button.
Constraints are imposed on changes and deletions. A set which is the target of a relation cannot be changed or deleted. A proposition which appears in a set or is related to or from another cannot be deleted. So to delete a proposition, first all relations to and from it must be deleted, then any sets in which it occurs must be deleted and only then can it be deleted itself.
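These constraints amount to referential-integrity checks over the knowledge base. A minimal sketch in Prolog of the check for propositions (the predicate names here are hypothetical, not part of the tool):

% A proposition is mentioned if any relation links to or from it,
% or if it occurs in a set used by a justification or elaboration.
mentioned(P) :- ( disagree(P, _) ; disagree(_, P) ).
mentioned(P) :- ( equivalent(P, _) ; equivalent(_, P) ).
mentioned(P) :- ( justification(P, _) ; elaboration(P, _) ).
mentioned(P) :-
    ( justification(_, Set) ; elaboration(_, Set) ),
    member(P, Set).

% A proposition may be deleted only if nothing mentions it.
deletable(P) :- proposition(P, _), \+ mentioned(P).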

Figure 4.2: Mark-up stack. The text shown is part of an interview transcript in which an informant (who chose the name Captain James T. Kirk) gave their views on the greenhouse effect.

Figure 4.3: Propositions stack

Figure 4.4: Sets stack

Figure 4.5: Mark-up of the aflatoxin debate, illustrating segmentation
4.5 Example: Representing the aflatoxin debate in FORA
To illustrate the construction of a knowledge base in FORA containing the conflicting views about aflatoxins, the text in section 3.3.1 can serve in place of a transcript and be loaded into the HyperCard mark-up tool. This allows the segmentation of text, and the mark-up or linking of segments, to be illustrated, and also returns us to the thread of the running example. The text appears on cards, as illustrated in Figure 4.5, and propositions and the arguments for them were marked up using the hypertext tool. The result, after using the tool and finally clicking the 'dump code' button to produce the FORA knowledge base in Prolog-readable syntax, is as follows.
FORA knowledge base
After segmentation, the propositions are as follows:
proposition('The FDA policy level for aflatoxins should be 20ppb', pro).
proposition('The maximum acceptable level of aflatoxins is 20ppb', pro).
proposition('Aflatoxins cause cancer in humans', pro).
proposition('There is no safe level of aflatoxins', pro).
proposition('The minimum detectable level of aflatoxins is 20ppb', journal).
proposition('Aflatoxins cause cancer in non-human animals', pro).
proposition('Aflatoxins are a kind of chemical', pro).
proposition('Animals are good indicators of the cancer risk of chemicals to humans', pro).
proposition('Extrapolation of the cancer risk of aflatoxins from animals to humans is reliable', pro).
proposition('The FDA policy level for aflatoxins should not be 20ppb', con).
proposition('20ppb is not the maximum acceptable level of aflatoxins', con).
proposition('The level of adverse effects of aflatoxins is 200ppb', con).
proposition('200ppb is much greater than 20ppb', con).
proposition('Extrapolation of cancer risk from one species to another is unreliable', con).
proposition('The effect of aflatoxins varies greatly between species', con).
proposition('There is a lack of scientific evidence showing no safe level of aflatoxins', con).
proposition('Extrapolation of cancer risk of aflatoxins from animals to humans is not reliable', con).
proposition('There is a lack of scientific evidence showing aflatoxins cause cancer in humans', con).
proposition('Aflatoxins cause cancer in animals', journal).
proposition('Aflatoxins cause liver toxicity in animals', journal).
The propositions are then linked to show how they relate to each other in terms of the arguments. There are many justifications, on both sides of the debate.

justification('The FDA policy level for aflatoxins should not be 20ppb',
    ['20ppb is not the maximum acceptable level of aflatoxins']).

justification('20ppb is not the maximum acceptable level of aflatoxins',
    ['The minimum detectable level of aflatoxins is 20ppb',
     'The level of adverse effects of aflatoxins is 200ppb',
     '200ppb is much greater than 20ppb']).

justification('Extrapolation of cancer risk of aflatoxins from animals to humans is not reliable',
    ['Extrapolation of cancer risk from one species to another is unreliable']).

justification('Extrapolation of cancer risk of aflatoxins from animals to humans is not reliable',
    ['The effect of aflatoxins varies greatly between species']).

justification('The FDA policy level for aflatoxins should be 20ppb',
    ['The maximum acceptable level of aflatoxins is 20ppb',
     'Aflatoxins cause cancer in humans']).

justification('The maximum acceptable level of aflatoxins is 20ppb',
    ['There is no safe level of aflatoxins',
     'The minimum detectable level of aflatoxins is 20ppb']).

justification('There is no safe level of aflatoxins',
    ['Aflatoxins cause cancer in humans']).

justification('Aflatoxins cause cancer in humans',
    ['Aflatoxins cause cancer in non-human animals',
     'Extrapolation of the cancer risk of aflatoxins from animals to humans is reliable']).

justification('Extrapolation of the cancer risk of aflatoxins from animals to humans is reliable',
    ['Aflatoxins are a kind of chemical',
     'Animals are good indicators of the cancer risk of chemicals to humans']).
The elaboration relation is used here to elaborate on the type of adverse effects caused by aflatoxins.
elaboration('The level of adverse effects of aflatoxins is 200ppb',
    ['Aflatoxins cause cancer in animals',
     'Aflatoxins cause liver toxicity in animals']).
The following disagreements can be given.
disagree('The FDA policy level for aflatoxins should be 20ppb',
    'The FDA policy level for aflatoxins should not be 20ppb').

disagree('The maximum acceptable level of aflatoxins is 20ppb',
    '20ppb is not the maximum acceptable level of aflatoxins').

disagree('Extrapolation of the cancer risk of aflatoxins from animals to humans is reliable',
    'Extrapolation of cancer risk of aflatoxins from animals to humans is not reliable').

disagree('There is no safe level of aflatoxins',
    'There is a lack of scientific evidence showing no safe level of aflatoxins').

disagree('Aflatoxins cause cancer in humans',
    'There is a lack of scientific evidence showing aflatoxins cause cancer in humans').
Finally, here is an example of two equivalent statements.
equivalent('Aflatoxins cause cancer in non-human animals',
    'Aflatoxins cause cancer in animals').
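Once dumped to file and consulted into Prolog, the knowledge base can be queried directly; for example (illustrative queries against the facts above):

?- justification('There is no safe level of aflatoxins', Support).
Support = ['Aflatoxins cause cancer in humans'].

?- disagree('Aflatoxins cause cancer in humans', Q).
Q = 'There is a lack of scientific evidence showing aflatoxins cause cancer in humans'.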
Argument for the FDA policy
The definitions of arguments allow the construction of the following argument for the FDA policy.
The FDA policy level for aflatoxins should be 20ppb <=
    The maximum acceptable level of aflatoxins is 20ppb <=
        There is no safe level of aflatoxins <=
            Aflatoxins cause cancer in humans <=
                Aflatoxins cause cancer in non-human animals <= Assumption
                Extrapolation of the cancer risk of aflatoxins from animals to humans is reliable <=
                    Aflatoxins are a kind of chemical <= Assumption
                    Animals are good indicators of the cancer risk of chemicals to humans <= Assumption
        The minimum detectable level of aflatoxins is 20ppb <= Assumption
    Aflatoxins cause cancer in humans <=
        Aflatoxins cause cancer in non-human animals <= Assumption
        Extrapolation of the cancer risk of aflatoxins from animals to humans is reliable <=
            Aflatoxins are a kind of chemical <= Assumption
            Animals are good indicators of the cancer risk of chemicals to humans <= Assumption
The argument is laid out in this manner (with indentation to show the nesting more clearly than brackets can) by one of the programs described in the next chapter - the ‘arguer’. The internal representation is just as defined in section 4.1. This is a complete, strong argument (see the definitions in section 4.1). It is complete in the FORA sense, because none of its leaf nodes are extendible (from the knowledge base just given) - all the leaf nodes’ justifications are empty sets. These are indicated here as ‘Assumptions’. The argument is strong in the FORA sense because none of the steps in the argument are elaborations or equivalences.
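To illustrate how such trees can be grown mechanically from the knowledge base, here is a minimal Prolog sketch of argument construction from justification/2 facts alone (it is not the arguer of Chapter 5; the <= operator declaration and predicate names are assumptions for illustration):

:- op(700, xfx, <=).

% argue(+P, -Tree): Tree is a strong argument for P. A proposition
% with no recorded justification is treated as having the empty
% justification set, and so becomes an assumption leaf.
argue(P, P <= assumption) :-
    \+ justification(P, _).
argue(P, P <= Trees) :-
    justification(P, Set),
    maplist(argue, Set, Trees).

Backtracking into the second clause enumerates alternative arguments when a proposition has more than one recorded justification, as 'Extrapolation of cancer risk of aflatoxins from animals to humans is not reliable' does above.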
There is also a complete, hybrid argument against the FDA policy: it is hybrid because it contains the elaboration and the equivalence. Note that using an equivalence as a support in an argument causes a loop (if P and Q are equivalent, this produces the argument P <= {Q <= {P <= ...}}). The hybrid argument against the FDA policy is as follows:
The FDA policy level for aflatoxins should not be 20ppb <=
    20ppb is not the maximum acceptable level of aflatoxins <=
        The minimum detectable level of aflatoxins is 20ppb <= Assumption
        The level of adverse effects of aflatoxins is 200ppb <=
            Aflatoxins cause cancer in animals <=
                Aflatoxins cause cancer in non-human animals <=
                    Aflatoxins cause cancer in animals <= Loop
            Aflatoxins cause liver toxicity in animals <= Assumption
        200ppb is much greater than 20ppb <= Assumption
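The 'Loop' marker in this trace is the equivalence cycle noted above. One standard way an arguer might cut such loops is to thread a list of the propositions already on the current branch and stop when one recurs; sketched here with assumed names, replacing the earlier two-clause argue/2 sketch and adding equivalence steps:

argue(P, Tree) :-
    argue(P, [], Tree).

% A proposition recurring on its own branch becomes a loop leaf.
argue(P, Seen, P <= loop) :-
    member(P, Seen), !.
argue(P, Seen, P <= Trees) :-
    justification(P, Set),
    maplist(argue_on([P|Seen]), Set, Trees).
% An equivalent proposition may also support P; this step makes
% the argument hybrid rather than strong.
argue(P, Seen, P <= [Tree]) :-
    equiv(P, Q),
    argue(Q, [P|Seen], Tree).
argue(P, _Seen, P <= assumption) :-
    \+ justification(P, _).

argue_on(Seen, P, Tree) :- argue(P, Seen, Tree).

% Equivalence is symmetric, so try the stored fact both ways round.
equiv(P, Q) :- equivalent(P, Q).
equiv(P, Q) :- equivalent(Q, P).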


4.6 Summary
This chapter has introduced the meta-level framework which is used as the representation language within FORA. It is abstract and simple, focussing on argument structures defined in language-independent terms. It is intended to capture the form of arguments as represented in ordinary knowledge representation languages, such as rules, or logics of various types, and it is abstract enough not to presuppose any particular object-level representation language.
The second half of the chapter discussed how to construct knowledge bases using this language. It addressed the general issue of acquiring knowledge from multiple conflicting sources and summarised some experiments in knowledge acquisition. Then a hypertext tool for helping to mark-up texts into FORA’s argument representation language was described. Finally, the FORA language and mark-up tool were illustrated with the aflatoxins example.
A formal semantics (model theory) for this framework has not been included here. Chapter 5 provides an operational semantics for the language: it describes its meaning by defining how it is used. Chapters 6 and 7 describe how to map between FORA and object-level logics, and the reader who is interested in semantics will find there definitions of the kinds of logical interpretation that are possible for statements in the FORA language. These two chapters define correspondences between the meta-level relations and object-level structures in particular languages (propositional logic and first order predicate logic respectively). FOPL was chosen as the object-level language for these examples so that the mappings can be thought of as a possible semantics of FORA: the meaning of the relations in FORA is given by showing their relationship to proof structures in FOPL. However, it is important to note that because arguments do not have the logical force of proofs, an argument may sometimes correspond to a set of propositions and inference steps which is not logically valid or sound, for example because assumptions do not hold, or because it uses invalid inference steps, such as abduction.
It would be possible to provide a formal model theory for the framework, and the logic of [Rescher and Brandom 79] would be a good starting point for this enterprise, providing semantics of the primitive relations in terms of superpositions of possible worlds. For the present, however, the main aim is to investigate the computational and practical properties of the framework, articulating its meaning in terms of its use. The next chapter therefore describes some tools for exploring and using FORA knowledge bases.

Chapter 5
Using FORA

This chapter describes some tools for using FORA knowledge bases. The chapter begins in section 5.1 with a discussion of who FORA is intended to be used by, and what kind of functionality FORA needs to provide for them. Section 5.2 describes the basic implementation of FORA, including the simplest FORA tool for exploring knowledge bases, called explorer. Section 5.3 is the core of this chapter, and contains formal definitions of a set of argumentation structures built from the primitives of the FORA language presented in the previous chapter. These structures are used by a tool called arguer, described in section 5.4.2, for exploring, modifying and extending arguments in a knowledge base. This section includes an illustration of how the argumentation structures are used to explore and extend the aflatoxin debate. Section 5.4.3 gives a brief description of a ‘Devil’s Advocate’ implemented by an MSc student as an argumentation tutor based on the FORA argumentation structures. This chapter is summarised in section 5.5.


5.1 FORA users and their requirements
My thesis is that formal and abstract representations of arguments are useful. The FORA language presented in the previous chapter enables formal, abstract representation of arguments, and so my task is now to argue that such representations are useful. The first question to answer is ‘useful for whom?’ In this section I will describe the intended users of FORA and by doing so provide some motivation for the suite of tools which have been implemented in the FORA system.
5.1.1 Users
The primary intended users of FORA are knowledge engineers whose task is to construct, revise and maintain knowledge based systems representing some area of expertise. In particular, my aim has been to support knowledge engineering in difficult domains, where by difficult I mean controversial, rapidly changing, partially understood, or fragmented.
The domain of toxicological risk assessment is difficult in all these senses. It is controversial, because many different vested interests with conflicting goals have a stake in risk assessment decisions (for example, food producers who want to minimise the amount of their produce which must be destroyed due to contamination by high-risk chemicals, versus medical experts who want to minimise any extra risk of cancer). It is rapidly changing: research continuously provides new evidence about the risks of chemicals, analytical chemistry continually improves detection techniques, and the role and distribution of risky chemicals change from day to day. It is only partially understood: there are many areas of toxicology (for example, the precise mechanisms by which chemical substances cause cancer) in which understanding is patchy. It is also fragmented, as assessment of risks involves consideration of many different issues: chemical transport mechanisms, human exposure patterns, epidemiological evidence, mechanisms of pathology, consumption, metabolism and excretion, evidence from animal experiments, testing of food products, agricultural methods of using chemicals, and so on. No one person can be expert in all the areas which need to be taken into account, and so risk assessment involves integrating knowledge from different specialisms. This is a profoundly difficult task - it involves, for example, combining a very limited understanding of actual levels of exposure to aflatoxins through food consumption with a good understanding of how we metabolise some of the aflatoxin compounds and a poor understanding of the mechanisms by which some aflatoxins cause disease.
In difficult domains such as these, knowledge engineers need to build knowledge bases with an eye on the future. The knowledge bases are likely to need to be updated and revised in the light of new evidence. Some of this evidence may lower uncertainty levels, but we cannot assume that this is the case. New evidence (like the combination of existing evidence) may well conflict with the contents of current knowledge bases, and so require significant re-engineering of knowledge bases. In addition, knowledge based systems have generally been constructed to support reasoning in a highly specific corner of expertise, but in difficult domains, conclusions from such corners need to be combined with evidence from other corners, or with currently useful rules of thumb from areas which are ‘wide open’ and where research is only just beginning.
