4 Local Holism and the meaning of the words

Having reviewed the different solutions to the problem of holism, with their limitations, one last alternative remains to be discussed, a solution which seems to lurk behind the Multi Context theories: "local" holism, a label which also fits some of Wittgenstein's remarks. While there is a general tendency to criticize global holism, local holism is nowadays considered a viable option (see for instance Peacocke 1997). However, the label is still vague, and many interpretations of what "local holism" is are available. A first basic definition of local holism would restrict the meaning of linguistic expressions to the specific contexts in which they are used. A further specification might claim that the meaning of a word or a sentence depends on a "local" theory, and that - given no difference in principle between analytic and synthetic sentences - it depends on all the possible beliefs or inferences enclosed in that theory.

There is an ambiguity here. (Cognitive) contexts may be used to represent the sets of beliefs of individuals. Given that each person has her personal touch, and that semantic twins do not exist in principle (at the very least, when both say "I" they refer to different persons), no two cognitive contexts can be identical with one another. Representing cognitive contexts as sets of beliefs of individuals may have some advantage in solving certain problems (differences of points of view, and so on). However, contexts may also represent collective activity or shared information and action; they may be given as representations of what goes on in a typical situation (as frames or scripts). In this case it is not always necessary to represent the point of view of the individual, but rather the point of view of the "social setting", the result of the interactions among individuals. Think for instance of a "restaurant" script and of the concept "dish": I don't care whether the waiter believes that her aunt has a set of dishes with golden stripes; I care that the waiter believes that dishes are to be used for food, and that they are fragile. Are we going back to some form of atomism inside each context? Actually this is not the case. I am not suggesting that we define each concept in isolation; I do, however, think it important not to forget the basic core of "typicality" or "default" reasoning, applied to meaning.
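
To make the point concrete, here is a minimal sketch (my own; the class and all the names in it are illustrative, not drawn from any system discussed here) of a script-level frame whose slots carry the shared, defeasible defaults of the social setting rather than any individual's beliefs:

```python
# A minimal sketch (illustrative names throughout): a "script" concept as a
# frame whose slots hold typical, defeasible values shared by the setting.

class Frame:
    def __init__(self, name, defaults):
        self.name = name
        self.defaults = dict(defaults)   # typical, defeasible slot values

    def value(self, slot, overrides=None):
        # An explicit override (the current context) beats the default.
        if overrides and slot in overrides:
            return overrides[slot]
        return self.defaults.get(slot)

# The "dish" concept as it figures in the restaurant script: what matters is
# the shared default (used for food, fragile), not the waiter's private
# beliefs about her aunt's gold-striped dishes.
dish = Frame("dish", {"used_for": "food", "fragile": True})

print(dish.value("used_for"))                      # -> "food" (default)
print(dish.value("fragile", {"fragile": False}))   # -> False (overridden)
```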

There is a solid core which is represented by the typical situations in which, as Wittgenstein says, "a word is at home", or in which a concept is typically used. The great lesson of frame nets is that we don't need a strict definition or strict (necessary & sufficient) meaning postulates, but just defeasible ones. The point is that if we abandon the principles behind the analytic-synthetic distinction, we may still find pragmatic principles which justify a distinction between two sorts of inferences: those defining the basic uses of words, and those defining occasional applications of them. A philosophical attitude of this kind is well followed in the practice of AI: any "viable" representation of meaning as inferential role (that is, one which can be implemented in a system) is bound not to include all (or most) possible inferences. Many AI programs may be considered attempts to respect this restriction in defining meaning and understanding. There have been examples since the beginning of AI. McCarthy ("Advice Taker") defines the concept of "immediate inference": in order to understand a situation we do not need to make explicit all the inferences from the relevant premises, but only their immediate consequences, beginning with the inferences which require just one step in the deductive process. The idea is barely sketched; however, it does point to the necessity of controlling the risk of a combinatorial explosion of inferences. Norvig 1989 designed an algorithm which quickly computes a limited set of proper inferences, without computing all types of inferences. Proper inferences are defined as plausible, relevant, and easy. Quick computation of a small set of proper inferences yields a partial interpretation, which can be used as input for further processing. In many frame systems we find a distinction between an assertional and a definitional part. The definitional part is not linked to the analytic-synthetic distinction, but is prompted by the system's need to run properly. We may use the idea of a frame (which can be represented as a set of default inferences) as a substitute for the idea of "basic" or "literal" meaning.
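
As a rough illustration of the kind of bounded inference just described (the rule format, function name, and step bound are my assumptions; this is neither McCarthy's nor Norvig's actual algorithm), one might limit forward chaining to the consequences reachable within a fixed number of deductive steps:

```python
# A sketch of "immediate inference": draw only the conclusions at most
# max_steps away from the given facts, instead of closing under all
# consequences. Each rule is (frozenset_of_premises, conclusion).

def immediate_consequences(facts, rules, max_steps=1):
    """Forward-chain for at most max_steps rounds over defeasible rules."""
    known = set(facts)
    for _ in range(max_steps):
        new = {concl for prems, concl in rules
               if prems <= known and concl not in known}
        if not new:           # nothing more derivable: stop early
            break
        known |= new          # only these bounded inferences are added
    return known

rules = [
    (frozenset({"in_restaurant"}), "will_order"),
    (frozenset({"will_order"}), "will_pay"),   # two steps from the facts
]
print(immediate_consequences({"in_restaurant"}, rules))
# -> {'in_restaurant', 'will_order'}; "will_pay" would need a second step
```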

On Gricean lines, many authors suggest that literal meaning is to be computed before any possible, and possibly deviant, interpretation. Grice's hypothesis has been criticized on grounds of psychological plausibility and of experimental results (see Gibbs 1993) which in principle make it dubious to speak of a unique "literal" meaning of a word. Reinforced by the fact that each word and each concept may have many different uses and interpretations in different contexts, the concept of basic or literal meaning has therefore been challenged. The extreme alternative is to say that the meaning of a word is always relative to the specific context in which it is uttered, and that we therefore have to decide on the context before deciding which meaning is at stake. Searle 1979 was one of the first authors to insist on the underdetermination of meaning with respect to context (or background), assuming that literal meaning does not exist, and that only contextual determination permits us to avoid ambiguity. Other authors (see Bianchi 1999) have developed this radical contextualism to the extreme, tending to abolish the divide between typicality and variation. This point is really hard to swallow (see appendix 1). Studies on typicality suggest that there is a solid core in meaning: basic inferential competence is construed in typical situations together with basic referential competence.

A solid core of procedures lies behind the possibility of variation in the uses of words in contexts. This procedural aspect of meaning, together with the idea of typicality or default values, is also the heritage of the first period of artificial intelligence, with the procedural paradigm of toy worlds and the development of frame systems with default values. We may point out that even in early toy worlds, meanings as procedures were assured compositionality (see Penco 1999). Criticisms of the absence of compositionality in prototype theories (see Fodor 1998) do not extend to the procedural paradigm of meaning. Granting contextual restrictions to compositionality, we may accept a development of the concept of meaning as procedure in both the aspects pertaining to inferential and to referential competence (see Marconi 1997, who tries to avoid speaking of "meaning", and to explain the dual aspect of lexical competence). This solid core of procedural meaning can explain the relative invariance of the conceptual lexicon (invariance also across languages and cultures), which is a datum we cannot avoid; evidence supporting it can be found in many cases, for instance:

- the slots to be filled in a frame for most typical situations, and the referential competence connected with basic ontology.

- the linguistic rules associated with indexicals and with reports of indirect speech [if I report her speech, I will say "she…"; if you report my speech, you will say "he…", and so on; see the toy sketch after this list].

- the coherence of sets of words whose meanings are inter-definable or co-defined as a system, such as logical constants or words for colors.
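
By way of illustration only, the person-shifting rule mentioned in the second item might be rendered as a toy function (the function and its arguments are mine, not a piece of linguistic theory):

```python
# A toy rendering of the rule for pronoun choice in indirect speech reports:
# the reporter's relation to the original speaker, not the original words,
# fixes the pronoun.

def report_pronoun(reporter, original_speaker, gender):
    """Pronoun the reporter uses for the original speaker's 'I'."""
    if reporter == original_speaker:
        return "I"                      # reporting one's own speech
    return {"f": "she", "m": "he"}.get(gender, "they")

# Anna said "I am tired."
print(report_pronoun("Anna", "Anna", "f"))   # -> "I"   (Anna reports herself)
print(report_pronoun("Ben", "Anna", "f"))    # -> "she" (Ben reports Anna)
```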

A good test for choosing between accepting the relative autonomy of the conceptual lexicon and following local holism in a "radical" contextualist way is to compare two expressions belonging to two different contexts. Are we allowed to speak of identity of meaning across contexts? Here identity of meaning may signify identity of the inferences correlated with the expressions. We have an alternative:

(a) we decide to have some stable contexts, or definitional contexts, in which an expression takes its most basic meaning, given by typical inferences and recognitional abilities. Clark 1992 speaks of "introductory scripts", referring to contexts appropriate for introducing a new linguistic expression. These definitional contexts should be liftable into the different contexts in which the word is used. We might therefore speak of sameness of meaning when a word used in two different contexts belongs to a definitional context which has been lifted into the two contexts without much change in the default values.

(b) we decide to allow only compatibility relations between expressions. We might say that two expressions are compatible if they are intersubstitutable in the same contexts. But two expressions, even of the same type, can never have the same meaning if they are used in different contexts; in different contexts they will produce different inferences, and their inferential power, or meaning, will be different.

Following (a) ensures a safe path towards a theory of meaning which treats the "basic meaning" of an expression as a stereotypical representation which can be shared among contexts. The extent to which the values of the frame are shared gives the measure of similarity of meaning between the two concepts. The lexicon of a language might partly depend on the definitions in a "definitory" vocabulary, which should take into account partitions and levels of knowledge. We would still maintain the relevance of cognitive contexts for deciding the final interpretation of the meaning, but we would keep some starting points. On the other hand, if we follow (b), we may hold a radical local holism, and we have either to give up speaking of meaning, or to define meaning each time, relative to the context in which the expression is used. In this case it is not possible to define identity of meaning unless we have identity of context. We would have no definitory vocabulary (or vocabularies), but just a list of expressions to be interpreted each time in each context.
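
To fix ideas, here is a minimal sketch of option (a): lifting a definitional context into two working contexts, and measuring how far its default values survive. The functions, frame contents, and the 0-to-1 scale are my assumptions, offered only as an illustration, not a proposal from the literature:

```python
# A sketch of "lifting" a definitional context and of the similarity measure:
# similarity of meaning = the proportion of frame slots on which two
# contextual uses of a word still agree after lifting.

def lift(definitional, overrides):
    """Import a definitional context into a working context; local,
    defeasible revisions beat the lifted defaults."""
    lifted = dict(definitional)
    lifted.update(overrides)
    return lifted

def meaning_similarity(frame_a, frame_b):
    """Crude measure: fraction of slots on which two lifted frames agree."""
    slots = set(frame_a) | set(frame_b)
    shared = sum(1 for s in slots if frame_a.get(s) == frame_b.get(s))
    return shared / len(slots) if slots else 1.0

dish_def = {"used_for": "food", "fragile": True, "washable": True}

dish_in_restaurant = lift(dish_def, {})                  # defaults untouched
dish_at_picnic = lift(dish_def, {"fragile": False,       # paper dishes:
                                 "washable": False})     # two defaults revised

print(meaning_similarity(dish_in_restaurant, dish_at_picnic))  # 1/3 ≈ 0.33
```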

Following a radical form of local holism, we run the risk of missing something deeply embedded in our linguistic practice: the comparison of meanings, or of our sets of inferences and beliefs. The ability to say "you do not understand what I mean" or "you have got exactly the meaning of my words" relies on our social practice of converging on stable sets of typical inferences; their defeasible character is one of the peculiarities we have discovered in the past century. Defeasibility means the possibility of shifting context, but also the ability to recognize typicality and normativity.

A misunderstanding seems to run through the discussion of cognitive contexts: the not always clear distinction between kinds of context (definitory contexts, working contexts, procedural contexts, belief contexts, …). When we treat contexts as sets of beliefs of individual agents, the particular inferences drawn in each individual context will be unique to that context. Therefore we cannot think that individuals share conceptual contents understood as exactly the same sets of inferences; as I said before, no two persons could share that. But we all at least share basic rules for navigating across contexts; one of the most used is to pick up (import, lift) definitional contexts in different dialogues and actions. We may then change what is normally taken for granted, but the change comes after a prima facie assumption of typicality. A new field of study is open to us: the study of the interaction between rules "external" to contexts and what is defined "inside" contexts.

A working hypothesis emerging from different fields of research is that at least part of the definition of our conceptual machinery derives from the convergence of high-level rules among contexts, a convergence which represents the agreement of individuals regarding basic information and action. The overlapping of uses and the relative stability of definitional contexts are anchored to the common acceptance of relations of compatibility among contexts. The difference between an extreme and a weaker form of local holism depends on the weight given to this hypothesis: an extreme and radical form of local holism would suggest abandoning the concept of conceptual content and reconstructing it completely from these practices of convergence (Brandom 1994 seems to go in this direction); a weaker form would insist on the relative persistence of the conceptual structures which emerge in typical situations and are basic for our language learning and understanding. I have given some very programmatic remarks about the danger of following the radical form of local holism; this does not mean that the project does not deserve careful attention: any attempt to put it to work may well reveal new dimensions of the contextual dependence of meaning and understanding. Local holism, however, is not necessarily as radical as that.






