Instrumentalism, Semi-Realism and Causality: Comparing Dennett's view of the function of folk psychology with the Theory-Theory


Do Chess Computers have Intentional States?




Dennett is particularly fond of one sort of intentional system: chess computers. He notes that we routinely treat them as intentional systems, e.g. the computer believes its rook is in serious danger and therefore makes a retreating move. In Dennett's terms:


Deep Blue, like many other computers equipped with AI programs, is what I call an intentional system: its behaviour is predictable and explainable by attributing to it beliefs and desires -- "cognitive states" and "motivational states" -- and the rationality required to figure out what it ought to do in the light of those beliefs and desires.
At the same time, Dennett is convinced that there are no real internal workings corresponding to these cognitive states. But are there really no corresponding physical states? Deep Blue does not merely calculate moves to a certain depth; it also evaluates the resulting positions (though this evaluation is itself computed). And a rook being in danger of capture is certainly part of that evaluation. There is therefore a functionally discrete state in Deep Blue which can be interpreted as "retreat the rook, because my evaluation function assigns this position a heavy penalty (the rook is in danger)". And this state is causally very active.
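To make this talk of a "functionally discrete, causally active state" concrete, here is a minimal sketch in the spirit of a toy chess engine, not Deep Blue's actual code: a rook-in-danger flag exists as a definite intermediate value in the evaluation and thereby drives the choice of a retreating move. The board encoding, the rook_is_attacked helper and the penalty value are illustrative assumptions of mine, not features of Deep Blue.

```python
# Purely illustrative toy evaluation; the position encoding, helper names
# and penalty value are hypothetical, not Deep Blue's real machinery.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def rook_is_attacked(position):
    """Hypothetical test for whether our rook hangs in this position
    (stubbed out here as a stored flag)."""
    return position.get("rook_attacked", False)

def evaluate(position):
    """Sum up material, then apply a discrete penalty if our rook hangs.
    The rook_in_danger value is the functionally discrete internal state
    the text refers to: it exists as a definite intermediate result."""
    score = sum(PIECE_VALUES[p] for p in position.get("own_pieces", []))
    score -= sum(PIECE_VALUES[p] for p in position.get("enemy_pieces", []))
    rook_in_danger = rook_is_attacked(position)   # discrete internal state
    if rook_in_danger:
        score -= 4  # "many minus points": nearly the rook's whole value
    return score

def choose_move(candidate_moves):
    """Pick the move leading to the best-evaluated position; the
    rook-in-danger state is causally active in this choice."""
    return max(candidate_moves, key=lambda m: evaluate(m["resulting_position"]))

moves = [
    {"name": "Ra1 (retreat the rook)", "resulting_position":
        {"own_pieces": ["R", "P"], "enemy_pieces": ["P"], "rook_attacked": False}},
    {"name": "Rxb7 (grab a pawn, leave the rook hanging)", "resulting_position":
        {"own_pieces": ["R", "P"], "enemy_pieces": [], "rook_attacked": True}},
]
print(choose_move(moves)["name"])  # prints the retreating move
```

On this sketch, whether the program "retreats its rook" depends on whether the rook_in_danger value is set: a discrete, causally efficacious state of exactly the kind the paragraph above points to.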

Dennett would respond by saying that even if there is a functionally discrete, causally active state, it is not semantically interpretable. The issue at hand boils down to John Searle's Chinese Room thought experiment. Searle has argued that strong Artificial Intelligence cannot fulfil its task4, since computers are purely syntactic engines and are therefore unable to produce semantically contentful intentionality. Dennett himself attacked Searle's interpretation of the Chinese Room argument, and the Churchlands have mounted a serious attack using connectionist networks.5


The point I want to make is this: even if we cannot map every intentional state we attribute to Deep Blue onto a functional state, the conditional argument still holds. If strong AI succeeds, then Dennett's semi-realist view is incorrect, since there would then be intentional states that have causal power.
If this happens, the only way to secure Dennett's semi-realism is to claim that at least some intentional states need not correspond to physical or functional states.6 With this weaker assumption Dennett might still be right, but his theory would then be far less forceful than before.


