AN INTERVIEW ON MINIMALISM
Noam Chomsky,

with Adriana Belletti and Luigi Rizzi


University of Siena, Nov 8-9, 1999 (rev: March 16, 2000)
I. The roots of the Minimalist Program.
AB & LR: To start on a personal note, let us take the Pisa Lectures as a point of departure [1].

You have often characterized the approach that emerged from your Pisa seminars, 20 years ago, as a major change of direction in the history of our field. How would you characterize that shift today?


NC: Well, I don’t think it was clear at once, but in retrospect there was a period, of maybe 20 years preceding that, in which there had been an attempt to come to terms with a kind of paradox that emerged as soon as the first efforts were made to study the structure of language very seriously, with more or less rigorous rules, an effort to give a precise account of the infinite range of structures of language. The paradox was that in order to give an accurate descriptive account it seemed necessary to have a huge proliferation of rule systems of great variety, different rules for different grammatical constructions. For instance, relative clauses look different from interrogative clauses, and the VP in Hungarian is different from the NP, and they are all different from English; so the system exploded in complexity. On the other hand, at the same time, for the first time really, an effort was made to deal with what later came to be called the logical problem of language acquisition. Plainly, children acquiring this knowledge do not have that much data. In fact you can estimate the amount of data they have quite closely, and it’s very limited; still, somehow children are reaching these states of knowledge which apparently have great complexity, differentiation and diversity… and that can’t be. Each child is capable of acquiring any such state; children are not specially designed for one or the other, so it must be that the basic structure of language is essentially uniform and is coming from inside, not from outside. But in that case it appears to be inconsistent with the observed diversity and proliferation, so there is a kind of contradiction, or at least a tension, a strong tension between the effort to give a descriptively adequate account and the effort to account for the acquisition of the system, what has been called explanatory adequacy.

Already in the nineteen fifties it was clear that there was a problem, and there were many efforts to deal with it; the obvious way was to try to show that the diversity of rules is superficial, that you can find very general principles that all rules adhere to, and that if you abstract those principles from the rules and attribute them to the genetic endowment of the child, then the systems that remain look much simpler. That was the research strategy. It was begun around the nineteen sixties, when various conditions on rules were discovered; the idea is that if you can factor the rules into the universal conditions and a residue, then the residue is simpler and the child only has to acquire the residue. That went on for a long time, with efforts to reduce in this manner the variety and complexity of phrase structure grammars, of transformational grammars, and so on [2]. So, for example, X-bar theory was an attempt to show that phrase structure systems don’t have the variety and complexity they appear to have, because there is some general framework that they all fit into, and you only have to change some features of that general system to get the particular ones.

What happened at Pisa is that somehow all this work came together for the first time in the seminars, and a method arose for sort of cutting the Gordian knot completely: namely, eliminate rules and eliminate constructions altogether. So you don’t have complex rules for complex constructions, because there aren’t any rules and there aren’t any constructions. There is no such thing as the VP in Japanese or the relative clause in Hungarian. Rather, there are just extremely general principles, like “move anything anywhere” under the fixed conditions that were proposed, and then there are options that have to be fixed, parametric choices: so, the head of the construction first or last, null subject or not a null subject, and so on. Within this framework of fixed principles and options to be selected, the rules and the constructions disappear; they become artifacts.
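[Editorial note: a toy sketch, added here for concreteness, of the “fixed principles plus parametric choices” picture just described. Nothing below is a formalism from the interview or from the linguistic literature; the names (Phrase, linearize, head_initial) are invented for illustration. One ordering principle, with a single binary head parameter, yields an English-like verb-object order or a Japanese-like object-verb order; the apparent “constructions” are just its outputs.]

```python
# Toy illustration (editorial; hypothetical names): one universal principle
# plus one parameter, rather than language-particular construction rules.

from dataclasses import dataclass

@dataclass
class Phrase:
    head: str        # e.g. the verb of a verb phrase
    complement: str  # e.g. its object

def linearize(p: Phrase, head_initial: bool) -> str:
    """A single ordering principle; a binary parameter fixes the order."""
    if head_initial:
        return f"{p.head} {p.complement}"   # head-first: English-like VO order
    return f"{p.complement} {p.head}"       # head-last: Japanese-like OV order

vp = Phrase(head="read", complement="the book")
print(linearize(vp, head_initial=True))    # "read the book"
print(linearize(vp, head_initial=False))   # "the book read"
```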

There had been indications that there was something wrong with the whole notion of rule systems and constructions. For example, there was a long debate in the early years about constructions like, say, “John is expected to be intelligent”: is it a passive construction like “John was seen”, or is it a raising construction like “John seems to be intelligent”? And it had to be one or the other, because everything was a construction; but in fact they seemed to be the same thing. It was the kind of controversy where you know you are talking about the wrong thing, because it doesn’t seem to matter what you decide. Well, the right answer is that there aren’t any constructions anyway, no passive, no raising: there is just the option of dislocating something somewhere else under certain conditions, and in certain cases it gives you what is traditionally called the passive, and in other cases it gives you a question, and so on, but the grammatical constructions are left as artifacts. In a sense they are real; it is not that there are no relative clauses, but they are a kind of taxonomic artifact. They are like “terrestrial mammal” or something like that. “Terrestrial mammal” is a category, but it is not a biological category; it’s the interaction of several things. And that seems to be what the traditional constructions are like: VPs, relative clauses, and so on.



The whole history of the subject, for thousands of years, had been a history of rules and constructions, and transformational grammar in the early days, generative grammar, just took that over. So the early generative grammar had a very traditional flavor: there is a section on the passive in German, and another section on the VP in Japanese, and so on. It essentially took over the traditional framework, tried to make it precise, asked new questions, and so on. What happened in the Pisa discussions was that the whole framework was turned upside down. So, from that point of view, there is nothing left of the whole traditional approach to the structure of language, other than taxonomic artifacts. That’s a radical change, and it was a very liberating one. The principles that were suggested were of course wrong, the parametric choices were unclear, and so on, but the way of looking at things was totally different from anything that had come before, and it opened the way to an enormous explosion of research in all sorts of areas, typologically very varied. It initiated a period of great excitement in the field. In fact, I think it is fair to say that more has been learned about language in the last 20 years than in the preceding 2000 years.
AB & LR: At some point the intuition emerged, from much work within the Principles and Parameters approach, that economy considerations could have a larger role than previously assumed, and this ultimately gave rise to the Minimalist Program [3]. What stimulated the emergence of minimalist intuitions? Was this related to the systematic success, within the Principles and Parameters approach and also before it, of the research strategy of eliminating redundancies, making the principles progressively more abstract and general, searching for symmetries (for instance, in the theoretically driven typology of null elements), etc.?
NC: Actually all of these factors were relevant in the emergence of a principles and parameters approach. Note that it is not really a theory; it’s an approach, a framework that accelerated the search for redundancies that should be eliminated, and provided a sort of new platform from which to proceed, with much greater success, in fact. There had already been efforts, of course, to reduce the complexity, eliminate redundancies, and so on. This goes back very far; it’s a methodological commitment that everyone tries to pursue, and it accelerated with the principles and parameters (P&P) framework. However, there was also something different, shortly after this system began to crystallize in the early 80s. Even before the real explosion of descriptive and explanatory work, it began to become clear that it might be possible to ask new questions that hadn’t been asked before -- not just the straightforward methodological questions (can we make our theories better, can we eliminate redundancies, can we show that the principles are more general than we thought, develop more explanatory theories?), but also: is it possible that the system of language itself has a kind of optimal design? So, is language perfect? Back in the early 80s that was the way I started every course: “Let’s ask: could language be perfect?”, and then I went on for the rest of the semester trying to address the question, but it never worked; the system always became very complicated.

What happened by the early 90s is that somehow it began to work; enough was understood, something had happened. It was possible to ask the question in the first session of a course -- could language be perfect? -- and then get some results indicating that it doesn’t sound as crazy as you might think. Exactly why, I’m not so sure, but in the last 7 or 8 years I think there have been indications that the question can be asked seriously. There is always an intuition behind research, and maybe it’s off in the wrong direction, but my own judgment, for what it’s worth, is that enough has been shown to indicate that it’s probably not absurd, and maybe very advisable, to ask seriously whether language has a kind of optimal design.

But what does it mean for language to have an optimal design? The question itself was sharpened and various approaches have been taken to it from a number of different points of view.

There was a shift between two related but distinct questions. There is a kind of family similarity between the methodologically driven effort to improve the theories and the substantively driven effort to determine whether the object itself has a certain optimal design. For instance, suppose you try to develop a theory of an automobile that doesn’t work, with terrible design, which breaks down -- say, the old car you had in Amherst: if you wanted to develop a theory of that car, you would still try to make the theory as good as possible. I mean, you may have a terrible object, but still want to make the theory as good as possible. So there are really two separate questions, similar but separate. One is: let’s make our theories as good as we can, whatever the object is… a snowflake, your car in Amherst, whatever it may be. And the other question is: is there some sense in which the device is optimal? Is it the best possible solution to some set of conditions that it must satisfy? These are somewhat different questions, and there was a shift from the first question, which is always appropriate (let’s construct the best theory), to the second question: does the thing that we are studying have a certain kind of optimal character? That wasn’t clear at the time; most of these things become clear in retrospect. Maybe in doing research you only understand what you were doing LATER… first you do it, and later, if you are lucky, you understand what you were trying to do, and these questions become sort of clarified through time. Now you have reached a certain level of understanding; five years from now you’ll look at these things differently.


AB & LR: You have already addressed the next question, which is about the distinction between methodological minimalism and the substantive thesis; but let us go through the point, since you might want to add something. The Minimalist Program involves methodological assumptions which are by and large common to the method of post-Galilean natural sciences, what is sometimes called the Galilean style; even more generally, some such assumptions are common to human rational inquiry (Occam’s razor, minimizing apparatus, the search for symmetry and elegance, etc.). But on top of that, there seems to be a substantive thesis on the nature of natural languages. What is the substantive thesis? How are methodological and substantive minimalism related?
NC: Actually there is a lot to say about each of those topics. Take the phrase “Galilean style”. The phrase was used by the physicist Steven Weinberg, borrowed from Husserl, but not just with regard to the attempt to improve theories. He was referring to the fact that physicists “give a higher degree of reality” to the mathematical models of the universe that they construct than to “the ordinary world of sensation” [4]. What was striking about Galileo, and was considered very offensive at that time, was that he dismissed a lot of data; he was willing to say “Look, if the data refute the theory, the data are probably wrong”. And the data that he threw out were not minor. For example, he was defending the Copernican thesis, but he was unable to explain why bodies didn’t fly off the earth; if the earth is rotating, why isn’t everything flying off into space? Also, if you look through a Galilean telescope, you don’t really see the four moons of Jupiter, you see some horrible mess, and you have to be willing to be rather charitable to agree that you are seeing the four moons.

He was subjected to considerable criticism at that time, in a sort of data-oriented period, which happens to be our period for just about every field except the core natural sciences. We’re familiar with the same criticism in linguistics. I remember the first talk I gave at Harvard (just to bring in a personal example; Morris always remembers this). It was in the mid 1950s, I was a graduate student, and I was talking about something related to generative grammar. A prominent Harvard professor, Joshua Whatmough, a rather pompous character, got up and interrupted after 10 minutes or so: “How would you handle…”, and then he mentioned some obscure fact in Latin. I said I didn’t know and tried to go on, but we got diverted, and that’s what we talked about for the rest of the time. You know, that’s very typical, and that’s what science had to face in its early stages and still has to face.

But the Galilean style, what Steve Weinberg was referring to, is the recognition that it is the abstract systems that you are constructing that are really the truth; the array of phenomena is some distortion of the truth, because of too many factors, all sorts of things. And so it often makes good sense to disregard phenomena and search for principles that really seem to give some deep insight into why some of them are that way, recognizing that there are others that you can’t pay attention to. Physicists, for example, even today can’t explain in detail how water flows out of the faucet, or the structure of helium, or other things that seem too complicated. Physics is in a situation in which something like 90% of the matter in the universe is what is called dark matter -- it’s called dark because they don’t know what it is, they can’t find it, but it has to be there or the physical laws don’t work. So people happily go on with the assumption that we’re somehow missing 90% of the matter in the universe. That’s by now considered normal, but in Galileo’s time it was considered outrageous. And the Galilean style referred to that major change in the way of looking at the world: you’re trying to understand how it works, not just describe a lot of phenomena, and that’s quite a shift.

As for the shift towards concern for intelligibility and improvement in theories, it is in a certain sense post-Newtonian, as has been recognized by Newton scholars. Newton essentially showed that the world itself is not intelligible, at least in the sense that early modern science had hoped, and that the best you can do is to construct theories that are intelligible; but that’s quite different. So the world is not going to make sense to common-sense intuitions. There’s no sense to the fact that you can move your arm and shift the moon, let’s say. Unintelligible, but true. So, recognizing that the world itself is unintelligible, that our minds and the nature of the world are not that compatible, we go into different stages in science, stages in which you try to construct the best theories, intelligible theories. So that becomes another part of the “Galilean style”. These major shifts of perspective define the scientific revolution. They haven’t really been taken up in most areas of inquiry, but by now they’re a kind of second nature in physics and in chemistry.

Even in mathematics, the purest science there is, the “Galilean style” operated, in a striking way. So, for example, Newton and Leibniz discovered calculus, but it didn’t work precisely; there were contradictions. The philosopher Berkeley found them: he showed that in one line of a proof of Newton’s, zero was zero, and in another line of the proof, zero was something as small as you can imagine but not zero. There’s a difference, and it’s a fallacy of equivocation; you’re shifting the meaning of your terms, and the proofs don’t go through. And there were a lot of mistakes like that found. Actually, British and continental mathematicians took different paths (pretty much, not 100%, but largely). British mathematicians tried to overcome the problems and they couldn’t, so it was a sort of dead end, even though Newton had more or less invented the subject. Continental mathematicians disregarded the problems, and that is where classical analysis came from: Euler, Gauss, and so on. They just said “We’ll live with the problems and do the mathematics, and some day it will be figured out”, which is essentially Galileo’s attitude towards things flying off the earth. That’s pretty much what happened. During the first half of the 19th century Gauss, for example, was creating a good part of modern mathematics, but kind of intuitively, without a formalized theory, in fact with approaches that had internal contradictions.

There came a point when you just had to answer the questions; you couldn’t make further progress unless you did. Take the notion “limit”. We have an intuitive notion of limit: you get closer and closer to a point. When you study calculus in school you learn about infinitesimals, things that are arbitrarily small, but it doesn’t mean anything: nothing is arbitrarily small. There came a point in the history of mathematics when one simply couldn’t work any longer with these intuitive, contradictory notions. At that point it was cleaned up: the modern notion of limit was developed as a topological notion. That clears everything up, and now we understand it; but for a long period, in fact right through the classical period, the systems were informal and even contradictory. That’s to some extent even true of geometry. It was generally assumed that Euclid formalized geometry, but he didn’t, not in the modern sense of formalization; there were just too many gaps.
In fact geometry wasn’t really formalized until one hundred years ago, by David Hilbert, who provided the first formalization in the modern sense of the huge body of results that had been produced in semi-formal geometry. And the same is true right now: set theory, for example, is not really formalized for the working mathematician, who uses an intuitive set theory. And what’s true of mathematics is going to be true of everything. For theoretical chemists there is now an understanding that there’s a quantum-theoretic interpretation of what they are doing, but if you look at the texts, even advanced texts, they use inconsistent models for different purposes, because the world is just too complicated.
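[Editorial note: Berkeley’s “fallacy of equivocation” can be made concrete with the standard textbook reconstruction, using the derivative of x² as the example; the worked lines below are an added illustration, not part of the interview.]

```latex
% Assumes amsmath. Berkeley's complaint: the increment h is treated as
% nonzero in the first step (otherwise one cannot divide by it) and as
% zero in the last step (to obtain the answer).
\[
\frac{(x+h)^2 - x^2}{h} \;=\; \frac{2xh + h^2}{h} \;=\; 2x + h
\quad (h \neq 0), \qquad \text{then set } h = 0 \text{ to get } 2x.
\]
% The modern epsilon-delta notion of limit removes the equivocation:
% h is never set to zero, only required to be sufficiently small.
\[
\lim_{h \to 0} (2x + h) = 2x
\quad\text{means}\quad
\forall \varepsilon > 0 \;\exists \delta > 0 :\;
0 < |h| < \delta \implies |(2x + h) - 2x| < \varepsilon .
\]
```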

Well, all of this is part of what you might call the “Galilean style”: the dedication to finding understanding, not just coverage. Coverage of phenomena itself is insignificant, and in fact the kinds of data that, say, physicists use are extremely exotic. If you took a videotape of things happening outside the window, it would be of no interest to physical scientists. They are interested in what happens under the exotic conditions of highly contrived experiments, maybe something not even happening in nature, like superconductivity, which, apparently, isn’t even a phenomenon in nature. The recognition that that’s the way science ought to go if we want understanding, or the way that any kind of rational inquiry ought to go -- that was quite a big step, and it had many parts, like the Galilean move towards discarding recalcitrant phenomena if you’re achieving insight by doing so, the post-Newtonian concern for intelligibility of theories rather than of the world, and so on. That’s all part of the methodology of science. It’s not anything that anyone teaches; there’s no course in the methodology of physics at M.I.T. In fact, the only field that has methodology courses, to my knowledge, is psychology. If you take a psychology degree you take methodology courses; if you take a physics degree or a chemistry degree, you don’t. The methodology becomes part of your bones, or something like that. In fact, learning the sciences is similar to learning how to become a shoemaker: you work with a master artisan, and you sort of get the idea or you don’t. If you get the idea you can do it; if you don’t, you’re not a good shoemaker. But no one teaches how to do it; nobody would know how to teach how to do it.

OK, all that is on the methodological side. Then there is a totally separate question: what’s the nature of the object that we are studying? So, is cell division some horrible mess, or is it a process that follows very simple physical laws and requires no genetic instructions at all, because it’s just how the physics works? Do things break up into spheres to satisfy least-energy requirements? If that were true, it would be sort of perfect: a complicated biological process that goes the way it does because of fundamental physical laws. So, a beautiful process. On the other hand, we have the development of some organ -- a famous one is the human spine -- which is badly engineered, as everyone knows from personal experience; it’s a sort of bad job… maybe the best job that could be done under complicated circumstances, but not a good job. In fact, now that human technology has developed, you find ways of doing things that nature didn’t find; conversely, you can’t do things that nature did find. Take something as simple as the use of metals. We use metals all the time; nature doesn’t use them for the structure of organisms. And metals are very abundant on the Earth’s surface, but organisms aren’t built out of metals. Metals have very good constructional properties, which is why people use them; but for some reason, evolution couldn’t climb that hill. There are other similar cases. A case that really isn’t understood, and is just beginning to be studied, is the fact that the visual or photosensitive systems of all known organisms, from plants to mammals, access only a certain part of the sun’s energy, and in fact the richest part, infrared light, is not used by organisms. It’s a curious fact, because it would be highly adaptive to be able to use that energy, and human technology can do it (with infrared detectors); but, again, evolution didn’t find that path, and it’s an interesting question why. There are at the moment only speculations. One speculation is that there just isn’t any molecule around that would convert that part of the light spectrum into chemical energy; therefore evolution couldn’t hit on the molecule by accident, the way it did for what we call visible light. Maybe that’s the answer. But if that is the case, the eye is in some sense well designed and in other senses badly designed. There are plenty of other things like that. For example, the fact that you don’t have an eye at the back of your head is poor design; we would be way better off if we had one, so that if a saber-toothed tiger were coming after you, you could see it.

There are any number of questions of this kind: how well designed is the object? And no matter how well or badly, to answer that question you have to add something: designed for what? How well designed is the object for X? And the best possible answer is: let X be the elementary contingencies of the physical world, and let “best design” be just an automatic consequence of physical law, given those contingencies (so, for instance, you can’t go faster than the speed of light, and things like that).

A quite separate question is: given some organism, or entity, anything you are trying to study -- the solar system, a bee, whatever it may be -- how good a theory can I construct for it? And you try to construct the best theory you can, using the “Galilean-Newtonian style”, not being distracted by phenomena that seem to interfere with the explanatory force of a theory, recognizing that the world is not in accord with common sense intuition, and so on.

These are quite different tasks. The first one, asking how well designed the system is, is the new question in the Minimalist Program. Of course “design” is a metaphor; we know it’s not designed, nobody is confused about that. The Minimalist Program becomes a serious program when you can give a meaningful answer to the question: what is the X when you say “well designed for X”? If that can be answered, then we have, at least in principle, a meaningful question. Whether it is premature, whether you can study it, that’s a different matter. All of these things began to emerge after the P&P program had essentially cut the Gordian knot by overcoming the tension between the descriptive problem and the acquisition or explanatory problem; you then really had the first genuine framework for a theory in the history of the field.



The problems didn’t arise clearly until the 1950s, although the field had been going on for thousands of years. Until then there was no clear expression of the problem: on the one hand you had the problem of describing languages correctly; on the other hand, the problem of accounting for how anyone can learn any of them. As far as I am aware, that pair of questions was never counterposed before the 1950s. It became possible to do it then because of developments in the formal sciences, which clarified the notion of generative process, and so on. Once the basic questions were formulated, you had the tension, in fact the paradox. The Pisa seminars provided the first way of overcoming the paradox, and therefore gave an idea of what a genuine theory of language would be like. You must overcome the paradox. Then there is a framework, and a consequence of that is the rise of new questions, like the question of substantive optimality rather than only methodological optimality.

II. Perfection and imperfections.