Biosemioticians, like the Systems Theorists before them, wonder why the scientists pay them no heed. The reason is that these philosophers expend most of their energy apologetically aping the scientists, rather than actually engaging them in serious argument.
What is the excuse for all these anti-rational materialists? It's all a part of the larger eschatological, teleological drama. The only real lesson is that if you don't take cosmic intelligence seriously, you will surely be its dupe. It also shows that intellectualism and common sense often get their wires crossed. There is no substitute for old-fashioned wisdom.
Another point is that the philosophers have spent most of the last century simply tidying up around the scientific ranch. This is normal philosophy. As they clean up the little problems, the bigger problems become more glaring, but, unfortunately, the cleanup crew does not have in its armamentarium the industrial-strength cleanser that is now needed. What is now required is vision on a cosmic scale. That can only come from whence all visions come: out of the telic blue. The only possible precedent is likely to be found in the prophetic lineage.

There is, however, some difference of opinion as to the date of the last such vision. If I'm in the right ballpark, the well-known incident in the seventh century was anachronistic in that it seems to have been mainly an updated recapitulation of the Old Testament lineage. The prior intervening incident was notoriously interrupted, due largely to its radical nature, and was, accordingly, self-proclaimed as teleologically incomplete. From what we can see now, its completion would logically have to wait for a postmodern venue. Am I being sufficiently diplomatic about this logistics problem? I trust that no one will take offence at this quasi-non-linear historicity.

Think of that other incident as having been an artistic intermezzo. The more recent 'mishap' might not otherwise have been possible, and, again with much due apology [biosemioticians not being the only ones skilled in this sometimes fine art], to say that this most recent piece of physical drama was world-class might be an understatement, in my Monday-morning estimation. Is the price of all this theater rather too high, you might well wonder? Well, consider then, if you will so kindly, the price of Creation. That Show goes on. [I can see room for ambiguity in the designation of events, but let that be; the more general point still stands.]
We might wonder whether there is any downward causation in the case of a computer running a specific program. There is the ample intentionality of all the users and programmers, but what beyond that?
I would say that there is nothing essential to the machine other than what we impute to it or design into it. I would be reluctant to say that of a biological organism. The ontogenesis of any organism or organ, and then its maintenance, surely involve downward causation, if anything does. So too would its proper continued functioning, at the very least within the holistic psychosomatic, immunological system of the organism, if we are speaking of a specific organ.
The last paragraph was not well conceived. Let me try something else, please.
The downward causation of interest is teleological. I have suggested before that Creation has proceeded from both the Alpha and the Omega, and primarily from the latter. The existence of our world is due mainly to its final cause, the Eschaton. The Alpha is a mere shadow of the Omega. In a similar fashion, matter is the shadow of mind. Final causation is most evident to us in the ontogenesis or ontogeny of biological organisms. So I would reverse the evolutionary dictum to say instead that phylogeny recapitulates ontogeny.
Ordinary efficient causality is the habituation of the mind. Mathematics is one means by which nature fills in the gaps between our visions and intentions, yielding physics. As we ascend the chain of being, final cause gradually replaces efficient cause. We use our artifacts to amplify the final cause, the Internet being but the latest example, by which means and through efforts like this, the Final Cause will be made the explicit source of our intentionality, thus further accelerating our historical denouement. As telephony gives way to telepathy in the final eschatological stage, teleological acceleration will be more than impressive: mind supplants matter, downward causation supplants all else.
Final cause is so hard to detect instrumentally because it is so pervasive in the organic realm. Removing it would be like removing the head from a body: all systems grind to a halt. Nay, all systems vanish. Matter is the shadow of mind. The organism is the shadow of its spirit or essence. God and spirit clone themselves organically.
What medicine accomplishes with chemicals is but a shadow of what will be accomplished with spirit. Between chemistry and spirit resides alchemy. A neo-alchemy will provide the historical continuity, as it already is in undetectable fashion.
Biosemiosis when freed of its materialist shackles will be the wave of the future. Emmeche & Co. have a reasonable fear of the unknown. To them the immaterial future is remote and faceless. Someone will give it a face and render it immanent. That presence is the second coming. As we anticipate we emulate and as we emulate we become. There is no other resolution.
Scientific instrumentation will not be instrumental in detecting the telos. That is the purpose of the mind, no, the telos is the essence of the mind. As matter shadows mind, so does mind shadow the Telos.
You see why Emmeche and the biosemioticians are having trouble. They peer into every corner for a sign of the Telos, when, in Truth, it is about to swallow them whole.
If we are ever to observe the vital force objectively, it will likely be by inference from measures of biomolecular efficiency, such as enzyme efficiency. Part of this putative excess molecular efficiency may be attributable to quantum computing effects. If performing a cellular function may be compared to computation, then we know that quantum superpositional effects can be employed to boost computational efficiency, and so one would expect to find similar boosting effects in vivo; admittedly, this sets aside the not inconsiderable decoherence problem.
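The sort of boost being gestured at can at least be made numerically concrete. A minimal sketch in Python, purely illustrative and no claim about in-vivo biochemistry: classical unstructured search over N items costs about N/2 queries on average, while Grover's quantum search costs about (pi/4)*sqrt(N). The function names and the choice of N are mine.

```python
import math

def classical_queries(n: int) -> float:
    # Expected oracle queries for classical unstructured search: n/2 on average.
    return n / 2

def grover_queries(n: int) -> int:
    # Standard Grover iteration count: floor((pi/4) * sqrt(n)).
    return math.floor((math.pi / 4) * math.sqrt(n))

n = 10**6
print(classical_queries(n))  # 500000.0
print(grover_queries(n))     # 785
```

A quadratic speedup, then: at a million items the quantum search needs fewer than a thousandth of the classical queries, which is the kind of margin that might, in principle, show up as 'excess' molecular efficiency.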
Quantum physics, you may say, is still just physics. Maybe yes, maybe no. Delayed choice and observational effects are supposed by many knowledgeable folk to overlap with mental processes. The quantum domain could well provide a loophole in the physics, sufficiently large to give vitalism and consciousness a foothold in the otherwise sterile Newtonian desert of classical physics. This gives rise to a quantum dualism, where the quantum loophole plays the connecting role between mind and matter that Descartes speculated might be played by the pineal gland. It was through this same loophole that I was able to carve out a quasi-rational path of migration from materialism to immaterialism. A quantum vitalism might well provide a similar stepping stone for our befuddled biosemioticians, presently suffering a failure of imagination, along with a subliminal stage fright.
For a quantum immaterialist, such as myself, the Cartesian duality between mind and matter is replaced by a dichotomy between quality and quantity. When one conceptually passes through the quantum loophole into the metaphysical realm of pure felt meaning, one looks back at the physical side and can see only abstractions, mainly of the mathematical kind, among which the Monster Group stands out like a Rock of Gibraltar. (If we are ever able to quantize gravity, it should then be possible to 'quantize' the Monster, and so render it more amenable to our subjectivity. Just a thought. And, speaking of which, here is a sight for sore eyes: Fotini Markopoulou Kalamara. You don't suppose she's related to Calamari? Bet she could turn you into one, though. If I were the Monster, I would be very afraid.)
Now where were we? Yes, there seems to be a major conceptual divide between Semiotics and Biosemiotics, between the imaginal, semaphoric reach of an Umberto and that of a Claus. How do we close that gap, without having to teach them, and everyone else, quantum gravity a la Kalamara? That is doable, but there must be an easier way. (I wonder if the semioticians could borrow any ideas from memetics.)
Here is another thought: Idealism suffers considerably from the Platonic Carburetor Syndrome (PCS), in which ideas are taken to be individually written in stone in some Platonic heaven, or taken to be like those windowless Leibnizian atomic monads. Let us get some relief by taking some memetics and Quinine water. Memetics focuses on the dynamic quality of the memetic manifold. Quine focuses on its holism. Unfortunately, the Quinine holism is too easily interpreted in a purely structural, formal, syntactic fashion by the AI types. With all that holistic form, where's the content? Are our minds just syntactical engines? I don't think so. Where does all that (illusory?!) felt meaning come from? If not from the atoms, it must come from the manifold. Was it Jim Maxwell who prematurely revived the Ether, soon to be shot down by Mike & Mo? Perhaps we need a Platonic memetic ether. The Hindus call it the Void or nirvana, the Great Emptiness. Tim & I call it the Big Bliss. Somehow and dynamically, the virtual atoms of meaning create themselves out of the Big Bliss. Yes, we concentrate the energy, like some atom smasher, and that concentrated gamma energy is Comptonically and dialectically converted to idea and anti-idea. Was it not said that the opposite of every great idea is another great idea? (Uh, oh, where is the anti-Creation? The Eschaton is our recombination?) We're still just talking a kind of Feynman-diagrammatic syntactics. For the electron to acquire content or mass, we must invoke the Higgs vacuum bosonic mechanism. (Am I beginning to sound like Alan? I swear, this is not a hoax. Please excuse me! My parents had to pay big bucks for all that physics, this is the least we could do.) Somewhere in the Rig Veda you are bound to find a Higgs reference. Please let me know when you do. In the meantime, I invoke it to explain how we smuggle semantics into our little syntactic noggins. (Any more questions? Just ask Dan, the Science Man, or, better yet, ask Fritjof.
And, besides, no one is paying me enough to be serious about all this stuff! Haven't I used up my nickel yet? Help, Lord!)
Is it not reasonable to suppose that the semantic manifold has a holographic and fractal structure? Our individual minds are like the repeats of the core pattern in the Mandelbrot Set, just chips off the old block. Most every pixel of that set contains a whole universe. The Mandelbrot still invokes a spatial manifold, even though it is complex-imaginary. Imagine an infinite dimensional, Hilbertian manifold, if you will.... I'm running out of mathematical metaphors. One does when confronting raw meaning. All of math is contained in three words: 'one', 'zero', 'and'.
Consider the Mandelbrot generating formula: z' = z^2 + C, where C belongs to the set M just when the iterates of z remain bounded ('small'). Something as basic as this could have been the Cosmic Logos: Thus Spake Zarathustra. What would 'C' and 'z' actually represent in such a case? Even God might not know for sure. She sure wouldn't be able to explain it in English. Maybe in Swahili, but not English. Then again, maybe 'C' stands for love in Swahili; that's Amore!
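For the curious, the membership test behind that formula fits in a few lines of Python. The escape radius of 2 is the standard divergence criterion; the iteration cap is an arbitrary choice of mine, since true membership would require infinitely many iterations.

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Iterate z -> z^2 + c from z = 0; c is (approximately) in M
    if |z| stays bounded. Once |z| exceeds 2 it must diverge, so
    that serves as the escape test."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0j))      # True: the origin never escapes
print(in_mandelbrot(1 + 0j))  # False: 0, 1, 2, 5, ... diverges
```

That so spare a rule yields such inexhaustible structure is, of course, the whole point of the metaphor.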
Back to downward causation, and looking at contributors to Claus Emmeche's book: Downward Causation:
Peder Voetmann Christiansen is a Quantum Semiotician. He is attempting to reconcile Peirce's logic with quantum physics.
Alvaro Moreno: Downward Causation at the Core of Living Organization:
In this paper we argue that biological systems cannot be explained only in terms of physical laws, but that their organization also depends on the action of informational records which control the construction of the organism's phenotypes. This information is shaped by natural selection through a collective and historical process. By controlling the lower level of molecular interactions, information acts as a kind of explicit formal cause which restructures matter according to a given pattern. As the construction of informational patterns is an open process, essentially independent of the dynamics of their material support, information exhibits a compositional capacity which, besides allowing open-ended evolution, constitutes the main difference between formal and physical causation.
Should a computer exhibit downward causation?
Alvaro has also contributed to a Special Issue of BioSystems: The Physics and Evolution of Symbols and Codes: Reflections on the Work of Howard Pattee.
The Special Issue on "The Quantum of Evolution" may also be of interest.
Teed Rockwell is someone to reckon with: A Defense of Emergent Downward Causation:
Troubles with Functionalism...
However, the view of scientific progress revealed by the New Wave reductionism of Bickle, Hooker, and Churchland seems to indicate exactly the opposite i.e. that belief in emergent causation is not only compatible with modern science, but may be the only thing that can save it from skepticism. If we believe that the entities described by the reduced discourse are genuinely real ontologically, and yet not reducible in the old tough sense produced by elimination or identity, an inevitable consequence of this is that the reduced and the reducer are to some degree ontologically independent of each other. The only way we could eliminate this ontological independence would be to say that these blurs are apparent, rather than real, and that everything that occurs at the macroscopic level is actually identical to something at the microscopic level, even if we don't know what it is. Given that the newest work in history and philosophy of science reveals that we have never found such identities, and probably never will, we therefore have a choice between accepting the partial ontological independence of each layer, or claiming that reality has an essential nature that science will probably never reveal. If we accept the latter alternative, however, we can no longer use scientific realism as a way of dismissing emergent causality.
Yes, I do believe that the functionalist view is essential to science, and is essentially irreducible to physics. Natural selection could not operate without functional properties to select. I'm still not sure how this argument applies to machines. And how do we distinguish the epistemic from the ontic? Cannot machines be adequately explained by classical mechanics? James Garson's reply to Teed raises these same questions. It is not easy to find a firm footing in all this scree. Yet, we must try.
Perhaps machines are not adequately explained by mechanics. This had never occurred to me. Mechanical engineering is not strictly a science, if you think about it. Consider the genius of invention. We would not need a Patent Office if design were a science, if it could be mechanized. But, given a machine, its functioning is purely mechanical. No? Well, this may not be so obvious. 'Functioning' is a peculiar word. Notice that another, necessary word has been omitted from the previous statement. I should, more precisely, have said: its proper functioning is purely mechanical(?). Now we have a clear contradiction. We have introduced normativity. Physics and normativity are logically disjoint concepts. Normativity is essential to the concept of a machine, but not to mechanics. Machines fail, mechanics doesn't. Physics or mechanics alone cannot describe the propriety of anything, and particularly not of a machine. So, what is going on here? How do we distinguish the semantics from the physics?
Machine = physics + function.
Otherwise it is a hunk of metal, or a doorstop, if you will. But does this mean that machines violate the laws of physics? No, but it does mean that they must transcend physics. Physics alone cannot describe or explain a machine. So where is the problem?
This conceptual problem is more serious with evolution and natural selection. Functionality emerges. [Realize that I am winging this....now and always.] An organism is a functioning collection of molecules. It is a molecular machine. Is this not also just: Chance + Necessity? Atoms in the Void? No, because that explains nothing, and the function of science is explanation. Physics is able to explain atoms and the void, and it does so in functional terms. Well, certainly in counter-factual terms. But, yes, with the quantum we must employ the concepts of measurement with all its explicit and implicit intentionality. But forget the quantum. Counter-factuality employs the concepts of events and causality, both of which transcend physics proper (sic). At the very least we are talking non-reductive naturalism, rather than reductionistic physicalism.
And don't forget all the abstractions, mathematics and realism that are employed in Physics. And where can we draw the line between Physics the discipline and the physics that is its subject? Where do we draw the line between the noumenal physics and the phenomenal Physics? That is precisely the issue of Metaphysics. As an idealist, I say there is no line. You disagree? Then show it to me!
All of the blatant, notorious metaphysical problems of quantum physics are already latent in the essence of classical physics and mechanics. It is just that there is no rug left down there, under which the problems may be swept.
This is nail number what, in the coffin of materialism? Are we not losing count? What is that unpleasant odor?
How do Teed and his correspondents resolve this mess?
Let me first say that in these discussions of ontology, much is made of the distinction between ontology and epistemology. This may be a valid distinction under some metaphysical suppositions, but it is not valid for idealism. In this latter case there is no dichotomy between the subjective and objective realms. I recognize that at present we see through a glass, darkly, but most of that obscurity is due to our confusion with materialism. As soon as we remove those blinders, the vista will open before us. With idealism we have direct realism. It is then a vision problem and not a knowledge problem. The notion of perception by representation is coming between us and our being truly and directly present in the world.
Back to Teed. I notice that his reply page for the discussion of his downward causation essay remains blank. I believe that this was also the case when I last checked several months ago. We move to consider the several responses listed.
As already noted, Jim Garson insists on a strong distinction between epistemological and ontological emergence, thereby foreclosing the discussion.
Robert Kane, like Jaegwon Kim, finds the concept of downward causation incoherent. The paradigm case of downward causation is telekinesis. Is that incoherent? If it is incoherent, why are most people able to grasp the idea? For a somewhat less controversial example, what about lucid dreaming? Is that incoherent? Let me not fail to mention the most common example of all: psychosomatics. Has psychosomatics never been cogently discussed in the medical profession? If I have correctly characterized Robert's concern, it is very reminiscent of Daniel Dennett claiming that the concept of consciousness is incoherent, or the logical positivists unilaterally declaring all metaphysical statements vacuous. As an immaterialist, I am sometimes tempted to declare all physical statements vacuous, but usually I manage to restrain myself. Just don't get me started on telephony! Such a non-response is a tried and true rhetorical device, or perhaps just tired. One more try, Robert: is Creation incoherent?
Am I being unfair here? Please, somebody, help me out! But this leaves us with U.T. Place, and poor Google thinks I'm looking for Salt Lake City. But anyway, Utah Pl. is critiquing some other paper of Teed's. Just a slight displacement of URL's, I suppose. There you have it, then, for the state of the known (to me) art of downward causation, which is arguably the issue that is now the most pertinent to everyone's future, given even the remote possibility of an eschatological denouement. It may be a long-shot, but very likely it will be our only shot.
Here's a thought: if concepts were reducible, then why have we not reduced any? May not the concept of 2 be reduced to 1 + 1? Not really. Not without additional information and constraints. Not without a considerable formal apparatus. Dictionaries perform word reductions only to a very limited degree. For instance, 'bachelor' and 'unmarried man' can, and usually do, carry very different nuances, differences that depend on context.
Cannot the complete thought expressed by a sentence be successfully decomposed into its individual words? The meaning is then lost. Each sentential composition of words might as well be its own ideogram. This ideogram can be decomposed into words with not much more success than a word can be decomposed into letters.
An AI person might argue that a sentence can be syntactically and grammatically reduced without any semantic residue, and then be reconstructed in another language. This is how machine translation is supposed to work.
But there are the problems of holism, context and levels of meaning. A given sentence could have vastly different meaning depending on its function in a particular story. And there is no upper limit to the hierarchy of levels. The same story could have very different meanings or functions in different cultures, and for each individual listener, as well.
Consider the irreducible nature of songs. If I were a better connoisseur of music, you could read me a list of say 100 musical compositions and I would be able, almost immediately, to attach some distinctly recognizable feeling or meaning to each one, almost as if each were a sort of ideogram. Are these musical ideograms decomposable? Not without loss of essence. There are notes and there is a tune. A tune can be decomposed into notes, but clearly there is a very complex relational pattern that constitutes a tune. And these are not self-contained patterns. The recognition of a tune or story entails the triggering of countless associational significances, without the essence of the tune devolving into incoherence. These associational meanings are not external or incidental to the essence of the song, but somehow contained therein.
A machine can perform the function of pattern 'recognition', in an apparently decomposable fashion. It could be programmed to accurately recognize excerpts of a hundred different tunes. It could be programmed to perform that function in many logically distinct ways. Would anyone thereby suppose that this artificial process had any essential similarity to the mental process? One would have to, at least, simulate an integrated human intelligence before we might reasonably entertain the notion of actual musical cognition.
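To make the decomposability point concrete, here is a minimal Python sketch of the sort of tune 'recognition' a machine can perform; the catalogue, tune names, and note numbers are invented for illustration. It matches pitch-interval patterns, which is precisely why it can succeed behaviorally while capturing nothing of felt musical meaning.

```python
# Represent a tune by its successive pitch differences, so recognition
# is transposition-invariant (the same tune in any key still matches).
def intervals(notes):
    return tuple(b - a for a, b in zip(notes, notes[1:]))

# Hypothetical catalogue: names and MIDI-style note numbers are made up.
CATALOGUE = {
    "tune_a": [60, 62, 64, 60],
    "tune_b": [67, 65, 64, 62, 60],
}

def recognize(excerpt):
    """Return the name of the first catalogued tune whose interval
    pattern contains the excerpt's interval pattern, else None."""
    pat = intervals(excerpt)
    for name, notes in CATALOGUE.items():
        ref = intervals(notes)
        if any(ref[i:i + len(pat)] == pat
               for i in range(len(ref) - len(pat) + 1)):
            return name
    return None

print(recognize([72, 74, 76, 72]))  # tune_a (transposed up an octave)
```

The procedure is fully decomposable into substring comparisons, and could be rewritten in many logically distinct ways with identical behavior, which is just the point being made above.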
But what would it mean to construct an integrated human intelligence? Would this mean to simply add more recognition modules, and then add more memory modules? There would then have to be a master recognition module, to somehow integrate and make sense of the outputs of the sub-modules. At what point and in what manner might this increase in the quantity of information be transformed into something that might resemble a natural coherence?
Does the lower level information not have to be patterned and sequenced in some particular manner at a higher level to make it coherent? But then we are just starting a second level of patterning. It would be like patterning sentences into a coherent story. In Turing Test fashion, the computer could take a musical input and convert it into a coherent and relevant verbal output, let us suppose. Have we thereby effectively reproduced an instance of human-like musical cognition? A behaviorist might be impressed, but what about the cognitivist?