Part of the problem with using the concept of complexity is defining exactly what constitutes "the system," the entity that encompasses the complexity. We will argue that some delineations of systems are more natural than others for understanding how complex things, such as living organisms, arise.
'Dissipative structures' by Cosma Rohilla Shalizi: The following is a skeptical report on attempts to analyze or explain emergent structures.
Ilya Prigogine (NL) coined the phrase, as a name for the patterns which self-organize in far-from-equilibrium dissipative systems. He thinks they're unbelievably important, and says so at great length in his books. Some of us physicists believe him; some are skeptical; I am leaning towards skepticism.
And then there is the matter of [Prigogine's] scientific peers --- not the systems theorists and similar riff-raff, but the experts in thermodynamics and statistical mechanics and pattern formation. One of them (P. Hohenberg, co-author of the latest Reviews of Modern Physics survey of the state of the art in pattern formation) was willing to be quoted by Scientific American (May 1995, ``From Complexity to Perplexity'') to the effect that ``I don't know of a single phenomenon his theory has explained.''
This is extreme, but it becomes more plausible the more one looks into the actual experimental literature. For instance, chemical oscillations and waves are supposed to be particularly good Dissipative Structures; Prigogine and his collaborators have devoted hundreds if not thousands of pages to their analysis, with a special devotion to the Belousov-Zhabotinsky reagent, which is the classic chemical oscillator. Unfortunately, as Arthur Winfree points out (When Time Breaks Down, Princeton UP, 1987, pp. 189--90), ``the Belousov-Zhabotinsky reagent ... is perfectly stable in its uniform quiescence,'' but can be disturbed into oscillation and wave-formation. This is precisely what cannot be true, if the theory of Dissipative Structures is to apply, and Winfree accordingly judges that ``the first step [in understanding these phenomena], which no theorist would have anticipated, is to set aside the mathematical literature'' produced by a ``ponderous industry of theoretical elaboration''.
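Winfree's point, that the reagent is quiescent but excitable, can be illustrated with the FitzHugh-Nagumo equations, a standard caricature of excitable media (a sketch of mine with illustrative parameters, not fitted to the BZ chemistry): a small disturbance of the rest state dies away, while a slightly larger one triggers a full excursion before the system returns to quiescence.

```python
# FitzHugh-Nagumo model, a standard caricature of an excitable medium:
#   dv/dt = v - v^3/3 - w ;  dw/dt = eps*(v + a - b*w)
# With no forcing, the rest state is near (v, w) = (-1.199, -0.624) and stable.

def peak_after_kick(kick, steps=20000, dt=0.01, eps=0.08, a=0.7, b=0.8):
    """Kick v away from rest, integrate, and return the largest v reached."""
    v, w = -1.199 + kick, -0.624
    peak = v
    for _ in range(steps):
        dv = v - v**3 / 3 - w
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        peak = max(peak, v)
    return peak

small = peak_after_kick(0.1)   # sub-threshold: relaxes back to quiescence
large = peak_after_kick(1.0)   # supra-threshold: a full excursion before returning
print(small, large)
```

The medium is "perfectly stable in its uniform quiescence" yet responds to a large enough disturbance with a big, wave-sized excursion: excitability rather than spontaneous oscillation.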
Somewhat more diplomatic is Philip W. Anderson, one of the Old Turks of the Santa Fe Institute, and himself a Nobelist. I refer in particular to the very interesting paper he co-authored with Daniel L. Stein, ``Broken Symmetry, Emergent Properties, Dissipative Structures, Life: Are They Related?'', in F. Eugene Yates (ed.), Self-Organizing Systems: The Emergence of Order (NY: Plenum Press, 1987), pp. 445--457. The editor's abstract is as follows:
The authors compare symmetry-breaking in thermodynamic equilibrium systems (leading to phase change) and in systems far from equilibrium (leading to dissipative structures). They conclude that the only similarity between the two is their ability to lead to the emergent property of spatial variation from a homogeneous background. There is a well-developed theory for the equilibrium case involving the order parameter concept, which leads to a strong correlation of the order parameter over macroscopic distances in the broken symmetry phase (as exists, for example, in a ferromagnetic domain). This correlation endows the structure with a self-scaled stability, rigidity, autonomy or permanence. In contrast, the authors assert that there is no developed theory of dissipative structures (despite claims to the contrary) and that perhaps there are no stable dissipative structures at all! Symmetry-breaking effects such as vortices and convection cells in fluids --- effects that result from dynamic instability bifurcations --- are considered to be unstable and transitory, rather than stable dissipative structures.
Thus, the authors do not believe that speculation about dissipative structures and their broken symmetries can, at present, be relevant to questions of the origin and persistence of life.
Some quotes from the paper itself:
``Is there a theory of dissipative structures comparable to that of equilibrium structures, explaining the existence of new, stable properties and entities in such systems?''
Contrary to statements in a number of books and articles in this field, we believe that there is no such theory, and it even may be that there are no such structures as they are implied to exist by Prigogine, Haken, and their collaborators. What does exist in this field is rather different from Prigogine's speculations and is the subject of intense experimental and theoretical investigation at this time.... [p. 447]
Prigogine and his school have made a series of attempts to build an analogy between these [dissipative far-from-equilibrium systems which form patterns] and the Landau free energy and its dependence on the order parameter, which leads to the important properties of equilibrium broken symmetry systems. The attempt is to generalize the principle of minimum entropy production, which holds near equilibrium in steady-state dissipative systems, and to find some kind of dissipation function whose extremum determines the state. As far as we can see, in the few cases in which this idea can be given concrete meaning, it is simply incorrect. In any case, it is clearly out of context in relation to the observed chaotic behavior of real dissipative systems. [pp. 454--455]
'The Self-Made Tapestry: Pattern Formation in Nature' by Philip Ball: reviewed by Shalizi:
In particular, Thompson made a point of not invoking natural selection, indeed of leaving any kind of history out of the story. ``A snow-crystal is the same today as when the first snows fell'': so, too, the basic forces acting upon organisms, so why bring history into it? The early years of this century are littered with biologists with little use for natural selection; they are now almost all deservedly forgotten. Thompson owes his continuing influence to the fact that his alternative doesn't beg questions at every turn.
Since Thompson's day, then, there has been a tension in the study of morphogenesis between evolution and (other kinds of?) self-organization, and this is one of Ball's themes, though not the leading one. Partly it is an argument about logical and theoretical questions --- what is natural selection competent to explain? what features of organisms could not be modified by selection? to what extent is self-organization unavoidable? --- and partly it's about where the balance between self-organization and evolution lies in actually existing organisms.
The case for the self-organizers can be put very strongly, at least for multicellular organisms, for metazoans. These are not bloated sacks of protoplasm but (as the biologists say) ``differentiated'' --- there are different chemicals in different parts of the body.
So there has to be some particular differentiating influence. It cannot be the genes (on which natural selection acts), since genes only encode information about proteins, i.e. about what chemicals to make, not where to put them. So it would seem that differentiation, morphogenesis, must be due to some internal process, some reaction of the proteins and their associated chemicals which sorts out what goes where; but this is to say that there needs to be spontaneous pattern formation, that development must be self-organizing.
It may, admittedly, look like we're in trouble with some obvious facts, since this argument leaves genes and natural selection with no purchase at all on morphogenesis. But not even the most enthusiastic of the self-organizers, those with the least use for Darwin (e.g., Brian Goodwin), go that far. [...] The genes twiddle the knobs, so to speak, and then let self-organization do its voodoo.
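The kind of self-organization the theorists have proposed is, above all, Turing's reaction-diffusion mechanism. A minimal linear-stability calculation (Schnakenberg kinetics with illustrative parameters; my sketch, not drawn from Ball or Shalizi) shows what the claim amounts to: a chemically stable uniform state spontaneously grows a pattern at a preferred wavelength once the inhibitor diffuses faster than the activator.

```python
# Turing instability with Schnakenberg kinetics (illustrative parameters):
#   du/dt = a - u + u^2 v + Du * u_xx
#   dv/dt = b - u^2 v     + Dv * v_xx
# The uniform steady state is stable without diffusion but unstable
# to perturbations of a finite wavelength when Dv >> Du.

import numpy as np

a, b = 0.1, 0.9
Du, Dv = 1.0, 10.0
u0, v0 = a + b, b / (a + b) ** 2        # spatially uniform steady state

# Jacobian of the reaction terms at the steady state
fu = -1 + 2 * u0 * v0
fv = u0 ** 2
gu = -2 * u0 * v0
gv = -u0 ** 2

def growth_rate(q):
    """Largest Re(lambda) for perturbations ~ exp(i*q*x + lambda*t)."""
    J = np.array([[fu - Du * q**2, fv],
                  [gu, gv - Dv * q**2]])
    return max(np.linalg.eigvals(J).real)

qs = np.linspace(0.0, 2.0, 400)
rates = [growth_rate(q) for q in qs]
print(growth_rate(0.0))   # negative: uniform chemistry alone is stable
print(max(rates))         # positive at q != 0: a pattern grows spontaneously
```

The uniform state is stable against uniform perturbations (negative growth rate at q = 0) yet unstable at a finite wavenumber, which is exactly the "spontaneous pattern formation" the self-organizers' argument requires.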
This is a pretty convincing line of argument; at least, I'd like to think so, since it convinced me for years. No longer; let me try to say why with a fairly concrete example.
The experimental study of biological development is more than a hundred years old now, and advances in it fill scores of fat journals every month. In all this vast wealth of detail, there is not a single case where a kind of self-organization proposed by theorists has been confirmed (though there are a few likely-looking candidates), and many cases where self-organization is definitely known not to take place. As the old joke says, ``If it's slimy it's biology, if it stinks it's chemistry, and if it doesn't work it's physics.''
This doesn't mean, of course, that the whole exercise has been a waste of time, much less that there is no role for theory in developmental biology, that the current find-a-gene-sequence-it-and-move-on mania is the last word on the subject. Between the DNA and the extra fingers there are a whole host of biophysical problems from the shapes of molecules to the mechanical properties of muscles which we need to solve before our knowledge of morphogenesis will be reasonably complete. As a theoretical physicist interested in biology and anxious about long-term employment, I find this comforting.
Shalizi is a self-avowed reductionist working out of the Santa Fe Institute. His pessimistic outlook on physicalistic explanations of emergence, his own field, warrants consideration. He has not given up on reductionism, but he is not hopeful that anyone will find the formula for self-organization.
Evidently there are dynamically stable structures in the world, but very few of them show promise of physical explanation. Where does that leave the naturalists? It leaves them uncomfortably near to their vehemently disavowed vitalist forerunners, but still on the lookout for strange attractors (102,000 hits):
Complicity and the Brain: Dynamics in Attractor Space -- Peter Henningsen:
Section 2 builds on the book ``The Collapse of Chaos'' by Jack Cohen and Ian Stewart, who bring a refreshing new perspective to bear on the science of complexity. They complement the standard reductionist paradigm, which explains the behavior of a system through the interaction of its parts, with a view that takes into account the constraints imposed on a system by its interaction with other systems. Similar complementary views of the process of self-organization were developed by Ilya Prigogine, who pursued the bottom-up, reductionist approach, and Hermann Haken, who focused on how global attractor states constrain the possible dynamics of subsystems. Cohen and Stewart make a good case that the reductionist approach to complex systems is one-sided, and try to map out the territory from which a paradigm complementary to reductionism might emerge. They introduce some useful new terms, such as features and complicity, for which we give a short introduction here.
It appears that I will need to become familiar with Robert Rosen's work:
Robert Rosen: The Well-Posed Question and Its Answer - Why Are Organisms Different from Machines? by Donald C. Mikulecky
2.1.4 Final cause and anticipatory systems
With a little more imagination we can construct interesting descriptions of systems this way in which, as in the example of the house above, final cause carries a connotation of anticipation with it. This idea is highly developed by Rosen as a characteristic of complex systems. It is important for this discussion to note the way in which causes become mixed in a complex system, in distinction to the way they stand separate in simple mechanisms (Rosen, 1985, 1991). Final cause and anticipation are realized in the ability of complex systems to incorporate a model of their environment into their behavior. This allows them to anticipate future events and also to correct their behavior as new information sheds light on the anticipatory process. One simple example of such a system at the level of metabolic processes has a mechanistic realization that has been examined in some detail (Rosen, 1985, pp. 349-354; Mikulecky, 1993; Prideaux, 1995).
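Rosen's scheme, as Mikulecky describes it, can be caricatured in a few lines of code. The following toy is entirely my construction, not Rosen's formalism: a heater that carries an internal model of its environment, acts on the model's prediction rather than on the current reading, and revises the model as prediction errors arrive.

```python
class AnticipatoryHeater:
    """Toy anticipatory system: an internal model of the room's cooling
    rate drives action, and prediction errors correct the model."""

    def __init__(self, setpoint=18.0):
        self.setpoint = setpoint
        self.drift = 0.0          # believed cooling per step (the model)
        self.prev_temp = None
        self.prev_heat = 0.0

    def step(self, temp):
        if self.prev_temp is not None:
            # New information: revise the model of the environment,
            # discounting the effect of our own last action.
            observed = temp - self.prev_temp - self.prev_heat
            self.drift += 0.5 * (observed - self.drift)
        predicted = temp + self.drift            # anticipated next state
        heat = 1.0 if predicted < self.setpoint else 0.0
        self.prev_temp, self.prev_heat = temp, heat
        return heat

# Environment: the room loses 0.4 degrees per step; heating adds 1.0.
heater, temps = AnticipatoryHeater(), [20.0]
for _ in range(30):
    heat = heater.step(temps[-1])
    temps.append(temps[-1] - 0.4 + heat)
print(min(temps))
```

Because it acts on the predicted temperature, the heater switches on while the room is still at or above the setpoint, not after it has already become too cold; and because the model is corrected step by step, an initially wrong cooling estimate converges on the true one.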
Acknowledging final cause also means recognizing that future events can cause present behavior. In the case of final cause and anticipation, the causality flows backwards, so to speak. What would once have sounded like mysticism becomes perfectly reasonable in a dynamic system. The nature of causality introduces this new directionality in time in a way that the Newtonian Paradigm made impossible.
2.2.3 The [Metabolic, Repairing] system as a relational model of the organism
The history of relational models goes back to a seminal paper in which Nicholas Rashevsky made a radical change in his approach to living systems (Rashevsky, 1954). After pioneering, as far back as the 1930s, most of the mechanistic models we know about today, including reaction-diffusion systems and artificial neural networks, he took stock of what he had learned and realized that he was not any closer to understanding what living systems were all about. He then decided to take an entirely new direction. His goal was to keep the organization of the living system while basically throwing out the physics. His tool for this was topology.
[...] The concept of analytic models that do not reduce to synthetic models captures this formally. The task then is to formulate an analytic model of the organism that captures the organization even if it must sacrifice the physics. For this task, category theory is the method Rosen saw as capable of doing exactly what he wanted. He applied category theory to the (M,R) system to answer the question of why an organism is different from a machine.
More information on Rosen's ideas is available at Robert Rosen - Understanding Life and Physics, and at Anticipation.info.
Rosen, Moltmann, and the Anticipatory Paradigm -- DAVID C. COTTINGHAM (1990):
This article begins with a discussion of Robert Rosen's Anticipatory Systems, outlines the concept of biological modeling processes, and connects the notion of an anticipatory model with the notion of a psychological archetype. The Great Mother is given as an example. Rosen is cited on the distinction between teleonomy and teleology. Jurgen Moltmann's theology is referred to, in particular his idea that the universe is an anticipatory system. Telos is proposed as a unifying term. The paradigm is then applied to biblical hermeneutics, with typology seen as anticipatory progression: the raising of archetypes into a succession of new contexts. The conclusion ties the three approaches together.
On semiosis, Umwelt, and semiosphere -- Kalevi Kull:
A cognitive turn in biology can be foreseen very soon. At least, this is an impression readers of Jesper Hoffmeyer's (1996) book on an approach to biosemiotics may get. The term 'cognitive turn' in this context is taken from psychological thinking a couple of decades ago, when the prevailing behavioristic approach was to a great extent replaced by another model of research, allowing methods and criteria which would not be accepted by behaviorists as 'scientific'. Since then, developments in psychology have been very stormy, paradigm changes became a common thing in the science of mind. In biology, the situation has been much more stolid and unexciting. The sound achievements of molecular biology have met with little enthusiasm among true theoreticians. The Darwinian view, in its neo-Darwinian versions, dominates in universities all over the world. The proponents of the power of natural selection have developed its logical consequences in regard to society and ego (e.g. in sociobiology by E.O. Wilson, or gene-ethics by R. Dawkins), and this has cemented the Darwinian monolith. Opportunistic voices have been rare, and have mainly been restricted to continental Europe and Russia (e.g., the nomogenetic view of L. Berg, A. A. Lubischev, S. V. Meyen in Russia, and its parallels in the West - cf. Brauckmann, Kull 1997). However, only a few of these voices have been based on a belief in the methods developed in the humanities, which have been applied to the solution of biological problems via the epistemic renewal of methods.
We're waiting, too. A big question to my mind is when will the people in the consciousness movement take this 'cognitive turn' and realize that their compartmentalization of the 'hard problem' can only result in dualism.
Back to the lists: Robert Rosen & biological (580 hits):
Review of Incursive, Hyperincursive and Anticipatory Systems - Foundation of Anticipation in Electromagnetism by Daniel M. DUBOIS:
The main purpose of this paper is to show that anticipation is not only a property of biosystems but is also a fundamental property of physical systems. In electromagnetism, anticipation is related to the Lorentz transform. In this framework the anticipation is a strong anticipation, because it is not based on a prediction from a model of the physical system but is embedded in the system itself. So, Robert Rosen's anticipatory systems deal with weak anticipation. Contrary to Robert Rosen's affirmation, anticipation is thus not exclusively a characteristic of living systems. Finality is implicitly embedded in any system, and thus Aristotle's final cause is implicitly embedded in any physical or biological system, contrary to what Robert Rosen argued. This paper will review some incursive and hyperincursive systems giving rise to strong anticipation. Space-time incursive parabolic systems show non-local properties. Hyperincursive crisp systems are related to catastrophe theory. Finally, it will be shown that incursive and hyperincursive anticipatory systems could model properties of biosystems such as free will, game strategy, theorem creation, etc. Anticipation is related not only to predictions but to decisions: hyperincursive systems create multiple choices, and a decision process selects one choice. So, anticipation is not a final goal, as in cybernetics and systems science, but a fundamental property of physical and biological systems.
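Dubois's "incursion" (inclusive recursion) means computing a state from values that include its own future. Here is a minimal sketch for the harmonic oscillator, my illustration of the general idea; for this particular system the incursive scheme coincides with semi-implicit Euler. The velocity update uses the position at t+dt that was just computed, and the resulting recursion stays stable where the ordinary forward recursion blows up.

```python
# Incursive vs. ordinary (recursive) discretization of x'' = -w^2 x.

import math

def forward_euler(steps=10000, dt=0.01, w2=1.0):
    """Ordinary recursion: both updates use values at time t."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * w2 * x
    return math.hypot(x, v)       # amplitude after integration

def incursive(steps=10000, dt=0.01, w2=1.0):
    """Incursive recursion: the v-update uses the *future* x(t+dt)."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        x = x + dt * v            # x(t+dt)
        v = v - dt * w2 * x       # uses x(t+dt): the incursion
    return math.hypot(x, v)

print(forward_euler())   # amplitude has grown: the recursion is unstable
print(incursive())       # amplitude stays near 1: the incursion is stable
```

The two loops differ by a single line, yet the forward recursion spirals outward (its amplitude grows by a factor sqrt(1 + dt^2) per step) while the incursive version remains bounded indefinitely.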
Is this a neo-reductionism or something more transcendental? It may be in the eyes of the beholder.
Robert Rosen also argued that anticipatory systems are characterized by finality and connected to Aristotle's fourth cause, final causation. So, scientists believe that the finality of anticipatory living systems deals mainly with the cognitive sciences, in relation to language, intention, and consciousness. I will show that final conditions are implicitly embedded in any mathematical model and are related to the Maupertuis least action principle in Newtonian mechanics and in relativistic quantum physics.
Yes, and shades of Leibniz. It would be amusing if mechanics were found to be irreducible in this fashion. Any formal system is, almost by definition, unnatural and unphysical to a significant degree. This has to do with the irreducibility of any semiotic system, including language, mathematics, and logic. This semiotic irreducibility is at the heart of the failure of Analytic Philosophy and Artificial Intelligence. It is synonymous with the irreducibility of reasoning. That mechanics is unnatural should not be a surprise. The bottom line, however, is that Nature must be unnatural in some very significant sense. How could this be? It pertains, at least in part, to the impossibility of distinguishing between the epistemic and the ontic, rather in the fashion of Quine [and here]. This despite Quine's compulsive posturing as a naturalist.
What Daniel is doing here is old news to physicists, as he readily acknowledges. The 'absorber theory' of radiation goes back at least to Wheeler and Feynman in the 1940s. Why has it taken this long to make the connections? One reason is that the Absorber Theory was always considered an embarrassment to Physics, because of its obvious 'unnaturalness'. The same is true, to a degree, of the least action principle. One could say that Leibniz went off that 'deep end'. This is all about the BPW, etc. The time for the rest of us to take the Leibnizian leap is drawing nigh.
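The 'unnaturalness' of the least action principle is easy to state concretely (the formulation below is the standard textbook one, not drawn from Dubois's paper): Hamilton's stationary-action principle, the modern descendant of Maupertuis's, poses mechanics as a two-point boundary value problem, with the trajectory determined by data at both ends of the time interval rather than by initial conditions alone.

```latex
\delta S \;=\; \delta \int_{t_0}^{t_1} L(q, \dot q, t)\, dt \;=\; 0,
\qquad q(t_0) = q_a, \quad q(t_1) = q_b .
```

Fixing $q(t_1)$ in advance is the formal sense in which a 'final condition' is already embedded in an ordinary mechanical model.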
Along these lines I recommend you peruse the following seminar outline: Phenomenological Foundations of Cognition, Language, and Computation organized by Terry Winograd.
Then check out Cybernetics & Human Knowing: A Journal of Second-Order Cybernetics, Autopoiesis & Cyber-Semiotics.
I know what you have said to be true. The motivation of the true founders of Topic Maps was clearly grounded in the social and cognitive realities of what should be, but is not, regarded as the nature of "ontology" by the IT standards folks.
The point with which I have so long suffered the TM community is that market forces distort the original intent of anything that properly involves an "epistemic" gap between one complex subject (a human) and the computer world. We forget about the Nash equilibrium theorem, and a whole host of other "stratification" considerations. We forget about Wittgenstein and Whorf. We forget about Robert Rosen and Roger Penrose. We forget about all of cognitive science except that fringe which kowtows to the IT influence lobby at NSF/DARPA and to the strong AI Dream (that a machine can think, and that a machine is the proper model for studying biological intelligence). Write the AI Dream into your NSF proposal and get funded. But flip this, and write that IT should more deeply adopt the biological model of intelligence, and you are wasting your grant-writing time. Yes?
Tell me about it!!
The notion of a non-addressable subject is the notion that the problem of ontology is not reducible to what exists in the machines, yes? The insistence in no way changes the reality that the topic map was supposed to have the nature of a mental event. Late binding of scope is the core technical issue that cannot, in theory, be resolved. Wow, an unsolvable problem. And one that we can define a cottage industry around!!!
The work on visual abstraction can be viewed at http://www.ontologystream.com/SLIP/index3.htm, where we have free software for research purposes. The export of formative ontology into a Topic Map with HyTime rendering of actionable affordance is reasonable, but it would take anyone who worked on it away from the demands of these powerful social forces (such as the CIA representatives, etc.) intent on reductionism. Someone needs to hit those folks over the head and tell them that their view of the world is a national security vulnerability. Other systems of ontology representation, in Russia for example, are specifically open as a conscious attempt to overcome the Western IT infrastructure in time of a great war. (This goes back to very large programs funded since the 1960s.) These systems have not been economically viable for reasons that have to do with which school of innovation gets funded by capitalists... so the deep struggle between capitalism and socialism is a root cause of the vulnerability.
Capitalism may lose on this one.
Nature does not and will not take sides on this one. Scientific and methodological reductionism linked to capitalism and then put in control of our democracy defines the vulnerability, and there is simply no relationship between Paul Prueitt and the fact that this vulnerability has existed and will continue to be a problem for the Nation.
Love that politics!! Somewhat less demagogically, this has more to do with Descartes than with Capitalism, per se. Capitalism is really just along for the ride. What the capitalists don't know is just about everything. They just work here. The only reductionism they can grasp is just the bottom line.
Ontology Based Document Understanding -- Paul S. Prueitt, PhD:
Differential ontology aids text summarization and generation systems, as well as text translation and situational modeling. The theory of process compartments, each compartment having its own ontology, provides a means to ground differential ontology in compartmentalized network dynamics. A mathematical framework based on weakly coupled oscillators illustrates the variety of structural outcomes from differential geometry. If an ontology is associated with a compartment, and multiple compartments are possible, then the theory of process compartments provides a means to understand why some concepts are easily translated while others might not be translatable without significant effort. However, the assumption that multiple compartments exist is not easily justified.
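The abstract does not spell out its oscillator framework; the standard minimal model of weakly coupled oscillators is Kuramoto's mean-field model, used here as an illustrative stand-in rather than Prueitt's own formalism. Varying the coupling strength yields qualitatively different "structural outcomes": incoherence below a critical coupling, a single synchronized compartment above it.

```python
# Kuramoto model of N weakly coupled phase oscillators:
#   d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
# The order parameter r = |mean of exp(i*theta)| measures coherence.

import cmath
import math
import random

def order_parameter(K, N=200, steps=2000, dt=0.05, seed=0):
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(N)]        # natural frequencies
    theta = [rng.uniform(0.0, 2 * math.pi) for _ in range(N)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / N      # mean field
        r, psi = abs(z), cmath.phase(z)
        # Mean-field form of the coupling: pull each phase toward psi.
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / N)

print(order_parameter(K=0.2))   # weak coupling: r stays small (incoherence)
print(order_parameter(K=4.0))   # strong coupling: r near 1 (one "compartment")
```

In this reading, a "compartment" would correspond to a synchronized cluster, and the ease or difficulty of translation between compartments to whether their dynamics can be brought into coherence.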