In this talk, entitled "Textual past harder to excavate than human origins: known phylogenies, known sources, unknown mappings", I will present a series of philological and formal arguments to corroborate the thesis that reconstructing textual traditions from variant readings may be harder than reconstructing filiation between humans from genetic information.
I will also illustrate some formal obstacles to the determination of credible phylogenies of textual traditions, drawing on the results of an artificial text tradition developed as part of the Studia Stemmatologica project initiated by the University of Helsinki and supported by the Finnish Cultural Foundation.
Some more background follows:
Texts of various kinds have been handed down through history on almost every conceivable physical support. Already in Antiquity, manuscripts were manufactured, copied and disseminated over a considerable geographical area. The pillars of civilisation (the Homeric poems, the Hebrew and Greek Bible, the ancient philosophers, the rich treasury of medieval texts) have been handed down through a complex process of imperfect copying, producing various changes, inventions, corrections and occasional oddities at each stage of transmission.
Copying errors and sporadic physical damage to the manuscripts, in particular, have driven textual evolution. Identifying and classifying the wide range of such errors in manuscripts usually provides the textual scholar with invaluable forensic evidence. Corrections made by scribes, though, are a different kind of beast. The conscious changes made to an exemplar may offer additional evidence, but quite often (and this is the sad part of the story) such changes hamper the reconstruction of the early stages of the text. Scribes and early editors produced conjectures and bold inventions that permanently obliterated exploitable traces of textual filiation. Occasionally, scribes may have emended their exemplar by replacing doubtful readings with "fresh" but isolated readings from now-lost sources.
The road to philological hell being paved with good intentions, editors, copyists or censors may have corrected readings that they wrongly perceived as intentional corruptions, mechanical errors or poor language. Combining one or more sources, they may have aspired to produce a hybrid text judged more worthy of representing the original than any of the exemplars in their hands. The outcome of such a remix process, known as "contamination", is a hybrid text blending many kinds of variants. In massive and age-old textual traditions such as the Hebrew Bible or the New Testament, such "contaminated" portions of the tradition pose formidable challenges to the textual scientist endeavouring to reconstruct early stages of the text, or, worse, a prototype.
In many cases the total variation of old texts, say biblical texts, may be disconcertingly large, nurtured by thousands of manuscripts and allowing millions, if not billions, of combinations of possible genealogical links competing to represent a common ancestor: the elusive archetype. Having eliminated all obviously recent changes, the textual scholar may end up with a selection of readings and manuscripts, some of them marred by physical damage, as candidates for the best text. Sometimes a single witness or manuscript may contain all these readings; at other times the textual scientist may have to operate with an eclectic blend. Frequently there is no obvious candidate for the archetype, owing to extensive "inbreeding" and the loss of crucial witnesses. In such cases the many traces are reduced to isolated vestiges scattered across many material witnesses.
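The combinatorial scale of the problem can be conveyed with a back-of-the-envelope calculation. Assuming, purely for illustration, that every surviving witness is a leaf of a rooted binary stemma and that contamination is ignored, the number of candidate stemmata for n witnesses is the double factorial (2n - 3)!!; the witness counts below are arbitrary, not drawn from any particular tradition:

```python
from math import prod

def rooted_binary_trees(n):
    """Number of distinct rooted binary trees (candidate stemmata)
    with n labeled leaves, given by (2n - 3)!! = 1 * 3 * ... * (2n - 3)."""
    if n < 2:
        return 1
    return prod(range(1, 2 * n - 2, 2))

for n in (5, 10, 12):
    print(n, rooted_binary_trees(n))
# 5  -> 105
# 10 -> 34459425          (tens of millions)
# 12 -> 13749310575       (over thirteen billion)
```

With only a dozen witnesses the count already exceeds thirteen billion; real traditions with hundreds of witnesses, lost intermediaries and contamination are vastly worse.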
Textual scholars have been tempted to exploit systematically the assumed similarities between textual evolution and biological evolution. The adoption of a formal genealogical method has been thought to be directly comparable to the similarities assumed by some historical linguists between language evolution and gene evolution; already in 1871 Darwin assumed such a similarity. Adopting an approach closely related to the methods used by historical linguists to represent the genealogical relationships between diverse languages, textual scholars have also resorted to trees in order to display the genealogical relations between diverse manuscripts or readings, the elusive or attainable goal being to produce credible textual phylogenies.
This assumed model of textual evolution, however, is not unproblematic. The textual evolutionary paradigm borrowed from biological phylogenetics tends to reproduce an idealised model of genetic evolution: a dynamic, regular process driven by small incremental changes accumulating through time and across space, leading to the extinction of ancestral readings, the survival of some isolated branches, and the emergence of new readings. The observed processes of textual emendation, contamination and "repair" of damaged readings tend to be treated by phylogenetic algorithms as if they were regular incremental changes.
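The mismatch can be made concrete with a toy example. Pairwise distances between witnesses can be realised exactly by a tree with additive branch lengths only if the four-point condition holds: for any four witnesses, the two largest of the three pairwise distance sums must be equal. A contaminated witness that mixes readings from two branches breaks this condition. The readings below are invented for illustration, not taken from the Studia Stemmatologica data or any real tradition:

```python
def hamming(x, y):
    """Number of variant sites where two witnesses disagree."""
    return sum(a != b for a, b in zip(x, y))

# Hypothetical readings at 8 variant sites (0 = ancestral reading).
O = [0, 0, 0, 0, 0, 0, 0, 0]    # archetype
A = [1, 1, 1, 1, 0, 0, 0, 0]    # branch A innovates at sites 0-3
B = [0, 0, 0, 0, 1, 1, 1, 1]    # branch B innovates at sites 4-7
C = [1, 1, 0, 0, 1, 1, 0, 0]    # contaminated copy mixing A and B

# Four-point condition: the two largest of these three sums must match
# for the distances to fit any additively weighted tree.
sums = sorted([hamming(O, A) + hamming(B, C),
               hamming(O, B) + hamming(A, C),
               hamming(O, C) + hamming(A, B)])
print(sums)                  # [8, 8, 12]
print(sums[-1] == sums[-2])  # False: no additive tree fits these witnesses
```

Because C sits exactly halfway between the two branches, any tree must arbitrarily attach it to one of them, misrepresenting half of its readings; this is the kind of structure that algorithms assuming purely incremental change cannot express.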
The mere fact that textual traditions, unlike human language, are normally transmitted not through biological reproduction (children learning from their kin) but as a by-product of mental and institutional forces creates, in large and old traditions, a complexity that may be inadequately mapped by phylogenetic trees and algorithms that assume small incremental changes (e.g. an equivalent of the molecular clock).
Unlike human reproduction, where any individual is the irreversible product of a father and a mother and is by necessity younger than his or her parents, the evolution of textual traditions may mix past and present and warp the time-space continuum of the tradition in more exotic ways.