
G Mink in NTS on textual stemmas



I read a rather interesting article in the last New Testament Studies
by Gerd Mink on how to devise a stemma which explains the relationship
between the extant MSS (or witnesses, anyway) of the NT.
While he didn't claim his method would necessarily be successful,
the worked examples on toy problems he displays at the end
of the article make it clear that he is describing an approach
actually being pursued at the INTF, based on the textual data Mink
printed up in "Text und Textwert der griechischen HSS des NTs"
and which the INTF holds (close to the chest) in machine-readable form.

Mink starts out by making some useful definitions and distinctions,
and when he introduces a little graph theory and enumerates his
assumptions my pulse quickens at the prospect of some real math.
The follow-through disappoints, though.

His approach ends up being iterative, but at least on the first pass
it looks like this --
1 Group the witnesses into families.
2 Construct a local stemma for each location in the text with variant readings,
    which assigns a priority order to the readings (as far as possible)
    based on internal evidence.  Readings with mixed attestation are
    split up to reflect the fact that identical readings could arise from
    different causes/sources.
3 Construct global stemmas which hypothesize a minimum number of
    non-extant intermediate witnesses and conform as well as possible
    to the local stemmas, recognizing the influence of mixture, of course.
    This stemma is to be the simplest possible explanation for the 
    textual facts.
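Read as an algorithm, the steps invite a sketch.  Here is a toy rendering
of the scoring idea behind step 3: check each ancestor-descendant edge of a
proposed global stemma against the local stemmas.  All witness names,
readings, and the cost function itself are invented for illustration;
Mink does not spell out a formula in the article.

```python
# Readings of three hypothetical witnesses at three variant locations.
readings = {
    "A": ["a", "a", "b"],
    "B": ["a", "b", "b"],
    "C": ["b", "b", "a"],
}

# Local stemmas: per location, the (prior, posterior) reading pairs
# established from internal evidence.
local_stemmas = [
    {("a", "b")},   # location 0: 'a' judged to give rise to 'b'
    {("a", "b")},   # location 1
    {("b", "a")},   # location 2: here 'b' is judged prior
]

def cost(global_stemma):
    """Count edges (ancestor, descendant) of a proposed global stemma
    that contradict a local stemma, i.e. where the descendant's
    reading is prior to the ancestor's."""
    bad = 0
    for ancestor, descendant in global_stemma:
        for location, pairs in enumerate(local_stemmas):
            anc = readings[ancestor][location]
            dec = readings[descendant][location]
            if (dec, anc) in pairs:   # transmission would run backwards
                bad += 1
    return bad

print(cost([("A", "B"), ("B", "C")]))   # 0: consistent with every local stemma
print(cost([("C", "B"), ("B", "A")]))   # 3: contradicts three local judgements
```

The real problem is then a search over all candidate stemmas (including
hypothesized non-extant intermediates) for one minimizing such a cost.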

On reflection, I suspect Mink wants to reduce the problem of stemma
construction to being a corollary of Occam's Razor.  It's rather common,
when people first learn about optimization methods, to get excited
by the very wide variety of problems that can be cast in the form of
an optimization.  Just define a cost function and minimize!
Later one discovers that this is often a mistake in practical terms.
Mixed continuous/discrete optimization problems (like this one) are very tough.
And one thinks of the researchers attempting to localize their
"Mitochondrial Eve", or the Russians delineating the descent of language
families in the misty past.  Sure, the computer chugs out a solution
and then everyone's excited --- until someone examines the nearly-optimal
solutions and finds that there are many different acceptable explanations
for the data.
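The worry about near-optimal solutions is easy to reproduce on a toy
example.  The data and the cost function below are invented: four short
"witnesses", every rooted tree over them scored by the summed Hamming
distance along its parent-child edges.

```python
from itertools import product

# Invented toy data: four short "witnesses", each one letter away
# from W1 and two letters away from each other.
texts = {"W1": "aaaa", "W2": "aaab", "W3": "aaba", "W4": "abaa"}
names = list(texts)

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def reaches_root(n, parent, root):
    """True if following parent links from n reaches root without a cycle."""
    seen = set()
    while n != root:
        if n in seen:          # cycle: not a tree
            return False
        seen.add(n)
        n = parent[n]
    return True

def all_rooted_trees():
    """Yield every rooted labeled tree as a dict child -> parent."""
    for root in names:
        others = [n for n in names if n != root]
        for parents in product(names, repeat=len(others)):
            parent = dict(zip(others, parents))
            if all(reaches_root(n, parent, root) for n in others):
                yield parent

costs = [sum(hamming(texts[c], texts[p]) for c, p in t.items())
         for t in all_rooted_trees()]
best = min(costs)
print(best)                                  # 3: the optimal cost
print(costs.count(best))                     # 4 trees attain it
print(sum(c <= best + 1 for c in costs))     # 28 of 64 come within 1 of it
```

Even on four witnesses, a single unit of tolerance in the cost multiplies
the four optimal trees into twenty-eight acceptable ones; real MS data is
far noisier.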

Mink could have benefited by cribbing from Hort's description of
stemmatics.  E.g., he neglects to characterize the individual
MSS as to their method of transcription, their reliability,
their favorite errors, etc.  He also fails to make explicit
what his rules of internal evidence are;  this is an area for
delicate judgements and balanced considerations.
While the warm-up of the paper talks in terms of probability
models for textual transmission, probability models
disappear from the picture once they have motivated
his cost function concept for evaluating proposed stemmas.
Perhaps he has no such model, or perhaps he does not want
to divulge his secret formula, even in anagram form.

Some aspects of his approach could be improved without disruption.
More information could be obtained for evaluating his global
stemmas if readings and Textzustaende (text-states) were judged not only
as Posterior or Prior, but also as Distant or Close.
His idea that the Koine-Bereich (the Koine domain) is a single entity
ought to be discarded.  And some weighting scheme ought to be
introduced that distinguishes between trivial fluctuations
and eye-catching, diagnostic variations.

Despite these shortcomings, I hope his effort develops into something real.


Vincent Broman,  code 572 Bayside                        Phone: +1 619 553 1641
Naval Command Control and Ocean Surveillance Center, RDT&E Div.
San Diego, CA  92152-6147,  USA                          Email: broman@nosc.mil