Method needs Methodological context

Aaron Agassi
Mon, 5 Jul 1999 15:05:38 -0400


All manner of recursions have been found to be creeping into statistical
sampling Methods, such as to engender apparently self-fulfilling prophecy.
Sometimes, good data is even filtered out as error, because it does not
seem to fit. Economic theory, in particular, is under attack because its
achievement of internal consistency has run amok, unchecked by any
sufficient reality testing.
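A minimal sketch of that filtering problem (my own illustration, in Python, not from the original post; the mixture of populations and the 2-unit rejection threshold are invented for the example): an analyst who assumes a single population and discards anything that "does not seem to fit" will recover an estimate that merely echoes the assumption.

```python
import random

random.seed(0)

# True process: a mixture of two populations, one centered at 0
# and a smaller one centered at 5.
data = [random.gauss(0, 1) for _ in range(80)] + \
       [random.gauss(5, 1) for _ in range(20)]

# The analyst *assumes* a single population near 0, and rejects
# anything more than 2 units from the assumed mean as "error".
assumed_mean = 0.0
kept = [x for x in data if abs(x - assumed_mean) <= 2.0]

naive_estimate = sum(data) / len(data)     # sees both populations
filtered_estimate = sum(kept) / len(kept)  # "confirms" the assumption

print(f"all data:  mean = {naive_estimate:.2f}")
print(f"filtered:  mean = {filtered_estimate:.2f}")
```

The filtered mean lands back near the assumed value of 0, while the unfiltered mean reveals the second population: the Method has projected its own premise into the result.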

The projection of patterns, via the Method, into the results would be
self-fulfilling, and internally consistent. But if the assumptions are known,
and considered justified by some other evidence and hypothesis, then this is
all in order. Internal consistency becomes moot only in the face of
refutation from reality testing.

Yes, pattern projection out of the Method would reflect itself, in an
internally consistent manner. But what is the relevance of the question of
the causes of internal consistency in the first place? Aside from the sheer
curiosity of the question in and of itself, the implications are
Methodological and Epistemological. Internal consistency is only a mistake
when not in accordance with reality testing.

How, then, if at all, does pattern projection out of the Method, tending to
reflect itself, constitute a mistake? First of all, if the Method is a
mistake for some other reason. But not even then, strictly speaking. In
fact, it would be correct procedure from a mistaken premise: "garbage in,
garbage out". Another implicit reason why pattern projection out of the
Method, tending to reflect itself, might be a mistake would be if it is
arbitrary. An arbitrary notion, reinforcing itself, would be likewise
arbitrary. The results could only be correct by sheer lucky accident! But if
the premise is sound, then extrapolation from it might be productive.

The question remains: is reality testing really possible? For reality
testing remains the crux of the matter. If all perception is reduced to
projection, some argue that the cause is lost. But if it is allowed that
cognition is in any way shaped by existence in reality, then there may still
be hope.

In the case of statistical analysis of interference patterns in light, those
experiments were never intended to prove that light is a wave. That light
is a wave is the premise experimentally supported beforehand, "justified",
if you will, by previous experiments and observations. We don't even really
believe that light is a wave, because how can light be both wave and
particle at the same time? (Other experimental evidence consistently
supports that light is a particle. Which it can't be, either, and for the
same reason.) The statistical analysis of luminous interference patterns
seeks only to learn more about the wave-like properties and behaviors of
light. And that is, indeed, founded upon a raft of presumption.

Some part of the presumptions in statistics that are used in the analysis
of interference patterns, and other things, upsets Chris Lofting. I'm not
clear what aspect of statistics is problematical, according to Chris
Lofting, or why. But it has occurred to me that while he speaks of Method, I
am not clear that he treats Methodology. And so, I will:
And so I ask, hypothetically: if all other assumptions are taken to be
sound, is the statistical practice whereof Chris Lofting speaks really
automatically a deadly mistake which will endlessly spawn a mounting cascade
of more mistakes? Or is that simply a matter for reality testing and
refutation, just the same as ever? Does Chris Lofting's objection, whatever
it may be, precisely, really make any difference?

This was distributed via the memetics list associated with the
Journal of Memetics - Evolutionary Models of Information Transmission
For information about the journal and the list (e.g. unsubscribing)