Re: Selection of scientific theories - metascientific experiment

From: joedees@bellsouth.net
Date: Fri May 04 2001 - 22:31:56 BST


    On 3 May 2001, at 9:52, Metascience wrote:

    I recommend that you obtain, as a reference, Thomas S. Kuhn's
    most recent (posthumously published) work, THE ROAD SINCE
    STRUCTURE, which has much to say on this issue.
    >
    >
    > Selection of scientific theories - Proposal for a metascientific
    > research project.
    >
    >
    > It is generally acknowledged that the progress of scientific knowledge
    > is a selection process. Good theories are preserved, while bad
    > theories are rejected. This fits perfectly into a memetics framework.
    >
    > I want to study the selection process that controls the development of
    > the soft sciences, and I am writing this message to solicit your
    > advice on how to carry out this research, your comments on the
    > methodology I am proposing, and possibly your participation and help.
    >
    > The hypothesis I want to test is as follows:
    >
    > In the hard sciences, theories are tested by means of experiments in a
    > well-defined manner. There is almost universal agreement on the
    > criteria for selecting the best theories: logic, experiment,
    > reproducibility, verification, falsification.
    >
    > In the softer sciences, however, experiments are difficult to carry
    > out and difficult to interpret. The softer the science, the more
    > difficult it is to make rigorous tests because of the complexity and
    > fuzziness of the phenomena. In sciences like psychology and sociology,
    > researchers may refrain from testing their theories through
    > experiments for practical, economic, or ethical reasons, or because
    > they have not been trained to do so. But, obviously, theories within
    > these sciences are still being selected. The important question, then,
    > is: which selection criteria are controlling the development of the
    > soft sciences?
    >
    > Social and psychological phenomena are so complex that any simple
    > theory about cause and effect will have exceptions. Thus, causal or
    > mechanistic theories within these sciences are very vulnerable to
    > falsification. Opponents of a theory can always find an exception
    > that the theory can't explain. There are two possibilities for
    > dealing with such a problem: (1) refining the theory so that the
    > exception is accounted for, or (2) rejecting the theory completely.
    >
    > Now, my claim is that certain scientific communities are choosing
    > option (2) so often that most falsifiable theories are rejected. The
    > long-term outcome of this selection process is that most of the
    > theories that remain are non-falsifiable, and thus not scientific in
    > Popper's sense.
    >
    > I have met many sociologists who completely reject all mechanistic
    > cause-and-effect theories. What remains in their scientific universe
    > are definitions, interpretations, and holistic theories - nothing
    > falsifiable. Paradoxically, they are still paying lip service to
    > Popper's criterion of falsifiability. (The holistic theories say that
    > every phenomenon has an infinite number of causes. Any claim that a
    > certain observation falsifies the theory can always be rejected by
    > saying that some causal factor has not been accounted for.)
    >
    > Other selection criteria that control the development of the soft
    > sciences are: psychological appeal, politics, ideology, funding,
    > tradition, authority, prestige, and sophisticated terminology. Thus, a
    > new theory is most likely to be accepted if it appeals emotionally to
    > the referees, if it supports prevailing political ideologies, if it is
    > easy to obtain funding for more research to support the theory, if it
    > is not too far from existing paradigms, if it is supported by
    > reference to the 'big thinkers' who are regarded as authorities within
    > the research tradition, and if the author has a high position and is
    > good at boosting his prestige by mastering a sophisticated vocabulary.
    >
    > These claims deal a hard blow to many research traditions. In fact,
    > their consequences for the soft sciences are so far-reaching that
    > they have to be tested more rigorously than the research they
    > criticize.
    >
    > Therefore, I want to discuss possible ways to test my claims about
    > selection criteria. I can think of the following methods:
    >
    > 1. Study published articles within the research tradition under
    > scrutiny. The advantage of this method is that it is a natural
    > experiment without interference from the experimenter. The
    > disadvantage is that it doesn't reveal which articles have been
    > rejected.
    >
    > 2. Ask journal editors for copies of all articles that have been
    > rejected within a certain time period as well as all referee reports.
    > This may be quite a reliable method if the editors cooperate, but
    > most of the rejected articles will probably turn out to have been
    > rejected for good reasons. Finding the original paradigm-breaking
    > contributions that have been rejected will probably be like finding a
    > needle in a haystack.
    >
    > 3. Interview referees and editors. While some referees may admit
    > to ideological bias, few will be able to recognize their own
    > susceptibility to emotional appeal, and probably nobody will admit
    > that they don't support the criterion of falsifiability.
    >
    > 4. Send articles to a number of scientists together with a
    > questionnaire asking how they would judge these articles if they were
    > referees for a journal in their field. The questionnaire could ask
    > the relevant questions to elucidate which criteria influence the
    > evaluation of the articles. If the return rate is high enough, this
    > experiment could yield sufficient data for a statistical analysis
    > (see the sketch after this list). There is still the bias, though,
    > that the scientists know that they are being monitored.
    >
    > 5. Get bad articles published. This experiment has already been
    > done excellently by physicist Alan Sokal, who got an article published
    > in a sociology journal under the title "Transgressing the Boundaries:
    > Toward a transformative hermeneutics of quantum gravity" (see ref.
    > below). The article is pure nonsense and parody, but it supports the
    > ideological agenda of the journal.
    >
    > 6. Get good articles rejected. This would be the ultimate test
    > of my claim that certain research traditions are rejecting
    > falsifiable theories. But there are big problems with such an
    > experiment: First, it would be very presumptuous to assume that we
    > could write an article so good and original that every journal ought
    > to accept it. Second, the article would necessarily deviate
    > significantly from the predominant paradigms and might be rejected
    > simply for that reason. Scientists who are deeply involved in an
    > existing paradigm are, according to Kuhn, very unlikely to accept a
    > new one. To circumvent this problem, we might make the article
    > resemble an existing paradigm as much as possible, present it as
    > interdisciplinary or as a new paradigm, or dress it up as an
    > improvement on and revival of an old, well-known paradigm that has
    > gone out of fashion. In any case, we would have to argue with the
    > editor and referees after a rejection in order to elicit all of their
    > arguments for rejecting the article.
    >
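    > As a minimal sketch of how the data from experiment 4 might be
    > analyzed (the criteria names, the ratings, and the library choice
    > here are purely illustrative assumptions, not a fixed part of the
    > method), one could correlate each referee's criterion ratings with
    > his or her accept/reject verdict, e.g. in Python:
    >
    >     import numpy as np
    >
    >     # Hypothetical criteria that each returned questionnaire rates 1-5.
    >     criteria = ["falsifiability", "emotional appeal", "ideology fit",
    >                 "paradigm fit", "author prestige"]
    >
    >     # One row per returned questionnaire: the referee's 1-5 ratings.
    >     ratings = np.array([[4, 2, 3, 4, 2],
    >                         [1, 5, 4, 2, 5],
    >                         [3, 4, 4, 3, 4],
    >                         [5, 1, 2, 5, 1],
    >                         [2, 4, 5, 3, 4]], dtype=float)
    >     # 1 = the referee would accept the article, 0 = reject.
    >     accepted = np.array([0, 1, 1, 0, 1], dtype=float)
    >
    >     # Correlate each criterion with the verdict; a strong correlation
    >     # for, say, "ideology fit" and a weak one for "falsifiability"
    >     # would support the hypothesis.
    >     for name, column in zip(criteria, ratings.T):
    >         r = np.corrcoef(column, accepted)[0, 1]
    >         print(f"{name:18s} r = {r:+.2f}")
    >
    > With a sufficiently large sample, the same data could of course be
    > fed into a proper regression model instead of simple correlations.
    >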
    > I already have a proposal for a sociology article which would be
    > suited for experiment 4 or 6. I can't reveal the details here because
    > some journal editor or referee might read this mailing list.
    >
    >
    > Anyway, this is a big research project that I can't do alone. I need
    > your suggestions and help.
    >
    >
    > Best regards
    >
    > Metascience
    >
    >
    > Literature references:
    > Popper, K. R.: Objective Knowledge: An Evolutionary Approach.
    > Oxford University Press, 1972.
    >
    > Kuhn, Thomas S.: The Structure of Scientific Revolutions.
    > University of Chicago Press, 1962.
    >
    > Sokal, Alan D.: Transgressing the Boundaries: Toward a Transformative
    > Hermeneutics of Quantum Gravity. Social Text 46/47, vol. 14, no. 1-2,
    > 1996, p. 217.
    >
    >
    >
    > =============================================
    > M. Schwartz, Ph.D.
    > Metascience@agner.org
    >

    ===============================================================
    This was distributed via the memetics list associated with the
    Journal of Memetics - Evolutionary Models of Information Transmission
    For information about the journal and the list (e.g. unsubscribing)
    see: http://www.cpm.mmu.ac.uk/jom-emit


