Received: by alpheratz.cpm.aca.mmu.ac.uk id NAA02283 (8.6.9/5.3[ref firstname.lastname@example.org] for cpm.aca.mmu.ac.uk from email@example.com); Tue, 8 May 2001 13:44:45 +0100
Message-ID: <2D1C159B783DD211808A006008062D3101745E68@inchna.stir.ac.uk>
From: Vincent Campbell <firstname.lastname@example.org>
To: "'email@example.com'" <firstname.lastname@example.org>
Subject: RE: Selection of scientific theories - metascientific experiment
Date: Tue, 8 May 2001 13:41:07 +0100
X-Mailer: Internet Mail Service (5.5.2650.21)
Content-Type: text/plain; charset="iso-8859-1"
Sender: email@example.com
Precedence: bulk
Reply-To: firstname.lastname@example.org
Here's that reference:
Mahoney, M. J. (1977) 'Publication prejudices: An experimental study of
confirmatory bias in the peer review system', Cognitive Therapy and Research,
Vol. 1, pp. 161-175.
It's in a book I picked up at the weekend called 'How We Know What Isn't So:
The fallibility of human reason in everyday life', by Thomas Gilovich (1991,
NY: Free Press), which is a very interesting read.
[It's not really relevant, but I saw this after persistent chastisement
from Robin regarding my knowledge of psychology. I was browsing in the
psychology section of my university's book shop and came across this title,
which addresses some of the things that I think are important in meme
dissemination. Equally irrelevant, but I do like titles like this. One
day, after Douglas Adams is dead, perhaps, I'd love to write that trilogy of
books 'Where God Went Wrong', 'Some More of God's Greatest Mistakes' and
'Well, That About Wraps It Up For God'... or have I said this already...]
> From: Metascience
> Reply To: email@example.com
> Sent: Saturday, May 5, 2001 8:25 am
> To: firstname.lastname@example.org
> Subject: RE: Selection of scientific theories - metascientific experiment
> At 03-05-01, Vincent Campbell wrote:
> >I think you raise interesting questions here, but I'm not so sure of the
> >ethics of your proposed tests of your hypotheses, particularly trying to
> >trick publishers into either publishing joke articles, or into not
> >publishing "good" articles. What would that really show apart from the
> >gullibility of particular journal staff?
> Experimenting with people always involves ethical problems. But sometimes
> it's the only way to gain knowledge. For example, every new medicine is
> tested on humans despite considerable risks, because the gains are higher
> than the costs.
> In psychology, new therapeutic methods are sometimes applied without any
> testing or systematic evaluation. This is certainly worse, in terms of
> ethics, because harmful therapies may be applied routinely when they
> haven't been tested. This was the case with memory recovery therapy.
> There were ethical problems with Sokal's experiment, but he defends it
> anyway. His fake article has certainly had more impact than any other
> article published in 'Social Text'.
> I am not going to repeat Sokal's joke, because it has already been done
> excellently. I want to do some of the other proposed studies.
> I don't see any ethical problems in submitting a good article to a journal
> and possibly getting it rejected. Neither do I see any problems in sending
> articles to a number of scientists together with a questionnaire.
> >Still, you'd need to have very clear
> >definitions of what counts as non-falsifiable and falsifiable research,
> >and I think you presume too much in this regard.
> I need help from experts in philosophy of science. But an experiment could
> still be made which avoids any articles that are borderline or dubious
> with respect to this classification.
> >Also, arguably -
> > <psychological appeal, politics, ideology, funding, tradition,
> > authority, prestige, and sophisticated terminology>
> >all of this can apply to the hard sciences also. To try and argue that
> >they're ok, and everyone else is problematic, and demonstrably so, seems
> >rather specious in fact.
> I am not claiming that the hard sciences are perfect. Probably, no science
> is free of irrelevant influence.
> The high pressure to 'publish or perish' has tempted many scientists to
> fake results.
> Research in medicine and technology is to an increasing degree being
> sponsored by commercial interests, and there have been cases where the
> sponsors attempt to suppress the publication of inconvenient results. Also,
> much of this research is going on inside industrial companies which publish
> very selectively, and mostly in the form of patents.
> There is also the problem that negative findings are seldom published.
> Imagine that twenty scientists are each doing their own experiment. They all
> run a statistical test on their results, and by mere chance one of them
> finds a result that is significant at the 5% level. This result
> gets published as significant, while the 19 negative results don't.
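The twenty-scientists argument above can be checked with a small simulation (a hypothetical sketch of my own, not part of the original exchange; the function name and parameters are invented for illustration):

```python
import random

random.seed(1)

def simulate_literature(n_labs=20, alpha=0.05, n_runs=10_000):
    """Each of n_labs tests a true null effect. Under the null the
    p-value is uniform on [0, 1], so each lab crosses the significance
    threshold with probability alpha by pure chance. If only significant
    results get published, the literature still fills with false
    positives. Returns the fraction of simulated twenty-lab
    'literatures' containing at least one published false positive."""
    literatures_with_hit = 0
    for _ in range(n_runs):
        significant = sum(random.random() < alpha for _ in range(n_labs))
        if significant > 0:
            literatures_with_hit += 1
    return literatures_with_hit / n_runs

print(simulate_literature())  # close to 1 - 0.95**20, i.e. about 0.64
```

So even with no real effect anywhere, roughly two out of three such twenty-lab fields would contain at least one "significant" published finding, which is exactly the selection effect the message describes.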
> Thus, there are many mechanisms that may distort science. I just want to
> start my study where I expect the distortion to be highest.
> The purpose of my experiment should be simply to find the selection
> criteria that control the development of a particular research tradition.
> >Of course, the fundamentally interesting point here, is that the methods
> >you are proposing are inherently social scientific ones (e.g. surveys and
> >interviews), and thus in a way will be subject to the same kinds of
> >problems that you want to interrogate.
> Surveys and interviews are perfectly valid scientific methods. I don't want
> to trash the social sciences, I want to improve them.
> M. Schwartz, Ph.D.
> This was distributed via the memetics list associated with the
> Journal of Memetics - Evolutionary Models of Information Transmission
> For information about the journal and the list (e.g. unsubscribing)
> see: http://www.cpm.mmu.ac.uk/jom-emit
This archive was generated by hypermail 2b29 : Tue May 08 2001 - 13:48:24 BST