Message-ID: <392C0FFF.F535046D@mediaone.net>
Date: Wed, 24 May 2000 18:23:11 +0100
From: chuck <cpalson@mediaone.net>
To: memetics@mmu.ac.uk
Subject: Re: Why are human brains bigger?
References: <200005242138.RAA21744@mail4.lig.bellsouth.net>
Reply-To: memetics@mmu.ac.uk
"Joe E. Dees" wrote:
> Date sent: Wed, 24 May 2000 15:01:21 +0100
> From: chuck <cpalson@mediaone.net>
> To: memetics@mmu.ac.uk
> Subject: Re: Why are human brains bigger?
> Send reply to: memetics@mmu.ac.uk
>
> Chuck: Pinker uses the analogy of a computer when he suggests that
> consciousness is just a type of monitor. Any PC has these monitors. It's
> true that these monitors are not entirely accurate, because their very
> usage distorts the event being monitored, but they are nevertheless
> accurate enough in the competition for life's necessities. As one
> example, we know that one of the monitors in the upper cortex, which
> monitors the probability of events after the events have been processed
> somewhere in the lower brain, distorts the actual probabilities quite a
> bit -- which is why only people with split brains can accurately estimate
> the probability of events. Nevertheless, given that the system needs a
> monitor, the positive outcome (rough estimation) evidently outweighs the
> negative (some error). PC monitors also have a certain amount of
> inaccuracy built in, but they are good enough for the purpose they were
> designed for.
>
> So what I am asking is: since computers have monitors, why not brains?
> How does Godel's theorem apply to the monitor function of a computer
> program? It predicts there would be some inaccuracies, but that might be
> quite tolerable if the advantages are sufficient.
>
> Joe: This goes to the problem of the necessity of postulating a "little
> guy/gal in our heads" whose job it would be to watch such a monitor. Of
> course, such a homunculus would require a smaller homunculus of its own,
> etc., and we fall into infinite regress. This is why Cartesian Dualism
> fails.
No - I don't think the computational theory of mind has to assume this.
After all, a computer has no such little gremlins within gremlins within...
etc. I went inside my old one, busted everything apart, wouldn't even go to
sleep until the job was finished for fear a gremlin would get away before I
saw it; really made a total mess of my living room. ***Not one gremlin***
:)
Seriously, if a computer can do it, why not a brain? As I understand it,
those who posit that such a homunculus is necessary for certain functions
are relying on a kind of verbal trick - that something humanlike "behind it
all" must be making the decisions; and if so, where do those decisions come
from in the first place, etc., etc. But that is not what is meant by the
computational theory of mind. Rather, the parts in the piece, if you will,
are each performing one extremely simple task, and all of them are
evaluated by, say, their relative strengths from 0 to whatever - or in some
cases by a simple on/off (which I understand is rare). Just as a simple
program on my NT workstation reports on the usage of my processor, so does
my consciousness report to me an internal "thought" process.
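To make that concrete, here is a toy sketch in Python - my own illustration
for this list, not Pinker's or Dennett's actual model, and all the names are
made up. A handful of parts each do one simple thing, carrying a strength
from 0 upward, and a "monitor" reports only a rough (slightly noisy) summary
of them, the way a task-manager program reports processor usage:

    import random

    class Part:
        def __init__(self, name):
            self.name = name
            self.strength = 0.0          # activation, from 0 upward

        def update(self, signal):
            # each part performs one extremely simple task:
            # adjust its strength according to the signal it receives
            self.strength = max(0.0, self.strength + signal)

    def monitor(parts):
        # the "monitor" does not look inside the parts' work; it just
        # reports a coarse, slightly noisy summary of their strengths
        total = sum(p.strength for p in parts)
        noise = random.uniform(-0.05, 0.05) * total   # built-in inaccuracy
        report = {p.name: round(p.strength, 2) for p in parts}
        return report, round(total + noise, 2)

    parts = [Part("edge-detect"), Part("word-lookup"), Part("motor-plan")]
    for p in parts:
        p.update(random.uniform(0.0, 1.0))
    report, load = monitor(parts)
    print("monitor report:", report, "overall load ~", load)

No homunculus anywhere in it: the monitor is just one more simple part whose
task happens to be summarizing the others.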
And look at the way we describe the process. We ask why the computer isn't
printing, and we answer that the computer "thinks" it is a dot matrix when
it's really a laser, and we "ask" the computer what it is "thinking" by
"bringing up" its "dialogue" (consciousness?) box, which "tells" us what
kind of printer it "thinks" it is. Etc., etc. Those are appropriate
metaphors because they are based on information that is being sent around
the impulse passageways of the computer -- just like what is happening to
information in the brain, something we know from MRIs and such. We don't
use the word consciousness only because everything seems conscious to the
computer, although we do ask whether or not it "knows".
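The printer metaphor boils down to something very small - again a made-up
sketch, just to show what the "thinks" talk is doing: the machine's stored
record of its printer can differ from the device actually attached, and
"asking" the dialogue box only reads back that internal record, not the
world:

    configured_printer = "dot-matrix"   # what the computer "thinks" it has
    attached_printer = "laser"          # what is really plugged in

    def dialogue_box():
        # the "dialogue" reports the internal record, nothing more
        return "I think my printer is a " + configured_printer

    print(dialogue_box())
    if configured_printer != attached_printer:
        print("mismatch: jobs fail until the internal record is corrected")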
That's a simplistic version, but it is nevertheless plausible once ramped up
to suitable complexity. After all, a computer can be programmed to do the
same thing by hooking it into an internal and/or external set of stimuli.
All one needs is a complex system of communication of information made up of
parts that can change their physical characteristics according to the
information they receive - there is a toy sketch of that below. Have you
read Dennett on this? Or Pinker? If so, what do you think is wrong with
this? If not, I can send you a scanned copy.
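Here is the sketch of that last claim - again my own toy illustration, with
invented names, not anything from Dennett or Pinker. Nothing in it but parts
whose state changes according to the information they receive, some of it
arriving as an "external" stimulus and some passed along from other parts:

    class Unit:
        def __init__(self, name):
            self.name = name
            self.state = 0.0
            self.listeners = []          # downstream units

        def receive(self, info):
            # the only thing a unit does: change its state and pass word on
            self.state += info
            for nxt in self.listeners:
                nxt.receive(info * 0.5)  # signal weakens as it propagates

    sensor, relay, store = Unit("sensor"), Unit("relay"), Unit("store")
    sensor.listeners.append(relay)
    relay.listeners.append(store)

    sensor.receive(1.0)                  # an "external" stimulus arrives
    print({u.name: u.state for u in (sensor, relay, store)})
    # prints {'sensor': 1.0, 'relay': 0.5, 'store': 0.25}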
===============================================================
This was distributed via the memetics list associated with the
Journal of Memetics - Evolutionary Models of Information Transmission
For information about the journal and the list (e.g. unsubscribing)
see: http://www.cpm.mmu.ac.uk/jom-emit