Re: Why are human brains bigger?

From: Joe E. Dees (joedees@bellsouth.net)
Date: Thu May 25 2000 - 00:19:16 BST


    Received: by alpheratz.cpm.aca.mmu.ac.uk id AAA00349 (8.6.9/5.3[ref pg@gmsl.co.uk] for cpm.aca.mmu.ac.uk from fmb-majordomo@mmu.ac.uk); Thu, 25 May 2000 00:17:18 +0100
    Message-Id: <200005242315.TAA10204@mail5.lig.bellsouth.net>
    From: "Joe E. Dees" <joedees@bellsouth.net>
    To: memetics@mmu.ac.uk
    Date: Wed, 24 May 2000 18:19:16 -0500
    Content-type: text/plain; charset=US-ASCII
    Content-transfer-encoding: 7BIT
    Subject: Re: Why are human brains bigger?
    In-reply-to: <392C0FFF.F535046D@mediaone.net>
    X-mailer: Pegasus Mail for Win32 (v3.12b)
    Sender: fmb-majordomo@mmu.ac.uk
    Precedence: bulk
    Reply-To: memetics@mmu.ac.uk
    

    Date sent: Wed, 24 May 2000 18:23:11 +0100
    From: chuck <cpalson@mediaone.net>
    To: memetics@mmu.ac.uk
    Subject: Re: Why are human brains bigger?
    Send reply to: memetics@mmu.ac.uk

    >
    >
    > "Joe E. Dees" wrote:
    >
    > > Date sent: Wed, 24 May 2000 15:01:21 +0100
    > > From: chuck <cpalson@mediaone.net>
    > > To: memetics@mmu.ac.uk
    > > Subject: Re: Why are human brains bigger?
    > > Send reply to: memetics@mmu.ac.uk
    > >
    > > Chuck: Pinker uses the analogy of a computer when he suggests
    > > that consciousness is just a type of monitor. Any PC has these
    > > monitors. It's true that these monitors are not entirely
    > > accurate, because their very usage distorts the event being
    > > monitored, but they are nevertheless accurate enough in the
    > > competition for life's necessities. As one example, we know that
    > > one of the monitors in the upper cortex that monitors the
    > > probability of events, after the events have been processed
    > > somewhere in the lower brain, distorts the actual probabilities
    > > quite a bit -- which is why only people with split brains can
    > > accurately estimate the probability of events. Nevertheless,
    > > given that the system needs a monitor, the positive outcome
    > > (rough estimation) evidently outweighs the negative (some
    > > error). PC monitors also have a certain amount of inaccuracy
    > > built in, but they are good enough for the purpose they were
    > > designed for.
    > >
    > > So what I am asking is: since computers have monitors, why not
    > > brains? How does Godel's theorem apply to the monitor function
    > > of a computer program? It predicts there would be some
    > > inaccuracies, but those might be quite tolerable if the
    > > advantages are sufficient.
    > >
    > > Joe: This goes to the problem of the necessity of postulating a
    > > "little guy/gal in our heads" whose job it would be to watch
    > > such a monitor. Of course, such a homunculus would require a
    > > smaller homunculus of its own, etc., and we fall into infinite
    > > regress. This is why Cartesian Dualism fails.
    >
    > No - I don't think the computational theory of mind has to assume
    > this. After all, a computer has no such little gremlins within
    > gremlins within... etc. I went inside my old one, busted
    > everything apart, wouldn't even go to sleep until the job was
    > finished for fear a gremlin would get away before I saw it; really
    > made a total mess of my living room. ***Not one gremlin*** :)
    >
    Yeah, but computers are not conscious either. We have to watch
    the computer metaphor; the brain was not modeled after a
    computer, and computers were not even modeled after the brain.
    In fact, the previous technological metaphor for cognition was
    the telephone system. We have to accept the human brain as sui
    generis, and grok that three-pound marvel on its own terms.
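
    To see what Pinker's monitor analogy above amounts to in code,
    here is a minimal Python sketch of a lossy readout; the class
    names, numbers, and noise model are purely illustrative
    assumptions, not anything from Pinker:

        # Toy sketch of "consciousness as monitor": a hidden process
        # does the real work, and a monitor reports a rough, slightly
        # distorted summary of its state -- never a perfect copy, but
        # accurate enough to act on.
        import random

        class HiddenProcess:
            """Stands in for the lower brain doing the actual work."""
            def __init__(self):
                # the value actually computed below awareness
                self.true_probability = 0.73

        class Monitor:
            """Stands in for the conscious report: a lossy readout."""
            def __init__(self, distortion=0.1):
                self.distortion = distortion

            def report(self, process):
                # Reporting itself adds noise, so the readout is only
                # roughly right: rough estimation with some error.
                noise = random.uniform(-self.distortion, self.distortion)
                return max(0.0, min(1.0, process.true_probability + noise))

        process = HiddenProcess()
        monitor = Monitor()
        estimate = monitor.report(process)
        # Distorted, but good enough to choose between alternatives.
        # Note that nothing in the sketch watches the printout; that
        # is the homunculus problem in miniature.
        print(f"true: {process.true_probability:.2f}, "
              f"reported: {estimate:.2f}")
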
    >
    > Seriously, if a computer can do it, why not a brain? As I
    > understand it, those who posit that such a homunculus is necessary
    > for certain functions are relying on a kind of verbal trick - that
    > something humanlike "behind it all" must be making the decisions;
    > and if so, where do these decisions come from in the first place,
    > etc. etc. But that is not what is meant by the computational
    > theory of mind. Rather, the parts in the piece, if you will, each
    > perform one extremely simple task, and all of them are evaluated
    > by, say, their relative strengths from 0 to whatever - or in some
    > cases by a simple on/off (which I understand is rare). Just as a
    > simple program on my NT workstation reports on the usage of my
    > processor, so does my consciousness report to me an internal
    > "thought" process.
    >
    But who is that "me" to whom the report is being issued? The
    proverbial and derided homunculus?
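
    Your picture of simple parts with graded strengths can also be
    made concrete; the functions and weights below are illustrative
    assumptions, not anyone's published model. In the sketch the
    "report" is never read by an inner observer - it is simply input
    to yet another unit - and whether that chain of units ever adds
    up to a "me" is exactly the question at issue:

        # Each part does one extremely simple job: combine graded
        # strengths (0 to whatever) into a single strength.
        def unit(inputs, weights):
            return sum(i * w for i, w in zip(inputs, weights))

        # The rarer on/off case: fire fully or not at all.
        def threshold_unit(inputs, weights, threshold=1.0):
            return 1.0 if unit(inputs, weights) >= threshold else 0.0

        # Lower-level units produce graded activations...
        sensory = [0.9, 0.2, 0.7]
        hidden = unit(sensory, [0.5, 1.0, 0.3])

        # ...and the "report" is consumed by another unit, not by a
        # little guy/gal watching a screen: units all the way down.
        decision = threshold_unit([hidden], [1.0], threshold=0.6)
        print(f"hidden strength: {hidden:.2f}, decision: {decision}")
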
    >
    > And look at the way we describe the process. We ask why the
    > computer isn't printing, and we answer that the computer "thinks"
    > it is a dot matrix when it's really a laser; and we "ask" the
    > computer what it is "thinking" by "bringing up" its "dialogue"
    > (consciousness?) box that "tells" itself what kind of printer it
    > "thinks" it is. Etc. etc. Those are appropriate metaphors because
    > they are based on information that is being sent around the
    > impulse passageways of the computer -- just like what is happening
    > to information in the brain, something we know from MRIs and such.
    > We don't use the word "consciousness" only because everything
    > seems conscious to the computer, although we do ask whether or not
    > it "knows".
    >
    We use such metaphors even when they misrepresent their
    objects because that's the way our language works - by metaphors
    ultimately grounded in perceptual and conceptual experience.
    Check out PHILOSOPHY IN THE FLESH by Lakoff & Johnson for an
    elucidation of this view, which is compatible with my particularly
    preferred strain of cognitive psychology, emergent materialism
    (the strain preferred by most others in the field as well, such
    as Damasio, Dennett, Gazzaniga and Edelman).
    >
    > That's a simplistic version, but nevertheless plausible once
    > ramped up to suitable complexity. After all, a computer can be
    > programmed to do the same thing by hooking it into an internal
    > and/or external set of stimuli. All one needs is a complex system
    > for the communication of information, made up of parts that can
    > change their physical characteristics according to the information
    > they receive. Have you read Dennett on this? Or Pinker?
    >
    The point is that we have no external program for our "software",
    only a DNA schematic for the creation of our "hardware", which
    must subsequently be "programmed" by virtue of the interaction
    between the mind and its environment; this allows individual
    consciousness to emerge and evolve. Have you heard of Hubert
    Dreyfus (WHAT COMPUTERS CAN'T DO)? He pointed out that
    computers could not become sentient because they lack mortal
    bodies subject to the vicissitudes of the environment, and that
    the consequences of our choices upon these bodies are what make
    our choices matter to us; this is the source of our meaning.
    Computers contain all this binarily coded data, but none of it
    MEANS anything to them.
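
    The hardware/software point lends itself to a minimal sketch as
    well; the class, the learning rule, and the numbers are
    illustrative assumptions only. The "schematic" fixes the wiring at
    construction, but what the system comes to do is shaped entirely
    by its subsequent interaction with the environment:

        class Organism:
            def __init__(self, n_inputs):
                # The "DNA schematic": the wiring is fixed here...
                self.weights = [0.0] * n_inputs  # ...but unprogrammed.

            def respond(self, stimulus):
                return sum(w * s
                           for w, s in zip(self.weights, stimulus))

            def experience(self, stimulus, outcome, rate=0.1):
                # "Programming" happens only through interaction: nudge
                # each connection in the direction that reduces error.
                error = outcome - self.respond(stimulus)
                self.weights = [w + rate * error * s
                                for w, s in zip(self.weights, stimulus)]

        organism = Organism(n_inputs=2)
        # Two organisms built from the same schematic will diverge
        # if their environments differ.
        for _ in range(100):
            organism.experience(stimulus=[1.0, 0.0], outcome=1.0)
        print(organism.respond([1.0, 0.0]))  # near 1.0, via experience
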
    >
    > If so, what do you think is wrong with this? If not, I can send you a
    > scanned copy.
    >
    I have their works. Connectionism is probably part of the solution,
    but it cannot be all of it, as connections without complex and
    dynamically evolving emergent patterns cannot be the source of
    anything resembling self-conscious awareness, signification, or the
    differential valuation of multiple alternatives necessary for the
    manifestation of free choice.

    ===============================================================
    This was distributed via the memetics list associated with the
    Journal of Memetics - Evolutionary Models of Information Transmission
    For information about the journal and the list (e.g. unsubscribing)
    see: http://www.cpm.mmu.ac.uk/jom-emit



    This archive was generated by hypermail 2b29 : Thu May 25 2000 - 00:17:53 BST