Re: Knowledge, Memes and Sensory Perception

From: Keith Henson (hkhenson@cogeco.ca)
Date: Sun Jan 20 2002 - 20:01:30 GMT


    Received: by alpheratz.cpm.aca.mmu.ac.uk id UAA10212 (8.6.9/5.3[ref pg@gmsl.co.uk] for cpm.aca.mmu.ac.uk from fmb-majordomo@mmu.ac.uk); Sun, 20 Jan 2002 20:04:09 GMT
    Message-Id: <5.1.0.14.0.20020120144722.03524c40@pop.cogeco.ca>
    X-Sender: hkhenson@pop.cogeco.ca (Unverified)
    X-Mailer: QUALCOMM Windows Eudora Version 5.1
    Date: Sun, 20 Jan 2002 15:01:30 -0500
    To: memetics@mmu.ac.uk
    From: Keith Henson <hkhenson@cogeco.ca>
    Subject: Re: Knowledge, Memes and Sensory Perception
    In-Reply-To: <LAW2-F53UO6g7Yusjzp0000772e@hotmail.com>
    Content-Type: text/plain; charset="us-ascii"; format=flowed
    Sender: fmb-majordomo@mmu.ac.uk
    Precedence: bulk
    Reply-To: memetics@mmu.ac.uk
    

    At 07:05 AM 20/01/02 -0800, you wrote:
    >>Did you ever read Michael Gazzaniga's early work, in which he
    >>mentions something which he called the "confabulator" which performs
    >>exactly that function. It generates plausible explanations that we
    >>use to fill in the blanks when we don't know the answers. I haven't
    >>read any of his later stuff, maybe he has refined/renamed the concept.
    >>
    >>frankie

    I quoted the heart of Gazzaniga's excellent work on the "confabulator" in
    Memes Meta-Memes and Politics, an article I wrote in the late 80s. The
    rest of the article can be found here:

    http://www.evolutionzone.com/kulturezone/memetics/henson.memes.metamemes.and.politics.

    In some places, especially where evolutionary psychology can be applied,
    my thinking has moved on from this article, but most of it is still
    valid.

    Keith Henson

    ***********************

        Some mental agents are "wired in". The most obvious ones pull our
    hands back from hot things. Others are not so obvious, but one that has
    received considerable study is often called "the inference engine."
    Split-brain research has established that it is physically located in the
    left brain of most people, close to or overlapping the speech area. This
    module seems to be the source of inferences that organize the world into
    a consistent whole. The same hardware seems to judge externally presented
    memes for plausibility. This piece of mental hardware is, at the same
    time, the wellspring of advances and the source of vast error.

    ---------------------
    Footnote: *The new models even offer an explanation for that difficult
    problem, the origin of consciousness. Each agent is too simple to be
    conscious, but consciousness incidentally emerges as a property of the
    interconnections of these agents.
    --------------------------

    In Society of Mind, Marvin Minsky uses the analogy that
    consciousness emerges from non-conscious elements just as the property of
    confinement emerges from six properly arranged boards, none of which (by
    itself) has any property of confinement. (And you thought Ids and Egos
    were complicated.)

         Being able to infer, that is, to find new relations in the way the
    world is organized, and being able to learn inferences from others must
    rank among our most useful abilities. Unfortunately, the outputs of this
    piece of mental hardware are all too often of National Enquirer quality.
    Unless reined in by hard-to-learn mental skills, this part of our minds
    can lead us into disaster. Experiments detailing the kinds of serious
    errors this mental module makes can be found in Human Inference by
    Nisbett and Ross and in The Social Brain by Michael Gazzaniga.

        (Sidebar) *****************************************

    Gazzaniga demonstrated the activity of the inference engine module with
    some very clever experiments on split-brain patients. When the module
    fails, we can clearly see that it is doing the best it can with
    insufficient data.

         What Gazzaniga did was to present each side of the brain with a
    simple conceptual problem. The left side saw a picture of a claw, and the
    right side saw a picture of a snow scene. A variety of cards was placed
    in front of the patient, who was asked to pick the card which went with
    what he saw. The correct answer for the left hemisphere was a picture of
    a chicken. For the right half-brain it was a snow shovel.

         "After the two pictures are flashed to each half-brain, the subjects
         are required to point to the answers. A typical response is that of
         P.S., who pointed to the chicken with his right hand and the shovel
         with the left. After his response I asked him 'Paul, why did you do
         that?' Paul looked up and without a moment's hesitation said from
         his left hemisphere, 'Oh, that's easy. The chicken claw goes with
         the chicken and you need a shovel to clean out the chicken shed.'

         "Here was the left half-brain having to explain why the left hand was
         pointing to a shovel when the only picture it saw was a claw. The
         left brain is not privy to what the right brain saw because of the
         brain's disconnection. Yet the patient's own body was doing
         something. Why was it doing that? Why was the left hand pointing to
         the shovel? The left-brain's cognitive system needed a theory and
         instantly supplied one that made sense given the information it had
         on this particular task . . . ."

        The inference engine was a milestone in our evolution. It works far
    more often than it fails. But as you can see from the example, the
    inference engine will wring blood from a stone; you can count on its
    finding causal relations whether they exist or not. Worse yet, the
    inference engine probably can't detect when it doesn't have enough data.
    Even if it could, it has no way to tell that to the verbal (conscious)
    self.

    (end sidebar) *********************************************

         There are both genetic and memetic controls on the dangerous beliefs
    that arise in this module, though they don't always work. I can't point
    to genes for skepticism, but (provided it did not interfere too much with
    necessary learning) this characteristic would be of considerable survival
    advantage. Being entirely uncritical of the memes you are exposed to can
    be a fatal trait, or it can result in reduced (or no) fertility. The
    classic example of a genetically fatal belief is the Shaker religion,
    but intense involvement with a wide variety of memes (or derived social
    movements) statistically results in fewer children.

    [I can post the rest in 3-4 chunks if there are no objections]

    ===============================================================
    This was distributed via the memetics list associated with the
    Journal of Memetics - Evolutionary Models of Information Transmission
    For information about the journal and the list (e.g. unsubscribing)
    see: http://www.cpm.mmu.ac.uk/jom-emit


