Re: Why are human brains bigger?

From: chuck (cpalson@mediaone.net)
Date: Wed May 24 2000 - 15:01:21 BST

  • Next message: chuck: "Re: What is "useful"; what is "survival""

    Received: by alpheratz.cpm.aca.mmu.ac.uk id UAA28432 (8.6.9/5.3[ref pg@gmsl.co.uk] for cpm.aca.mmu.ac.uk from fmb-majordomo@mmu.ac.uk); Wed, 24 May 2000 20:03:37 +0100
    Message-ID: <392BE0B0.50A34152@mediaone.net>
    Date: Wed, 24 May 2000 15:01:21 +0100
    From: chuck <cpalson@mediaone.net>
    X-Mailer: Mozilla 4.72 [en] (WinNT; I)
    X-Accept-Language: en
    To: memetics@mmu.ac.uk
    Subject: Re: Why are human brains bigger?
    References: <200005240215.WAA19363@mail2.lig.bellsouth.net>
    Content-Type: text/plain; charset=us-ascii
    Content-Transfer-Encoding: 7bit
    Sender: fmb-majordomo@mmu.ac.uk
    Precedence: bulk
    Reply-To: memetics@mmu.ac.uk
    

    Joe -
    As I said in an earlier posting, I am reading the paper you posted, and I am finding it very
    interesting. For example, I had never seen the distinction of making tools that are mere
    extensions of self and making tools that make tools. I find that a convincing and useful
    distinction that might go a long way. I wish I had known about it before.

    But I am so far uncomfortable with the notion of applying Godel's results about logical
    systems to an understanding of consciousness. Pinker uses the analogy of a computer when he
    suggests that consciousness is just a type of monitor. Any PC has such monitors. It's true that
    these monitors are not entirely accurate, because their very usage distorts the event being
    monitored, but they are nevertheless accurate enough in the competition for life's necessities.
    As one example, we know that one of the monitors in the upper cortex, which tracks the
    probability of events after those events have been processed somewhere in the lower brain,
    distorts the actual probabilities quite a bit -- which is why only people with split brains can
    accurately estimate the probability of events. Nevertheless, given that the system needs a
    monitor, the positive outcome (rough estimation) evidently outweighs the negative (some error).
    PC monitors also have a certain amount of inaccuracy built in, but they are good enough for the
    purpose they were designed for.
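
    To make the monitor idea concrete, here is a toy sketch in Python. The names and numbers are
    mine, invented purely for illustration -- this is not Pinker's model, just the way I picture the
    trade-off: a readout of an internal process that is noisy and biased, yet still good enough to
    act on.

        import random

        # Toy illustration only -- the functions and numbers are invented for the example.

        def internal_probability():
            """The 'lower brain' quantity the monitor is trying to track."""
            return 0.7  # the true probability of some event

        def monitor():
            """The 'upper cortex' readout: distorted by the very act of monitoring."""
            distortion = random.gauss(0, 0.1)      # noise added by the readout itself
            estimate = internal_probability() + distortion
            return min(1.0, max(0.0, estimate))    # clamp to a legal probability

        def act():
            """A decision only needs the estimate to land on the right side of 0.5."""
            return "approach" if monitor() > 0.5 else "avoid"

        # The readings scatter around 0.7, yet the resulting action is almost always right.
        print([round(monitor(), 2) for _ in range(5)])
        print(act())

    The point is only the trade-off I described above: the readout is wrong in detail, but right
    often enough to be worth having.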

    So what I am asking is: since computers have monitors, why not brains? How does Godel's
    theorem apply to the monitor function of a computer program? It predicts there would be some
    inaccuracies, but those might be quite tolerable if the advantages are sufficient. (I sketch
    below, right after your synopsis of the theorem, how I currently picture that connection.)

    If this or any question is better left until after I read your entire article, then please
    indicate that. These are just some immediate thoughts - I'll be more complete later.

    "Joe E. Dees" wrote:

    > Date sent: Mon, 22 May 2000 21:49:01 +0100
    > From: chuck <cpalson@mediaone.net>
    > To: memetics@mmu.ac.uk
    > Subject: Re: Why are human brains bigger?
    > Send reply to: memetics@mmu.ac.uk
    >
    > >
    > >
    > > "Joe E. Dees" wrote:
    > >
    > > > Date sent: Mon, 22 May 2000 18:37:48 +0100
    > > > From: chuck <cpalson@mediaone.net>
    > > > To: memetics@mmu.ac.uk
    > > > Subject: Re: Why are human brains bigger?
    > > > Send reply to: memetics@mmu.ac.uk
    > > >
    > > > >
    > > > >
    > > > > "Joe E. Dees" wrote:
    > > > >
    > > > > > Date sent: Mon, 22 May 2000 12:30:41 +0100
    > > > > > From: chuck <cpalson@mediaone.net>
    > > > > > To: memetics@mmu.ac.uk
    > > > > > Subject: Re: Why are human brains bigger?
    > > > > > Send reply to: memetics@mmu.ac.uk
    > > > > >
    > >
    > > Thanks for your thoughtful reply. I have some comments on it below.
    > >
    > > >
    > > > > >
    > > > > > > In sum, I am arguing that there has to be a monitoring mechanism that compares
    > > > > > > and calculates our own individual interests and how those must be wedged somehow
    > > > > > > into cooperative activities.
    > > > > > >
    > > > > > But except for the higher apes (chimpanzees, bonobos, orangutans
    > > > > > and gorillas), only humans can pass the mirror test of self-
    > > > > > recognition (Social Cognition and the Acquisition of Self, Lewis and
    > > > > > Brooks-Gunn, 1972), where the subjects are placed around mirrors
    > > > > > until they are familiar with them, then a dab of red paint is placed
    > > > > > upon their noses, and they are shown their mirror reflections.
    > > > > > Lesser apes and other animals attend to the paint on the reflected
    > > > > > nose, treating the reflection as a conspecific (an other of their own
    > > > > > species), while adult apes, some human children past the age of
    > > > > > 15 months, and all (except mentally challenged) human children
    > > > > > past the age of 2 years reach for their own noses, demonstrating
    > > > > > their understanding that the reflection is a reflection of themselves;
    > > > > > a concept of self is necessary to such self-recognition. This test is
    > > > > > a perceptual one, and takes place under the radar screen of and
    > > > > > free from any interference from the semiotic constraints of human
    > > > > > or animal communication forms.
    > > > >
    > > > > Joe -
    > > > > I'm glad you brought this experiment up, because I have been thinking about it for the
    > > > > last few months. I must say that I so far take the side of Pinker on this; I don't
    > > > > think it necessarily shows anything about self consciousness.
    > > > >
    > > > I think that it conclusively demonstrates that the size/complexity
    > > > quotient of lesser mammalian brains does not breach the Godelian
    > > > barrier beyond which recursivity permits the emergence of self-
    > > > referentiality, hence self-consciousness, and that the brains of the
    > > > great apes and of humans do indeed surpass that threshold.
    > >
    > > All I can say is, "Sure, if you say so." The problem is, I don't know what godelian barrier
    > > or recursivity means, much less how they are relevant. Would you like to elaborate or at
    > > least give me a reference?
    > >
    > > Also, it does seem relevant here that the brains of social species are indeed larger - which
    > > would seem to go along with what both of us are saying.
    > >
    > Godel's Incompleteness Theorem is perhaps the most significant
    > mathematical proof of the 20th century. He proves that any
    > system of sufficient complexity to permit recursion or self-reference
    > is necessarily either incomplete or in some place incorrect. It is
    > breathtakingly simple, and here is a linguistic synopsis.
    >
    > First, let us postulate axiomatic system A. All true statements
    > reside within A, and only true statements are found there. Now, let
    > us construct statement B. Statement B is a recursive or self-
    > referential statement; it talks about itself, and what it says is "B is
    > not an axiom of A." What has happened here? If we place B
    > within A, then A contains the false statement that "B is not an
    > axiom of A", but if we exclude B from A, then the statement that "B
    > is not an axiom of A" is rendered true, and A does not contain all
    > true statements. B belongs either (neither inside nor outside A) or
    > (both inside and outside A), and the paradox is unresolvable
    > within axiomatic system A. In other words, B is undecidable, and
    > the bottom falls out; mathematics is revealed as a Zen koan.
    >
    > What does this have to do with us? Conscious self-awareness is recursive
    > and self-referential; it is consciousness of being conscious. Since
    > we possess it, our brains, as physical instantiations of interrelated
    > and systemic logical structures, have breached the Godelian
    > complexity barrier. In this sense, we are both not and not not the
    > world we perceive and in which we act, kinda like the Zen answer
    > neti, neti (not this, not that). We are neither seamlessly blended
    > with it nor nonrelationally bifurcated from it; our relationship with our
    > environs constitutes a system, beneath or beyond the categories of
    > unity and multiplicity.
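
    Interjecting here, to make my question above concrete: the way I currently picture a Godelian
    limit on a monitor is through the standard diagonal argument about programs, which turns on the
    same kind of self-reference as your statement B. The sketch below (Python again, with made-up
    names) is my own analogy, not a restatement of your argument:

        def perfect_monitor(f):
            """A claimed complete-and-correct monitor: reports whether calling f() would
            ever finish. No such function can exist; this stub just returns a fixed
            answer so that the sketch runs."""
            return True

        def contrary():
            """A program that asks the monitor about itself and then does the opposite --
            the analogue of the statement 'B is not an axiom of A'."""
            if perfect_monitor(contrary):   # monitor says "halts" ...
                while True:                 # ... so loop forever and make it wrong
                    pass
            # monitor says "runs forever", so halt immediately and make it wrong

        # Whatever perfect_monitor answers about contrary, contrary falsifies it, so a monitor
        # of this kind can be complete or correct, but not both. An approximate monitor, like
        # the PC monitors I described at the top, remains perfectly possible.
        print(perfect_monitor(contrary))

    If that is roughly the sense in which you mean the Godelian barrier, then my only claim is that
    the inaccuracy it forces on a self-monitor can still be tolerably small.
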
    > > > >
    > > > > Are we to draw from this experiment the lesson that, because of an added visual
    > > > > marker, lesser apes and other animals don't have a sense that their inner states
    > > > > of, say, readiness to do something, are different? First, we don't know just how
    > > > > different that little spot makes the image look.
    > > > >
    > > > In either case, enough to detect it, for the spot is pointed to in any
    > > > case, either in the reflection or on the self. To claim that it is not
    > > > noticed
    > >
    > > I am not arguing that it is not noticed - to the contrary. I am arguing that it could have
    > > much more salience than our intuition would indicate.
    > >
    > But what kind of salience besides self-reference would explain the
    > differential reactions experimentally registered?
    > >
    > > > is to ignore the different, but in each case real, behaviors exhibited
    > > > toward it, both by the animals who take it to be placed on a conspecific
    > > > and by those animals and humans who realize that it has been placed upon
    > > > themselves.
    > >
    > > > > What seems like a tiny distinction to us might appear huge to
    > > > > them. I
    > > > > seem to remember vaguely how this kind of thing is a common feature of ethology
    > > > > studies of recognition of others in the species.
    > > > >
    > > > That's exactly the thing. These lesser apes are recognizing those
    > > > reflections as conspecifics and behaving towards them in
    > > > instinctually circumscribed ways (for instance, baboons attacked
    > > > their reflections). They are not recognizing them as reflections of
    > > > themselves.
    > >
    > > Exactly - because "themselves" is defined according to the salience of certain
    > > characteristics. For example, let's do a thought experiment. We invent a kind of digital
    > > mirror that can represent us either as an ordinary mirror does or dressed in all kinds of
    > > fantastic costumes. Even some of our own species might try to attack those latter images,
    > > which seem only to want to imitate us perfectly. That little spot on the nose might have
    > > much more salience for the animal than it appears to have for us.
    > >
    > I would like to see an experiment done where the movements on a
    > screen were the same as those of the subject, but the form was
    > different. This experiment was not possible in 1972, when the
    > study was conducted, since we lacked the enabling technology
    > which we now possess. However, they did have screens playing
    > tapes of conspecifics, and even of the subjects themselves,
    > performing different motions as a control. The addition which you
    > suggest (and of which I had previously thought) would indeed
    > logically complete the ensemble.
    > > > >
    > > > >Second, the lack of this ability
    > > > > doesn't seem likely to me.
    > > > >
    > > > Those who can recognize themselves in a mirror can still
    > > > recognize others. It's not a matter of "instead of", but of "in
    > > > addition to."
    > > > >
    > > > >Third, I would say that the best way to find the smoking
    > > > > gun on this one would be to actually research the action of the brain itself with MRIs
    > > > > and other tools.
    > > > >
    > > > I agree that further corroboration is always a useful thing.
    > > > >
    > > > > I would be quite surprised to find that animals don't have some sense
    > > > > of self.
    > > > >
    > > > But an explicit and distinct self-identity? If you think that all
    > > > animals possess this, you WILL eventually be quite surprised.
    > >
    > > You actually want to say they can't distinguish themselves from others and don't have self
    > > monitors? I could say the same - YOU will be surprised. :) I think that you might have
    > > trouble with this idea because you immediately think of the concept of self-identity in
    > > human terms - language and all.
    > >
    > How far down the animal chain are you willing to go? Rats and
    > shrews, for instance? Fish? Clams?

    I wish I knew. To my knowledge clams are not social. But some degree of self-monitoring
    farther down the chain is certainly consistent with the fact that social animals tend to have
    larger brains. Rats are social, so why not rats?

    >
    > > > >
    > > > > In the meantime, perhaps you could give an alternate explanation of how social
    > > > > animals calculate social behavior. I wonder if the ability to have empathy - so strong
    > > > > in humans - could play a role. It seems to me that that is an important way in which
    > > > > we interpret the motivations of others. And I wonder if empathy emerged because it is
    > > > > more efficient than a program that relies on hard-coded stimulus/response. Or perhaps
    > > > > empathy is a better way to detect cheaters. I'd like your feedback with anything you
    > > > > have to say on the subject.
    > > > >
    > > > I believe that a lot of such behavior is instinctual and innate; after
    > > > all, different species manifest differing social behaviors.
    > >
    > > Of course - but what is the nature of the instinct? Pinker says that the reason we are so
    > > flexible is not that we have fewer instincts, but that we have more. So what are the
    > > instinctual components of social behavior?
    > >
    > This is indeed a hot cognitive debate (canalization vs. flexibility),
    > but I see self-consciousness as the final programming, for with it
    > we are basically programmed to be able to transcend our
    > programming. Self-consciousness is the basis for both freedom of
    > choice and the ability to create signification.
    > >
    > > > It can only
    > > > emerge, however, when the ground conditions are met, which of
    > > > course includes the presence of conspecifics.
    > >
    > > I would add here that survival depends on the ability to cooperate.
    > >
    > Cooperation and competition are co-primordial necessities. An
    > interesting game-theoretical approach is to be found in THE
    > EVOLUTION OF COOPERATION, by Robert M. Axelrod.
    > >
    > > > Much of it is
    > > > learned (or at least the latent instinctual capacities are actualized)
    > > > through play behavior and parental nurturing. Remember that
    > > > empathy can perhaps develop prior to self-conscious awareness,
    > > > since other-permanence towards the caregiver develops before both
    > > > self-permanence and object permanence, which develop together in
    > > > the human child.
    > >
    > > I'm not sure. I would think that empathy depends on an already stable platform of self.
    > >
    > I am presenting the Piagetian model (which was also the context in
    > which Lewis and Brooks-Gunn pursued their studies). Further
    > study in the area would be a good thing.

    ===============================================================
    This was distributed via the memetics list associated with the
    Journal of Memetics - Evolutionary Models of Information Transmission
    For information about the journal and the list (e.g. unsubscribing)
    see: http://www.cpm.mmu.ac.uk/jom-emit



    This archive was generated by hypermail 2b29 : Wed May 24 2000 - 20:04:14 BST