Memes and associative learning in neurons

Steve (tramont@iinet.net.au)
Tue, 03 Nov 1998 10:32:48 +0800

Message-Id: <3.0.6.32.19981103103248.007935a0@mail.iinet.net.au>
Date: Tue, 03 Nov 1998 10:32:48 +0800
To: memetics@mmu.ac.uk
From: Steve <tramont@iinet.net.au>
Subject: Memes and associative learning in neurons

At 05:55 AM 10/15/98 +0800, I wrote:
>
>I have at hand an article by E. R. Kandel and R. D. Hawkins that was
>published in the 1992 issue of the Scientific American (Mind and Brain).
>They have shown and experimentally demonstrated that a neuron also learns
>by association.

This is incorrect. Kandel and Hawkins' article does not support my
position, and I apologise for this oversight on my part. Nonetheless, it
does not alter my original position, which is based on my assumption that
autopoiesis requires individual organisms within a colony (neurons in a
brain) to make choices that benefit their ecology (the brain) while
"rewarding" the neurons for their complicity. In order to do this, I
argue, neurons need to be able to associate experiences at some basic
level. And this association of experiences is the key to attributing
meaning to events - hence the relevance of biosemiotics and memetics.

How, then, are memes related to the associative properties of cognition?
Is associativity the fundamental principle of which all other cognitive
processes (such as imitation) are manifestations?

Memes (which are themselves composed of 'sub'-memes) are associated
together to form higher-level gestalts of meaning (ie, 'super'-memes).
When I use my body to drink a cup of coffee, I form associations between
my hand holding the cup, the temperature of the cup as sensed by my skin,
the visual impact of the brown, steaming coffee in my bright orange cup,
the motion of my hand holding the cup to my lips, etc. (According to my
definition, a meme is anything that can be conceptualised within the mind
of any organism - thus, my body parts and their actions are also memes.)
In the September 1992 issue of Scientific American, A. R. and H. Damasio write:
"Concepts are stored in the brain in the form of 'dormant' records. When
these records are reactivated, they can re-create the varied sensations and
actions associated with a particular entity or a category of entities. A
coffee cup, for example, can evoke visual and tactile representations of
its shape, colour, texture and warmth, along with the smell and taste of
the coffee or the path that the hand and the arm take to bring the cup from
the table to the lips. All these representations are re-created in separate
brain regions, but their reconstruction occurs fairly simultaneously."

A sentence that carries with it a specific meaning or conceptualisation is
a meme. And so is each word of which it is composed. As is each letter of
each word. And so on. The word "red" has its association with blood, red
sunsets, danger, the letter "r" (as we might recall from primary-school
days, "a" is for "apple", "d" is for "dog", "r" is for "red"), etc.

How far can this subdividing of memes continue? How about at least to the
neural level? Neurons specialising in "longness" or "redness" or "motion"
are, I propose, participants in simpler, more essential, neural-level
meme-cultures.

In their work on neural nets and AI, Barto and Jordan proposed an
associative reward-penalty algorithm that would seem to be consistent with
what I am trying to achieve. From Deniz Yuret's web site ("A Brief Review
of Memory Research in Cognitive Neuroscience"):
"Barto and Jordan proposed a novel learning algorithm for neural networks
that might be biologically more plausible than the standard back
propagation [Barto and Jordan, 1987]. The standard algorithm for learning
in artificial neural networks involves comparing the output of the network
with the desired output, and propagating the gradient of the error
backwards through the connections to make the network gradually approach
the desired setting. This method performs well for using neural networks to
solve artificial problems of classification and recognition, however it is
unreasonable to expect the existence of a "desired output" oracle in
nature. The Associative Reward-Penalty algorithm proposed by Barto and
Jordan relies on an approximation of the gradient by individual units
independently, rather than an exact computation that has to be propagated
through the network."
(http://www.ai.mit.edu/people/deniz/html/aiwp343/node3.html)
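
Out of curiosity, I have sketched the flavour of such a reward-penalty
unit in a few lines of Python. I stress that this is only my own toy
rendering of the idea - the task, the constants and all the names are
mine, not Barto and Jordan's (their paper gives the real algorithm). Note
that the unit is never told the right answer; it only receives a scalar
"reward" after the fact:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

class ARPUnit:
    # A single stochastic unit trained in the associative
    # reward-penalty style: no "desired output" oracle, just a
    # broadcast scalar reward.
    def __init__(self, n_inputs, lr=0.1, penalty=0.01):
        self.w = rng.normal(0.0, 0.1, n_inputs)
        self.lr = lr            # learning rate
        self.penalty = penalty  # how strongly failure pushes the other way

    def act(self, x):
        # Fire stochastically: p is the probability of emitting a 1.
        self.p = sigmoid(self.w @ x)
        self.y = float(rng.random() < self.p)
        return self.y

    def learn(self, x, reward):
        if reward:
            # Reward: nudge p toward the action just taken.
            self.w += self.lr * (self.y - self.p) * x
        else:
            # Penalty: nudge p (weakly) toward the opposite action.
            self.w += self.lr * self.penalty * ((1.0 - self.y) - self.p) * x

# Toy task: learn to output the AND of two binary inputs.
unit = ARPUnit(n_inputs=3)
for step in range(20000):
    a, b = rng.integers(0, 2, 2)
    x = np.array([a, b, 1.0])        # third input is a constant bias
    y = unit.act(x)
    reward = (y == float(a and b))   # the environment judges, never instructs
    unit.learn(x, reward)

for a in (0, 1):
    for b in (0, 1):
        p = sigmoid(unit.w @ np.array([a, b, 1.0]))
        print(a, b, "->", round(p, 2))

Each unit adjusts its weights using purely local information plus the
global reward - which is exactly the property that makes the scheme more
biologically plausible than back propagation.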

Associative learning has been central to the work of George Mobus in
developing neural nets. His model of a neuron incorporates a signal input
representing a "conditionable stimulus", whose effect on the neuron's
output (the unconditioned response) depends on its correlation with a
matching input signal (the unconditionable stimulus). His paper, titled
"Toward a Theory of Learning and Representing Causal Inferences in Neural
Networks", can be found at:
http://arthur.cs.wwu.edu/faculty/mobus/Adaptrode/causal_representation.html

Mobus' Adaptrode model (in his own words) "has a number of features in
common with many of the recurrent models of neural networks. The main
difference is that here the recurrent activations, representing memory of
prior activations, are brought down to the level of the synapse rather than
operating at the level of the network."
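
To make "memory at the level of the synapse" concrete, here is a rough
Python sketch of the general idea. Again, this is my own toy
simplification with invented constants, not Mobus' actual Adaptrode
equations (his paper gives those):

class AdaptrodeSynapse:
    # A single synapse that keeps traces of its own activity at
    # several time scales, in the spirit of the Adaptrode.
    def __init__(self):
        self.w0 = 0.0   # fast trace: follows current activity
        self.w1 = 0.0   # slower trace: medium-term memory of w0
        self.w2 = 0.0   # slowest trace: long-term memory of w1

    def step(self, x, gate):
        # x: presynaptic signal in [0, 1] (the conditionable stimulus)
        # gate: associative signal (the unconditionable stimulus);
        #       without it, the slower traces barely consolidate.
        self.w0 += 0.5 * x * (1.0 - self.w0) - 0.1 * (self.w0 - self.w1)
        self.w1 += gate * 0.05 * (self.w0 - self.w1) - 0.01 * (self.w1 - self.w2)
        self.w2 += gate * 0.005 * (self.w1 - self.w2)
        return self.w0  # current efficacy of the synapse

syn = AdaptrodeSynapse()
for t in range(100):   # paired stimulation: signal arrives with the gate open
    syn.step(x=1.0, gate=1.0)
for t in range(200):   # silence: the fast trace decays toward the slow ones
    syn.step(x=0.0, gate=0.0)
print(round(syn.w0, 3), round(syn.w1, 3), round(syn.w2, 3))

After the silent period the fast trace has decayed, but the slower traces
still hold a residue of the earlier pairing - the synapse itself
"remembers" its prior activations.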

Clearly, Mobus' neuron is incapable of conceptualisation. Nonetheless, his
model provides a proven, practical simulation showing that associative
principles work at some rudimentary level devoid of meaning. However, as a
complex piece of electronic gadgetry, such a neuron could never occur in
nature, because it violates natural laws of entropy. And besides, how easy
is it even to simulate an associative reward-penalty algorithm in a piece
of computer hardware?

The only way that complex colonies of neurons can emerge, I propose, is if
each neuron makes choices within its ecology. These choices are
subjective, contextual and associative. The neuron's interpretation of its
world is achieved through its bodily senses (via its synapses); hence it,
too, has a mind-body relationship. What I am getting at is that
conceptualisation is the stuff of life: its principles are fully general
and applicable at all levels. Perhaps we should consider a more general
definition of memetics, one that applies also at the neural and cellular
levels, and one that more realistically reflects an "Associative
Reward-Penalty algorithm" in the spirit of a biological, living entity.
That is, there is such a thing as a neural-level meme.

**Just a brief note on some assumptions I make:
1) There is only one type of learning - and that is by association.
Habituation is not a separate type of learning but, rather, a recursion of
associations. Habituation is the means by which any association attains
continuity over time. All perception/conceptualisation is habitual at some
level - eg, when I am on a fast-moving train and the train stops, I
continue to perceive motion. This is because the neurons specialising in
motion-perception need to desist from their habituated associations in
order for me to return to my perception of a stationary world. Perceiving a
cup on a table involves habituation in the visual cortex that ceases when
our gaze shifts. Habituation is consistent with Hebb's rule - that is, the
more a particular neural connection is used, the stronger that connection
becomes. In terms of habit, this translates to "the more an association is
practised, the more involved neurons become in the habituation of that
association". (A toy Python sketch of this rule appears after this list.)
2) All conceptualisation is systemic. That is, my personality, and each
thought that I think, requires the participation of many neurons within my
neural culture (brain). This is analogous to culture, with its cultural
stereotypes and the conceptualisations that emerge at the cultural level -
eg, technology, fashion, tradition, etc. Chaotic memetic attractors apply
at the neural-brain level just as they do at the human-cultural level -
eg, role models.
3) The mind-body relationship is an approximation I use to approach the
notion that mind is inseparable from body. Body *is* mind, mind *is* body.
4) The binding problem - I assume that the personality of any organism
that is composed of a cellular colony (eg, humans) has immediate access to
the neural-level memes in its brain and body.
5) The rationale in this post is presented as simply as possible. Its
implications are inter-disciplinary and considerably more sophisticated
than a brief email can convey - as this incomplete list of prior
assumptions demonstrates.
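
As promised in point 1, here is a minimal Python sketch of Hebb's rule
with forgetting, applied to the stopped-train example. The numbers and the
scenario are entirely my own illustration:

# One connection between two 'motion' neurons. The more it is used,
# the stronger it grows; when use stops, it lets go only gradually.
eta, decay = 0.05, 0.02
w = 0.0
trace = []

for t in range(300):
    pre = post = 1.0 if t < 200 else 0.0   # motion present, then the train stops
    w += eta * pre * post - decay * w      # Hebbian strengthening plus forgetting
    trace.append(w)

print(round(trace[199], 2))   # strong after 200 steps of practice
print(round(trace[220], 2))   # still substantial just after the motion stops
print(round(trace[299], 2))   # fading as the habituated association desists

The lingering value of w after the stimulus is gone is my cartoon of why I
briefly continue to perceive motion on a train that has stopped.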

Stephen Springette

References:
Barto, A. G. and Jordan, M. I. (1987). Gradient following without
back-propagation in layered networks. In Proceedings of the IEEE First
Annual International Conference on Neural Networks, volume 2, pages 629-636.

Mobus, G. E. (1994). Toward a theory of learning and representing causal
inferences in neural networks. In: D. S. Levine and M. Aparicio (Eds),
Neural Networks for Knowledge Representation and Inference. Lawrence
Erlbaum Associates, Hillsdale, New Jersey.

===============================================================
This was distributed via the memetics list associated with the
Journal of Memetics - Evolutionary Models of Information Transmission
For information about the journal and the list (e.g. unsubscribing)
see: http://www.cpm.mmu.ac.uk/jom-emit