Being a Human Being, Memetics and Complexity Science

Robin Wood
Tue, 24 Jun 1997 19:10:11 +0100


Dear Fellow Human Beings

I am writing this in the spirit of one who has been trying to
synthesise cognitive science, complexity science, ontology and
phenomenology into something practical that we can use to make this
little old world a better place (and where better to start than with
the global/multinational clients of my firm?).

What follows are my reflections on the first seminar we ran here in
London including Dr If Price, Arthur Battram, John Farago, Giles
Taylor and a few others. I appreciate it is a little unstructured, but
it should address some of the following issues:
- Human beings are primarily energy fields patterned around several
levels of attractor: atomic, molecular, cellular, organ, organ
systems, hypothalamic-limbic, immune, and finally sympathetic and
parasympathetic nervous systems. Each attractor exerts a powerful
"pull", but through "co-operation" and co-evolution between the
different levels we get (more often than not) intelligent life.
- Humans and animals relate to each other in many modes: language
accounts for somewhere between ten and twenty percent of signals sent
and received. We can go from feeling each other's "vibes" (human
energy fields) to having a shared experience in a multimedia medium
which does not require any words at all. Sometimes the latter are the
most profound experiences. Even poetry, to move us, needs imaginative
imagery to stir our souls.
- We are pattern-recognising animals: patterns are, as Alex Brown
points out, ubiquitous and all around us in many shapes and forms.
Again, language is the exception rather than the rule in our
day-to-day environment. (Yes, Mario, science needs language, but most
of the breakthroughs have been made by teams having shared
experiences, often prompted by creative, non-verbal stimuli; e.g. the
3D model built by Crick and Watson was the point of breakthrough in
double helix land, and Stephen Hawking thinks in 3D images in his
head.) They say a picture is worth a thousand words; 3D must be worth
100,000+ words.
- We not only embody and recognise/create patterns, we also replicate
them. But what is a meme (beyond merely a Dawkins "memory gene")? I
agree with If: we need some FAQs and a glossary which can serve as
our starting point in these conversations (without stifling them).
Perhaps we can use "meme" as shorthand for "a unit of 'something'
that human beings replicate in the process of communication and
culture building". We all create reality in our heads (Husserl's
fundierung- thanks Mike!): personally and socially constructed
reality is a synthesis of memes, which are all evolving and
co-evolving, just as the carriers and creators of memes are doing the
same. It is this interaction which is so interesting.
- Organisations are energetic and memetic systems which construct and
shape reality and the real world, transforming less useful inputs or
abundant resources into more useful outputs and scarcer resources.
They can learn, become intelligent/conscious, and evolve. This is an
incredibly fascinating process, which has become my life's work: how
do we transform and renew them so that they are beneficial forces in
the world?
- Hans-Cees makes a good point: meaning is a property of the semantic
network, not the nodes or connections, as all concepts are context
dependent. So where then is the meme? (Just as we may ask where the
gene is when the genome is continuous.) Is a meme a unit of meaning
which is capable of being embodied, communicated or making a
difference, i.e. something which is context dependent? This would
make much sense given what we are learning about personally and
socially constructed reality and the nature of meaning and knowledge.

Well, then, here are the reflections. Enjoy!

Dr Robin L Wood

P.S. We now have the new seminar brochures released onto the web at:

Personal Learning Review from "Applying the New SCIENCES in
Organisations" Seminar 1- May 1997

Following the seminar, I thought it might be worth doing a personal
learning review of how what we did together made sense to me. I
realise that this is only my own sense-making pattern, and I therefore
submit it to you, my fellow travellers, as something which might help
in interpreting the maps we developed at the seminar, copies of which
are attached.
1. The Principles of Complexity
To me what lies at the heart of CAS theory is the adaptive agent
nested in recursive sets of other CAS which are themselves adaptive
agents. Holland deals with the fundamental properties of such adaptive
agents and their co-evolution in "Hidden Order". On reviewing our
seminar output in the light of this book it appears that an area of
substantial confusion lies in the questions:
- What is the nature of the internal models which adaptive agents
develop through their co-evolution?
- Does it matter whether human beings (or any other adaptive agent)
are aware of the content of such models?
- What do memes have to do with internal models? Are they components?
Holland deals with the internal models as strings of Boolean logic
which can combine and recombine, through the mechanisms of tagging
and building blocks. The units which combine and recombine are the
building blocks, and the way in which tagging works is to enable
adaptive agents to filter, specialise and cooperate. Specialisation
and cooperation lead to the emergence of meta-agents, meta-meta-agents
and so on.
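A minimal sketch of Holland-style tagging, assuming bit-string tags
and conditions in which "#" acts as a wildcard; the three agents, their
tags and the conditions they look for are invented for illustration:

```python
def matches(condition, tag):
    """Holland-style matching: '#' is a wildcard accepting either bit,
    so conditions with more '#'s are more general."""
    return len(condition) == len(tag) and all(
        c == "#" or c == t for c, t in zip(condition, tag))

# Hypothetical adaptive agents: (name, own tag, condition it looks for)
agents = [("A", "1101", "11##"), ("B", "1110", "#101"), ("C", "0011", "00##")]

# Aggregation: an interaction is possible wherever one agent's
# condition matches another agent's tag.
links = [(a, b) for a, _, cond in agents
         for b, tag, _ in agents if a != b and matches(cond, tag)]
```

Here A and B find each other's tags and can aggregate into a
meta-agent, while C's condition matches nothing and it stays isolated-
the "filter, specialise and cooperate" story in miniature.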
Tags delimit the network of possible interactions between nodes
(adaptive agents and meta adaptive agents), making aggregation of
adaptive agents possible. Flows of resources (energy, matter and
information) take place between nodes across connectors according to
the tagging built into the internal models, i.e. internal models
evolve to anticipate those familiar patterns out there which enable
the adaptive agents to secure the appropriate resources necessary for
their survival. Efficient networks conserve and recycle resources and
thus accumulate them as wealth, which can then be used to interact
with other networks (trade). Adaptive agents with useful tags spread,
while agents with malfunctioning tags cease to exist. Flows also have
multiplier effects, leading to increasing returns between compatible
networks of adaptive agents. Combination and recombination of
building blocks and aggregation, together with multiplier effects,
lead to diversity and non-linear behaviour in the ecosystem level CAS
as a whole.
Building blocks (or "schemata" in artificial life) are the basis for
internal models- they enable implicit and explicit models to spot tags
out there that correspond to the internal model. This is very much the
Maturana point about us being able to see/notice only those entities
our internal models have already modelled, rather than us
perceiving/mirroring an objective reality that is "out there". (We
could say "our models construct our world"- this is more holistic than
attributing that property to language alone). Rules act as competing
hypotheses in internal models and between internal models, which
evolution or cognition can select. This leads us to Dennett's point
about Gregorian, Popperian and Mind tool evolution- selection can take
place either at the level of the adaptive agent as:
- Mode 1: Effector (Organism) (where conditioned, mode 1 behavioural
routines operate at an unconscious level). This is either the source
of great unconscious competence ("Zen Archery") or incompetence,
depending upon the fit between the models in the specialised
processors and the contexts being modelled. ("Skinnerian evolution",
Dennett). We might call this "learning while doing", which works well
where fixed point, limit cycle or torus attractors are inducing
incremental behaviour and feedback. (Losada).
- Mode 2: Generator and Selector of Internal Models (sets of
hypotheses or if/then rules) which enable the simulation of contexts
which might arise and the testing of hypotheses within those
contexts. This Popperian mechanism (Dennett):
- enables the generation of new hypotheses for testing,
- selects out unfit/unworkable hypotheses,
- identifies fit/robust hypotheses, and most importantly,
- teaches us which sets of hypotheses are most appropriate in
particular contexts.
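The four steps above can be caricatured as a generate-and-test loop in
which candidate hypotheses "die in the simulation" rather than in the
world. The one-dimensional hypothesis space and the scoring function
standing in for the simulation are purely illustrative:

```python
import random

def popperian_search(simulate, pool_size=10, rounds=20):
    """Generate hypotheses, test them against a simulation rather than
    against the world, and retain the fittest (learning before doing)."""
    random.seed(1)  # for reproducibility of the sketch
    pool = [random.uniform(0, 1) for _ in range(pool_size)]
    for _ in range(rounds):
        # Generate: mutate the survivors into new candidate hypotheses.
        candidates = pool + [h + random.gauss(0, 0.1) for h in pool]
        # Test and select: unworkable hypotheses are discarded unharmed.
        candidates.sort(key=simulate, reverse=True)
        pool = candidates[:pool_size]
    return pool[0]

# Hypothetical context: the simulated world rewards hypotheses near 0.7.
best = popperian_search(lambda h: -abs(h - 0.7))
```

Nothing real is risked at any point; only the simulation's verdict
selects among the rules.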
Popperian mechanisms operate best where chaotic attractors (or, in
catastrophe theory, cusps and edges) are evident, and the complex
adaptive system being modelled requires different parts of the
possibility space to be mapped in order to develop or test new rules
for survival in each part of possibility space. This represents
"learning before doing", and involves consciousness in global
workspace.
- Mode 3: User and co-creator of mind tools and learning/playing
environments, where learning to learn and playing to learn act as
accelerators for learning while doing and learning before doing.
Learning/playing environments should also emphasise "learning after
doing", where a safe environment is available to reflect on "Why it
worked" or "Why it did not work". Safe conditions for learning,
together with learning tools ("transitional objects", which can be
tasks, mindtools or toys such as "cases"), characterise this mode
("Gregorian evolution", Dennett).
Implicit models are invariable rule sets built into the chemical or
neuronal structure of organisms which dictate invariant behaviour- for
example, the bacterium always swims upstream on a glucose gradient,
whereas mammalian internal models are more explicit and enable
"lookahead" functions which can select between alternatives.
The lookahead function is probably the evolutionary reason for
consciousness, which provides a global workspace (Baars) in which
novelty can be handled and combinations and recombinations of models
can be simulated. Specialised processors in themselves create rigid
hierarchical contexts which are as hardwired as they can be, as neural
evolution has selected them for their specific purposes. We thus find
a contention for attention between our specialised processors acting
in the unconscious, driving most unconscious behaviour (supported by
the hypothalamic limbic system, cardiovascular system, organs and
muscles), and this contention can only be resolved through the
co-operation of the specialised processors (acting as adaptive agents
with their own internal models), to create global workspace to suspend
automatic behaviour and trigger thought or reflection.
What is fascinating is how the combination and recombination of the
implicit models contained in the specialised processors in the global
workspace, reprograms the specialised processors and their
interactions with one another. This is probably the basis for much of
NLP and other "conditioning" techniques, where visioning, hypnosis and
conscious work raise these specialised processor models into
consciousness, creating a direct programming connection between the
processors and the goal directed activity e.g. stop smoking, lose
weight. Here we reprogram our specialised processors for specific
appetites directly.
In lookahead though, what is more important is that integrated models
are created which connect sub-models in specialised processors into
behavioural routines. This is where "what-if" neurobiology becomes so
powerful- the fore-brain as described by Ingvar's research is
continuously generating such routines from sub-routines accumulated in
specialised processor memories. Although all internal models as
described by Holland are anticipatory (i.e. they "know" which tags
lead to which desirable resources and beneficial/harmful interactions
at the most basic level), this kind of anticipation is rudimentary.
The ability to anticipate the variables which result in more
beneficial outcomes for the modelling adaptive agent in a future state
of the CAS being modelled is much more precious.
But of course, no adaptive agent can predict the future-
collectivities of adaptive agents can cooperate to co-produce specific
kinds of futures, providing their internal models are congruent and
are sufficiently rich (requisite variety) and precise (clear tags) to
model the future state of the CAS relatively accurately. The degree to
which adaptive agents and aggregates/ networks of adaptive agents can
control or influence the exogenous variables in the CAS they are
operating in is also likely to be critical in determining whether the
adaptive agents can bring about the desirable future state of their
particular CAS their models "desire".
This brings us around to the conversations we had about mode 1 and
mode 2 behaviour, and view 1/ view 2 perspectives on the "CAS in
focus". My definition of mode 1 as being driven by specialised
processors operating unconsciously now reveals that the internal
models of the networks of adaptive agents which are producing this
behaviour are in control- until such time as they receive dissonant
feedback. Cognitive dissonance (Festinger) triggers a contention
among specialised processors for airtime in global workspace. These
specialised processors broadcast on all frequencies (Wow- that hurt!!
You freeze, the tears well up in your eyes- damn, I must not hit my
thumb with that hammer again! Wait until pain (broadcast at a
fundamental physiological level on all major frequencies) subsides.
Your hammering specialised processors "try harder" (reprogram) to miss
your thumb- and your other specialised processors may also slow the
whole operation down to help your hammering specialised processors to
do a better job- they are very cooperative and sensitive).
Mode 1 thus needs global workspace to learn. Global workspace,
however, can be triggered by anticipatory specialised
processors/behavioural routines to engage in learning before doing.
Instead of hammering a nail into the wall, we are about to go on a
business trip to Brazil. What clothes shall I take (think- southern
hemisphere- opposite of London- therefore autumn- but what is the
temperature? Humidity?). I go to sleep and leave packing until
breakfast. I am reading FT and notice there is a global weather chart-
primed by my specialised processors responsible for the Brazilian
weather forecast collaborating with my clothes-packing specialised
processors). I pack my warm suits as it is cold, but take one light
outfit just in case it warms up (my general weather forecasting
specialised processors have memories of great variability in
climates). Now I have a new routine (specialised processors) which is
"When booking air tickets, look at foreign weather forecasts" (the
trigger for those specialised processors).
Mode 2 is more than simply having specialised processors
collaboratively triggering the emergence of global workspace- for me
it is about managing brainstates. The emergent properties of activated
networks of neurons (i.e. the adaptive agent substrate of specialised
processors self-organising into the functionality of specialised
processor behaviour where the consistency of that behaviour points to
an internal model) are brainwaves. Brainwaves are great, because we
can see their output on EEGs live on computer screens and get
immediate feedback on the effectiveness of our brainstates. NLP and
many other disciplines also point to the criticality of bodymind
states in memory and behaviour: memory is state dependent. The latest
cognitive science is confirming the suspicions of many, such as
Pribram, that memory is holographically encoded. Excitation patterns
in neural networks trigger specific properties of networks of neurons
and cells called memories. So internal models are not only held in
neural networks themselves; they are also dependent on the functioning
of the electromagnetic waves produced by neuronal excitation.
Electroshock "therapy", Human Energy Field research and much of the
leading edge research is now examining the properties of these waves.
But let us return to mode 2- what it is all about is developing a
deeper capability to find the appropriate brainstate for a specific
situation or activity. Not only "How do I think about this?" or
"What do I think about this?", but "How can I get myself into the
most productive state to think about/deal with this?" In mode 2 we are
able to call into global workspace the relevant material from memory
along with the associated functionality of the specialised processors
which produced that material. In other words, capability to act
co-exists alongside the memory of the context for action.
This leads me to conclude that all memory is actually an emergent
phenomenon, a result of the interaction between specialised processors
whose function and memory cannot be separated. This is why global
workspace emerges so frequently when dealing with complex situations
and problems- the need for novelty to generate new patterns which are
capable of transcending and integrating previously unconnected
contexts and functions.
This also links very strongly into developmental processes (Wilber),
where a "self" emerges through the process of differentiating itself
from its former self, and then transcending the former self so that it
can operate on the former self using its newly differentiated
capabilities and tools. For example, we develop our ego self and sense
of membership when we have been able to develop language. Language
then acts as a tool which we can use to operate on our more basic
selves. For example, inner dialogue acts to distinguish the parent,
adult and child modes via the kind of language and tone being used.
This in turn forms the basis for rudimentary self-control and attempts
to be more effective in engaging with the world. Although this is only
the third level of human development out of ten or more, the process
by which language becomes a way of differentiating a new self and
enabling us to operate on or develop other selves is a powerful
example of the developmental process at work in all ten stages of
development.
A complex society is likely to develop many fragmented, unconnected
internal models represented in the specialised processors of billions
of adaptive agents (people) which go unexamined beneath the surface of
consciousness forever. This explains why (de Geus):
- We are sometimes "stupid" or make mistakes (no feedback leads to
specialised processor "overconfidence" and lack of congruence with
actual results, leading to further deviation and eventual crash
if/when breakdown occurs)
- We can only see when a crisis opens our eyes (the pain of crisis,
whether a sore thumb from a mistaken hammer blow or business
failure/divorce etc, alerts specialised processors/unconscious
routines to "error" in internal models and consequent behaviour)
- We can only see what we have already experienced (specialised
processors only have internal models based on learning from actual
experience, so they are not very good at handling the unfamiliar)
- We cannot see/ignore what is emotionally difficult to see
(cognitive dissonance/fear of pain lead specialised processors to
damp feedback/slow the alerting of global workspace to signals)
- We can only see what is relevant to our view of the future
(anticipatory models pick up signals/information relevant to our
models of the future, and ignore the rest)
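The common mechanism behind de Geus's observations is an anticipatory
filter: we notice only what our internal models already encode. It can
be caricatured in a few lines (the model contents and the incoming
signals are invented examples):

```python
def perceive(signals, internal_model):
    """Anticipatory filtering: signals matching the internal model are
    noticed; unfamiliar signals pass through unregistered."""
    noticed = [s for s in signals if s in internal_model]
    missed = [s for s in signals if s not in internal_model]
    return noticed, missed

# Hypothetical manager whose models were built from past experience.
model = {"falling sales", "rising costs"}
signals = ["falling sales", "new competitor", "rising costs"]
noticed, missed = perceive(signals, model)
```

The "new competitor" signal is precisely the one with no matching
internal model, so it is the one that goes unseen until crisis forces
it into global workspace.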
Let me now link this all back to what we developed on the left hand
side of our matrix as "Complexity Principles" (I will take them in a
slightly different order to that in which they were generated):
Principle 1: "Seeing Patterns"
For me this was the fundamental complexity principle we generated.
Adaptive agents are pattern holders and generators, as well as
detectors. The internal models and rule sets in those models held by
adaptive agents are the key to understanding both genetic and memetic
evolution. We included in this the three clusters: "Great Metaphor
Set- Living System Dynamics", "Re-interpreting our Reality" and "When
is unconscious competence not good enough?" In no particular order, we
were picking up here the rule dimension- evolutionary algorithms,
memes, lock-in, Axelrod, Dawkins, Dennett, adaptive systems design
principles, human beings as complex pattern recognisers and
complexifiers, complexity of language and worldviews and what do we
attend to?
So, here, in Holland terms we have internal models and tagging
together with the mechanisms by which these models generate behaviour
and interact with other adaptive agents.
Principle 2: "Acting consciously (Having the Choice)"
What I think we were picking up here was the choice between mode 1 and
mode 2, and knowing when to invoke mode 3 (user/selector of mindtools
and learning environments). Clusters such as "Complexity thinking",
"Change is a function of the observer in the system", "effective
languaging", "being a human (while) being a change agent" and "making
a difference" represented the different ways in which being able to
recognise and generate different patterns of behaviour and internal
models could lead to more appropriate/better outcomes. Having the
choice (acting consciously) means we can choose to see different
patterns and use mode 1, mode 2 or mode 3 as required, without being
locked in to an exclusive mode.
Under this principle we can also pick up on the distinction between
view 1 and view 2. View 1 is that of the "Independent Observer", where
we stand back from the situation or phenomenon we observe, to enable
us to be objective about it. This is the classic scientific method,
where we do not want our interaction with the thing/system being
observed to interfere with our observation. Now, this works for
studying artificial life and other computer models and systems,
engineering artefacts, solar eclipses, plants, ecosystems and even (to
a lesser extent) "primitive" tribes and "mad" individuals. We can
create conditions in the laboratory or even in the field where we
distinguish our own actions, desires, needs, projections and
interpretations from what is actually going on. View 1 works well
for:
- Stable, repeatable phenomena, where the "observer" position enables
us to see most clearly the underlying rules at work.
- Situations where we have no control or influence over the outcome,
i.e. you really are an observer, and there is no way you can "get
into" the situation, e.g. studying deep space or watching a video of
the war in Bosnia.
- Situations where we are indifferent to the outcome, e.g. for me,
whether Kasparov wins or loses to Deeper Blue- who cares, they are
both machines anyway!
The basic principle underlying view 1 is that situations or phenomena
which our internal models find strange, boring/predictable or
irrelevant require us to adopt, or shift into, a different frame for
viewing as an observer. Detachment is the closest word I can find to
describe what this feels like.
View 2 is that of the "Interactor/Doer", and works well for:
- Complex emergent phenomena as found in physics and biology.
- Complex adaptive systems where we as adaptive agents are close to
or involved in the action, concerned about the outcome or familiar
with the situation.
Involvement, commitment and caring describe what it feels like in
view 2. Here our internal models are telling us we need to immerse
ourselves in the phenomenon or situation because we need to affect
the outcome.
closer to unconscious competence or incompetence, where we slip into
our natural way of being and doing. Yet, it is precisely in these
situations that we should beware of getting so caught up in the action
that we literally bump into the trees or miss seeing the wood for the
trees. That is, the observer view 1 needs to interact with the doer
view 2 while doing. This is where the interaction between mode 1 and
view 2 can be very powerful, resulting in the "reflective
practitioner".
The resolution of what kind of mode of behaviour and way of viewing a
situation or phenomenon is most useful and appropriate for our
purposes appears to lie in the third dimension of the matrix:
Space-time, which answers the questions: "Where" and "When" a
particular combination of modes and views is best.
As we have described modes 1, 2 and 3, there is an element of
automaticity about the triggering of global workspace by specialised
processors. When, however, we develop higher order specialised
processors responsible for "learning about learning" and "playing to
learn", we are capable of automatic diagnosis which can trigger
"choice-generation" and "selection" routines with regard to our
internal models. I also believe (but cannot prove) that such routines
underlie the development of all levels of self and consciousness, from
the pleromatic to the atmanic. In other words, default hierarchies of
rules enable new orders of capability to emerge at each level of
development. (Hence Bernard's dictum that the fixity of the internal
environment is the precondition for a free life.) The fixity of one
level and its
mechanics and properties (for example, the standardisation, lock-in
and self-reference necessary to create a language), enables the next
level of development to work and operate on the previous level. If
cells were unpredictable, organs and tissues could not work. If organs
and tissues were not locked-in to stable patterns (even if the heart
operates on a strange attractor, at least there is an attractor!), we
could not have the dynamics of organ systems and behaviour.

Principle 3: "Intentionality & Emergent Control"
If seeing patterns and acting consciously are two of the most notable
characteristics of adaptive agents acting intelligently, then what
these two things make possible is the active use of intentionality and
emergent control. Under this heading we had three clusters: "When to
conserve & when to change: personal mastery", "Zen and the art of
participant learning: Flow states-flow play" and "Playwork: product vs
me vs process".
Most living systems maintain coherence because they would cease to
exist if they did not. Coherence, however, can operate at many
different levels, from the physical to the transcendental. Anything
which is perceived to interfere with coherence is usually resisted/
defended against, while anything which can enhance coherence is
usually gratefully grasped at.
Change for change's sake is thus not something we find much of in
nature, which seeks to maintain coherence with the minimum of energy
and resources. When to conserve and when to change thus is probably
the most basic question all living systems face, providing they are
conscious of the choice. Being stuck, "being right", often indicate
the need for change.
It is my belief that, by being consciously aware of our interaction
with others as reflective practitioners in mode 2 view 2 or mode 3
view 2, we are actually developing another law of nature which goes
beyond
"maintain coherence at any cost". This new law is something like:
"Maintain learning, innovation and play at any cost". What is
happening at the leading edge of evolution is that win/win/win
strategies in a world of abundant resources require all of us to learn
more things faster than ever before. In general, compared with
previous ages (agricultural, industrial) there is no shortage of
capital, no shortage of land or labour, but definitely a shortage of
knowledge and understanding and intelligent behaviour. We have reached
the point where the amount of power in our societies is negatively
correlated with intelligence and sustainability.
So, although memes do not like dialogue, we are capable of integrating
new and different memes into new meanings and behaviours, so as to use
our intentionality more powerfully and collaboratively. This is the
art of Zen and participant learning and participant scientific
observation. The ability to pause and reflect on our own affective,
cognitive and physical processes in the heat of the moment is an
evolutionary event: the essence of reflective practice and the "eye
within". Selecting the appropriate world view for the occasion may now
become a force for a different kind of evolutionary process.
In playwork intuition is developed from experience- the residual
wisdom in our specialised processors in collaboration triggers the
global workspace to note when "aha" moments occur. Whether this is due
to the resonance between specialised processor waves (pattern
matching) or whatever, this is clearly a non-linear process.
We agreed that more rapid learning and intelligence might result from
a focus on learning and adaptive processes, rather than a focus on
products or personalities. In other words, an understanding of the
fundamental principles involved in learning and creativity/play, is
more generalisable and flexible (as a nested set of rules in a default
hierarchy), than a specific toolkit or personality. Yet we also
recognised that specific processes (as in, say "nominal group
technique"), may also limit as much as they enable. Is "Complexity
thinking" more useful in process thinking? Are not processes, products
and personalities all subject to selection? It seemed clear that there
exists an evolutionary trade-off between locking in on one set of
tools and personalities (say in a service business like a
consultancy), and the ability to learn fast and generate new ones.
Principles 4 & 5: "Rules/reward of Interaction" and "Replicators
Interaction Exchange"
These two principles appear closely connected, as the replication of
memes and the rules and rewards of interaction are closely
intertwined. We focused on the fact that the interaction of adaptive
agents takes place through tagging and the representation of tags in
internal models. Just as genes seek vehicles for replication, so too
do memes (as embodied in our cognitive and social systems). Are memes
themselves replicators? Genes co-exist with many other elements in a
cell, without which they could not operate. The co-existence of genes
and cell components surely parallels the relation of memes to minds?
(Or
perhaps, at a finer grain, memes to specialised processors).
So, let us assume for the moment that memes are a form of program
within a specialised processor, contending for resources and airtime
in the bodymind system. Most of the time such memes behave themselves
and are benign. If, however, they begin to run amok like cancer cells,
then they will destroy their host. Bodymind systems have in-built
checks and balances to prevent memes running amok and replicating out
of control. In social systems, however, the advent of the mass media
has enabled memes to go crazy. Political memes run amok (18 years of
Tory memes dominating, then we suddenly switch to 15 years of Tony
memes!); business memes such as brands or pop-groups occasionally
dominate the system (Virgin, Virtual Reality, Body Shop- any big PR
hype is a good epidemiological study). The sturdy western scientific
progress meme may actually be one of the few largely beneficial memes
around, only a little better than the "do unto your neighbour as you
would be done by" meme.
Memes can replicate in any information transmitting or symbolic
system, to the extent that the system will allow. In mode 1 powerful
memes can take control without us realising it- belief systems create
attractors which become irresistible for specific groups of
specialised processors, which then flock around that meme. Contention
between specialised processors is thus critical to health and
maintaining a requisite variety of memes in the system. Beneficial
memes become assimilated into the internal models of adaptive agents,
and even become embedded in the infrastructure of complex adaptive
systems. (Cisco routers on the Internet, Windows/NT/Intel in networks
of PCs, the Catholic Church in Catholic countries).
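The "epidemiological study" framing can be made literal with a toy
contagion model- an SIS-style sketch in which carriers transmit the
meme on contact and occasionally drop it. All the rates and the
population size are invented; real meme dynamics are far messier:

```python
def meme_epidemic(pop_size=1000, contact_rate=0.3, drop_rate=0.05,
                  steps=50, seed_carriers=5):
    """Toy SIS-style model of a meme running amok: carriers transmit
    on contact with the susceptible, and occasionally lose interest."""
    carriers = seed_carriers
    history = [carriers]
    for _ in range(steps):
        susceptible = pop_size - carriers
        # New carriers from contact; lapsed carriers drop the meme.
        new = contact_rate * carriers * susceptible / pop_size
        lost = drop_rate * carriers
        carriers = min(pop_size, carriers + int(new) - int(lost))
        history.append(carriers)
    return history
```

With these (invented) rates the meme saturates most of the population
within a few dozen steps- the PR-hype curve in miniature- and the
in-built "drop rate" is what plays the role of the bodymind's checks
and balances.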
Memes can be specialists or generalists- this is set out in Holland's
description of general conditions which contain less information but
are tested in a wider variety of circumstances (they respond to more
messages or tags). These general conditions are the basis of the
default hierarchy in rule sets, whereas more specific conditions
process fewer messages and have more precise effects as building
blocks (much as the genes for haemophilia or red hair are more
specialised than those for, say, making the skin).
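Holland's general-versus-specific conditions, and the default
hierarchy they produce, can be sketched directly. By convention "#"
marks a "don't care" position; the rule set here is an invented
placeholder:

```python
def specificity(condition):
    """Fewer '#' wildcards means more information: a more specific rule."""
    return sum(c != "#" for c in condition)

def respond(message, rules):
    """Default hierarchy: among the rules whose conditions match the
    message, the most specific condition overrides the general default."""
    matching = [(cond, action) for cond, action in rules
                if len(cond) == len(message)
                and all(c == "#" or c == m for c, m in zip(cond, message))]
    return max(matching, key=lambda r: specificity(r[0]))[1]

# Hypothetical rule set: a catch-all default plus sharper exceptions.
rules = [("####", "default"), ("11##", "approach"), ("1100", "flee")]
```

The general "####" rule responds to every message (it is tested in the
widest variety of circumstances), while "1100" fires rarely but with a
precise effect- exactly the trade-off described above.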

This was distributed via the memetics list associated with the
Journal of Memetics - Evolutionary Models of Information Transmission
For information about the journal and the list (e.g. unsubscribing)