From: "Chris Lofting" <ddiamond@ozemail.com.au>
To: <memetics@mmu.ac.uk>
Subject: Re: Request for information - music, memes, and resonance
Date: Mon, 19 Jul 1999 01:16:29 +1000
Sean,
firstly, I suggest having a read of
(a) the abstract at the end of this email (after my signature),
and (b) after that, an article from the net (with an email address to see
more).
With all that data in mind then consider the following (in the context of
memes) ...
The left/right hemisphere distinctions in music processing have been
repeated elsewhere (Note that (a) we are dealing with BIASES and (b) the
left/right hemisphere categorisation seems to be more that of left/right
THREADS which are then woven together neurologically throughout the brain)
With this in mind, the next step is to note that the left/right BIASES are to
the distinctions of whole/parts (left) and relationships (right), the latter
in the form of static/dynamic and strongly context-sensitive, and the former
(left) more self-contained, where everything is identified.
(Note that behaviourally, in a very general sense, there is a bias to males
being more self-contained and females being more context sensitive. I say
this in the context of some remarks in the abstract however in this day of
high education we also need to note that there is a separation occurring
where mind does not necessarily reflect gender; mind being more flexible and
so able to incorporate and so unite both perspectives)
The left/right distinctions made so far suggest a distinction of tonic from
harmonics, where the relational bias of the latter favours right-brain
processing as well as GENERAL processing. For example, in processing
ideograms, there is right hemisphere activity suggesting attempts to scan
the context to aid in determining the correct interpretation for the
ideogram; the ideogram is thus not 'precise' but more an approximation of
meaning and one has to scan the relationships with the surrounding ideograms
to zoom-in to a precise meaning.
Harmonics are also approximations within the fundamental context of the
whole song, which includes the song's key. (The right hemisphere responds better
to harmonic analysis, and this includes colour (harmonics of vision) and
chords (harmonics of a single tone); the left is more single-context, the
right more multi-context. These are BIASES of course; there will be
particular variations on this general theme.)
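As a rough illustration of the "chords as harmonics of a single tone" point, here is a minimal Python sketch of the harmonic series above a fundamental, and of how a nearby equal-tempered chord tone only approximates a harmonic. The 110 Hz fundamental, the note names, and the comparison are my own assumptions for illustration, not anything taken from the studies below.

```python
FUNDAMENTAL = 110.0  # A2, in Hz (an assumed reference pitch)

def harmonic(n, f0=FUNDAMENTAL):
    """Frequency of the nth harmonic of a fundamental f0."""
    return n * f0

def equal_tempered(semitones, ref=FUNDAMENTAL):
    """Frequency `semitones` above a reference pitch in 12-tone equal temperament."""
    return ref * 2 ** (semitones / 12)

# Harmonics 1-5 of A2: roughly A2, A3, E4, A4, C#5
series = [harmonic(n) for n in range(1, 6)]

# An equal-tempered major third two octaves up (28 semitones) only
# approximates the 5th harmonic -- the two differ by a few Hz:
just_third = harmonic(5)              # 550.0 Hz
tempered_third = equal_tempered(28)   # ~554.37 Hz
```

The point of the sketch is simply that tuned chord tones sit *near* the harmonic series rather than on it, which fits the idea of harmonics as approximations resolved by context.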
The emphasis on precision linked to the left hemisphere in other research
suggests that trained musicians would not have to 'search the context' to
get the meaning; their training has encapsulated everything to such a degree
that playing and reading become a habit: stimulus/response. You only need
wide context analysis when you are learning in that it is the source of
feedback. Once you have learnt things you drop the feedback, you no longer
consider your response, you just 'do it'.
If you treat this as a complexity issue then the encapsulation, the
objectification, we see in the left is like an intense strange attractor.
In the context of memes, any music is transmitted as a waveform and as such
can set your emotions resonating, bypassing your consciousness, which often
acts to filter data; consciousness acts like a psychic wall, and the only way
to walk through walls is to use waves.
This music can then be used as a carrier for a carefully designed message --
advertising does this.
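The "music as carrier" idea can be sketched as simple amplitude modulation: a slow "message" envelope riding on an audible carrier tone. This is only an illustrative analogy; the frequencies, sample rate, and function names here are arbitrary assumptions of mine, not anything from the post or the studies below.

```python
import math

SAMPLE_RATE = 8000   # samples per second (assumed)
CARRIER_HZ = 440.0   # the audible tone
MESSAGE_HZ = 2.0     # the slow modulating "message" signal

def modulated_sample(t):
    """One sample of the message-modulated carrier at time t (seconds)."""
    # Envelope in [0, 1], slowly varying at MESSAGE_HZ:
    envelope = 0.5 * (1.0 + math.sin(2 * math.pi * MESSAGE_HZ * t))
    carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
    return envelope * carrier

# One second of the modulated signal:
signal = [modulated_sample(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
```

The listener hears the carrier (the tone), while the slower envelope shapes it; in the analogy, the music is the wave that gets through, and the designed message rides on it.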
In general, we seem to communicate by making distinctions of objects and
relationships and this structuring is applied to all senses, but in
particular to vision and audition.
You can make a code out of this, abstract chains of 'meanings' that are then
particularised in the form of a sense and then particularised further with
words, rhythmic pattern etc.
Since this same code is used in mathematics, the intuitive link between
maths and music is easy to see in that both disciplines use the same
communications method. Furthermore, in the context of precision, music is
better than vision.
There is a little on emergence of scale and categorisation at the end of a
page at my website:
http://www.ozemail.com.au/~ddiamond/onemany.html
best,
Chris.
http://www.ozemail.com.au/~ddiamond
==========
ABSTRACT from the journal "Brain", Vol. 122, No. 1, pp. 75-85, January 1999
The cerebral haemodynamics of music perception
A transcranial Doppler sonography study
Stefan Evers1, Jörn Dannert2, Daniel Rödding1, Günther Rötter2 and E.-Bernd
Ringelstein1
1 Department of Neurology, University of Münster and 2 Department of Music,
University of Dortmund, Germany
Correspondence to: Stefan Evers, MD, Department of Neurology, University of
Münster, Albert-Schweitzer-Str. 33, D-48129 Münster, Germany E-mail:
everss@uni-muenster.de
The perception of music has been investigated by several neurophysiological
and neuroimaging methods. Results from these studies suggest a right
hemisphere dominance for non-musicians and a possible left hemisphere
dominance for musicians. However, inconsistent results have been obtained,
and not all variables have been controlled by the different methods. We
performed a study with functional transcranial Doppler sonography (fTCD) of
the middle cerebral artery to evaluate changes in cerebral blood flow
velocity (CBFV) during different periods of music perception. Twenty-four
healthy right-handed subjects were enrolled and examined during rest and
during listening to periods of music with predominant language, rhythm and
harmony content. The gender, musical experience and mode of listening of the
subjects were chosen as independent factors; the type of music was included
as the variable in repeated measurements. We observed a significant increase
of CBFV in the right hemisphere in non-musicians during harmony perception
but not during rhythm perception; this effect was more pronounced in
females. Language perception was lateralized to the left hemisphere in all
subject groups. Musicians showed increased CBFV values in the left
hemisphere which were independent of the type of stimulus, and background
listeners showed increased CBFV values during harmony perception in the
right hemisphere which were independent of their musical experience. The
time taken to reach the peak of CBFV was significantly longer in
non-musicians when compared with musicians during rhythm and harmony
perception. Pulse rates were significantly decreased in non-musicians during
harmony perception, probably due to a specific relaxation effect in this
subgroup. The resistance index did not show any significant differences,
suggesting only regional changes of small resistance vessels but not of
large arteries. Our fTCD study confirms previous findings of right
hemisphere lateralization for harmony perception in non-musicians. In
addition, we showed that this effect is more pronounced in female subjects
and in background listeners and that the lateralization is delayed in
non-musicians compared with musicians for the perception of rhythm and
harmony stimuli. Our data suggest that musicians and non-musicians have
different strategies to lateralize musical stimuli, with a delayed but
marked right hemisphere lateralization during harmony perception in
non-musicians and an attentive mode of listening contributing to a left
hemisphere lateralization in musicians.
Keywords: music perception; transcranial Doppler sonography; hemispheric
lateralization; cerebral haemodynamics
Abbreviations: CBFV = cerebral blood flow velocity; RI = resistance index;
SPECT = single photon emission tomography; (f)TCD = (functional)
transcranial Doppler sonography
---------- second article -------
From http://musica.cnlm.uci.edu/mrn/V1I2F94.html:
Musical Building Blocks in the Brain
Our own private experience of the world is seamless, a smooth and continuous
flow of sensory impressions and perceptions of objects and events, sights
and sounds. When we see an object, such as a red ball, we do not experience
the shape of the ball separately from its color. And when we hear a violin,
we do not perceive its pitch separately from its timbre. The notes in a
chord are not heard as several individual tones but rather in a more holistic
fashion. Yes, it is possible to learn to pay more attention to one feature
of a composition at the expense of attention to other features. But this
process does not fractionate the sound into all of its separate
constituents, the building blocks of music such as pitch, contour, interval,
harmony, melody, timbre (tone color), and rhythm.
Because our experience is so immediate, clear and effortless, we tend to
take it for granted. However, the integrated nature of our musical and other
experiences constitutes a major puzzle for brain scientists who search for
the answer to how our brains apparently effortlessly meld all of these
aspects of sound into a meaningful whole that presents to us, personally, as
music. A likely answer is that our brains are specialized for music so
that each of music's building blocks is processed by a different part of the
brain. The simultaneous activation of these many special purpose processors
would constitute the holistic experience. In other words, there is no
little neural person in our brains who is listening to the music and then
telling us what it is. Although this type of idea has often been popular, it
leaves us with having to explain how the little brain genie achieves
holistic perception of music, and so explains nothing.
A growing body of research findings supports the first theory: that the
brain is specialized for the building blocks of music. In the first issue of
Musica Research Notes, we reviewed the evidence that the highest level of
the auditory system, the auditory cortex, processes pitch rather than raw
sound frequencies (see "A Note on Pitch", volume 1, number 1, Spring, 1994).
Additionally, there are individual brain cells that process melodic contour,
the pattern of increasing and decreasing notes in music (1). Cells have been
found in the auditory cortex that seem likely to process specific harmonic
relationships, such as the simultaneous presentation of the second and third
harmonics of a note (2). Temporal, including rhythmic, aspects of sound
streams also seem to be handled by certain cells in particular parts of the
auditory cortex (3).
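To make "melodic contour" concrete, here is a small Python sketch (my own illustration, not taken from the cited studies) that reduces a pitch sequence to its up/down/same pattern -- the kind of shape information the contour-sensitive cells are reported to encode, independent of the exact intervals. The MIDI note numbers are an assumed encoding.

```python
def contour(pitches):
    """Return '+', '-', or '=' for each successive interval in a pitch sequence."""
    steps = []
    for prev, cur in zip(pitches, pitches[1:]):
        if cur > prev:
            steps.append('+')
        elif cur < prev:
            steps.append('-')
        else:
            steps.append('=')
    return ''.join(steps)

# Two melodies with different intervals but the same contour:
melody_a = [60, 64, 62, 62, 67]   # C E D D G (MIDI note numbers)
melody_b = [60, 65, 61, 61, 72]
assert contour(melody_a) == contour(melody_b) == '+-=+'
```

The equality at the end is the point: contour abstracts away interval size, which is why it can be treated as a separate building block from pitch.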
Findings from humans who have suffered damage to the auditory cortex by
stroke or by surgery to correct intractable epilepsy are particularly
fascinating. For example, damage to the right hemisphere selectively impairs
the ability to process timbre (4). Also, the processing of melody and rhythm
can be separated by specific brain lesions. Some patients show impaired
discrimination of melodies while they have normal discrimination of rhythms,
and vice versa for lesions in different regions (5). And even different
aspects of the processing of temporal information seem to be handled by
different parts of the auditory cortex, rhythm by the left hemisphere and
beat (meter) by the right hemisphere (6).
These dissociations of the elements of music in neurologically impaired
persons provide strong support for "building block" theory but might be
questioned by some on the grounds that the findings do not come from normal
people. This is not a very strong criticism because such patients can show
completely normal levels of performance on the capabilities that remain. In
any event, there are findings from intact people that support and complement
these neuropsychological findings. It is possible to determine which areas
of the brain are active during various tasks, including listening to music.
One powerful method is to measure increases in the regional distribution of
blood flow to parts of the cerebral cortex because these reflect the
increased metabolic needs of brain cells that are active. In a recent study,
normal subjects were tested in two passive listening conditions, noise
bursts or music matched for sound frequencies, and two active judgement
conditions, comparing the pitch of the first two notes of melodies or the
first and last notes of melodies (7). Listening to melodies produced an
activation of the right temporal (auditory) hemisphere relative to the left
("language") hemisphere. Comparing notes, which also involved short term
memory, also showed a preferential activation of the right auditory cortical
system, plus some other areas of the right hemisphere. These findings
indicate that there are specialized neural substrates in the auditory cortex
of the right hemisphere that process melodies vs. other non-melodic sounds.
Space limitations preclude a more comprehensive review. However, these
examples should suffice to highlight the many types of evidence, from
animals, the neurologically impaired and the normal human, that the brain
contains an organization that is specialized to process the individual
elements of music, the building blocks of music. These findings have
relevance to basic neurobiological problems, to clinical and therapeutic
approaches to treatment and last, but not at all least, to the realization
that music has a deep biological basis.
Footnotes
(1) Weinberger, N.M. & McKenna, T.M. (1988). Sensitivity of single neurons in
auditory cortex to contour: toward a neurophysiology of music perception.
Music Perception, 5:355-390.
Espinoza, I.E. & Gerstein, G.L. (1988). Cortical auditory neuron interactions
during presentation of 3-tone sequences: effective connectivity. Brain
Research, 450: 39-50.
(2) Sutter, M.L., & Schreiner, C.E. (1991). Physiology and topography of
neurons with multipeaked tuning curves in cat primary auditory cortex.
Journal of Neurophysiology, 65: 1207-1226.
(3) Hose, B., Langner, G., & Scheich, H. (1987). Topographic representation
of periodicities in the forebrain of the mynah bird: one map for pitch and
rhythm? Brain Research, 422: 367-373.
Buchfellner, E., Leppelsack, H.-J., Klump, G.M., & Hausler, U. (1989). Gap
detection in the starling (Sturnus vulgaris): II. Coding of gaps by
forebrain neurons. Journal of Comparative Physiology, 164: 539-549.
Ison, J.R., O'Connor, K., Bowen, G.P., & Bocirnea, A. (1991). Temporal
resolution of gaps in noise by the rat is lost with functional
decortication. Behavioral Neuroscience, 105: 33-40.
(4) Samson, S. & Zatorre, R.J. (1994). Contribution of the right temporal
lobe to musical timbre discrimination. Neuropsychologia, 32: 231-240.
(5) Peretz, I., (1990). Processing of local and global musical information
in unilateral brain-damaged patients. Brain, 113: 1185-1205.
(6) For a general review of brain specializations for music see Peretz, I.,
& Morais, J. (1993). Specificity for music. In: F. Boller, & J. Grafman,
(Eds.) Handbook of Neuropsychology, 8: Amsterdam, Elsevier Science
Publishers.
(7) Zatorre, R.J., Evans, A.C. & Meyer, E. (1994). Neural mechanisms
underlying melodic perception and memory for pitch. The Journal of
Neuroscience, 14: 1908-1919.
===============================================================
This was distributed via the memetics list associated with the
Journal of Memetics - Evolutionary Models of Information Transmission
For information about the journal and the list (e.g. unsubscribing)
see: http://www.cpm.mmu.ac.uk/jom-emit