From: Chris Taylor (chris.taylor@ebi.ac.uk)
Date: Wed 17 May 2006 - 09:36:49 GMT
Lol.
Okay, so I wouldn't want to get mired in the detail here, but this 
is the kind of thing I'd love to see more of, in a sense: that we 
(er, well, not me) try to link up the kinds of (really 
horrendously complex) 'attractors' set up in the brain (that'd 
be a mind), manifest (rather cryptically) in neuronal activity, 
to the actual (sensory and/or internally-generated) inputs and 
(motor and/or internally-modifying [of internal informational 
structure]) outputs.
Consider these neural net thingies -- for not much real 
infrastructure, some sort of attractor appears and is then 
shaped, like clay on a wheel, by directed learning of some sort. 
Then this wonderful informational pattern (in a manner similar 
to some of the more complex 'Game of Life' patterns) absorbs 
input patterns and burps out some sort of output before 
'settling' back into a 'standby mode' again.
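To make that absorb-burp-settle picture a bit more concrete, here is 
a minimal toy sketch (my own illustration, not anyone's published 
model; it uses a Hopfield-style attractor net in Python, and the 
size and noise level are arbitrary assumptions): one pattern is 
stored, a corrupted version is presented as input, and repeated 
updates pull the state back onto the stored pattern until nothing 
changes -- the 'settling' I mean.

    # Toy Hopfield-style net: a stored pattern acts as an attractor.
    # A noisy input is absorbed and the state settles back onto the
    # stored pattern over a few asynchronous update sweeps.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 64                                  # number of nodes (units)
    stored = rng.choice([-1, 1], size=N)    # one learned +/-1 pattern

    # Hebbian weights: co-active units reinforce each other.
    W = np.outer(stored, stored).astype(float)
    np.fill_diagonal(W, 0.0)

    # Start from a corrupted version of the pattern (~25% of units flipped).
    state = stored.copy()
    state[rng.choice(N, size=N // 4, replace=False)] *= -1

    # Update units until nothing changes -- i.e. the net has 'settled'.
    for sweep in range(20):
        changed = False
        for i in rng.permutation(N):
            new = 1 if W[i] @ state >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:
            print(f"settled after {sweep + 1} sweep(s)")
            break

    print("recovered the stored pattern:", bool(np.array_equal(state, stored)))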
Presumably this 'settling' takes some time (ms for a neural net?). 
Perhaps our continual consciousness is like trying to settle 
into this kind of low-power state but, just as an orbit is a kind 
of falling where you keep missing the planet, our continual 
thought is like trying to find such a stable state (an ESS?) and 
always missing... Ecologies are like this too -- always trying 
(in a sense) to stabilise but never making it, so irresolvable 
conflicts and semi-stable compromises result.
Maybe the self-reference mirror neuron stuff sets up a kind of 
(very complex) feedback that reduces the likelihood of 
'settling' by keeping thoughts alive in some sense and 
increasing connectivity (now I really am BSing).
The larger the set of nodes in a neural net, the more functionality 
can be compacted into this overall attractor (forgive me, that 
is just the nearest I can get to an appropriate term) that 
'lives' in the nodes and arcs -- the ability, say, to accurately 
classify twenty letters (OCR-style) instead of doing ten well 
and averaging together other (similar) ones (R and B, for example).
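To put a rough number on that bigger-net-more-capacity intuition: 
for a Hopfield-style attractor net the classical estimate is that 
recall holds up for roughly 0.14 x N stored patterns. Here is 
another toy sketch (again my own illustration, with arbitrary 
sizes and noise levels) comparing a small and a larger net asked 
to hold the same dozen random patterns:

    # Toy capacity comparison: store the same twelve random patterns in a
    # small net and a larger one, then check how often a noisy cue settles
    # back onto the right pattern. The larger net typically holds them all;
    # the small one is overloaded and recall degrades (spurious, blended
    # attractors appear). Classical capacity estimate: ~0.14 * N patterns.
    import numpy as np

    rng = np.random.default_rng(1)

    def recall_accuracy(n_units, n_patterns, n_flips=5, sweeps=15):
        patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
        W = (patterns.T @ patterns).astype(float) / n_units  # Hebbian storage
        np.fill_diagonal(W, 0.0)
        correct = 0
        for p in patterns:
            state = p.copy()
            state[rng.choice(n_units, size=n_flips, replace=False)] *= -1
            for _ in range(sweeps):                 # let it settle
                for i in rng.permutation(n_units):  # asynchronous updates
                    state[i] = 1 if W[i] @ state >= 0 else -1
            correct += np.array_equal(state, p)
        return correct / n_patterns

    print("small net  (N=40): ", recall_accuracy(40, 12))
    print("larger net (N=200):", recall_accuracy(200, 12))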
Anyway, I'd love to see more 'bottom up' efforts; obviously not 
literally rebuilding a brain, but through modelling and thought 
experiments etc. Reductionism has been criticised here, but 
rebuilding a system and seeing what 'pops out' will be waaay more 
informative than any amount of storytelling from the top down 
(which is ultimately untestable no matter how well such a model 
might perform).
On frontal lobes, can we see them as 'weaving' new components to 
add into the overall attractor (becoming 'stored programs' or 
habits)? That way repeated stimuli would resonate with some 
component of the existing attractor. I also see that we get 
averaging (compact storage by making Platonic 'kinds', if you 
like) this way -- remember that the most attractive faces are 
the most average, simply the most representative of what a human 
should look like...? _Very_ functional to go for the average when 
trying to produce 'fit' offspring -- how's that for direct 
selection for a feature of this system we all possess!
Can 'resonances' between features/components of that attractor 
in certain states and, say, the premotor cortex result in 
action? (e.g., in a much simplified version, a 20 Hz wave acts 
as a trigger for a stored program that produces a second wave at 
a resonant frequency.) Do senses, processed through the 
appropriate areas, 'add in' components to the overall attractor?
Sleep would appear to be about stopping the input and turning 
off the 'linearisation machinery' (that'd be conscious us) to 
allow some sort of equilibration ('settling'). I dunno 
(clearly...); I'd just like to see some of this stuff get a 
little more joined up. A good model of the basics should start 
to give insights into how we are, and hopefully make explaining 
the evolutionary route to this mind more straightforward.
And finally, while I'm wishing, is _anyone_ _ever_ gonna produce 
even one percent of a satisfactory explanation of why it feels 
like this? Why, despite all this theory of mind blah blah, am I 
not a 'robotic' creature? Frankly I see neither why 'I' have to 
exist to survive (self-awareness and theory of mind could 
perfectly well be a feature of a robotic me imho -- so why am 
'I' selected for -- I don't believe in spandrels), nor why on 
earth 'I' experience being me this way (i.e. how is it that 'I' 
am and that 'I' feel and that it feels like this). I think we'll 
know the right explanations when we hear them; and obviously the 
latter (how 'I' am) is much more difficult than the former (why 
'I' am).
Cheers, Chris.
Derek Gatherer wrote:
> Oh come on Chris, who do you think you are kidding?
> 
> 
>> Keep zooming and you get to the neuron, axon(PULSE)/dendrites(WAVE) 
>> and the
>> synchronisation dynamics of inhibit/excite to the cell soma as well as
>> neuromodulator densities in the synaptic gaps influencing emotional
>> exaggerations/dampening.
> 
> That's just nonsense.  Dendrites are input, axons are output.
> 
> And as for this:
> 
> 
>> Instincts get encoded into dendrite areas etc (and so post synaptic) and
>> allow for context to PUSH the life form and so conserve energy.
> 
> What can I say?  If anybody knew where instincts were encoded, they'd be 
> in Stockholm picking up their Nobel Prize.
> 
> Grumpy (but justifiably so) Derek
> 
> ===============================================================
> This was distributed via the memetics list associated with the
> Journal of Memetics - Evolutionary Models of Information Transmission
> For information about the journal and the list (e.g. unsubscribing)
> see: http://www.cpm.mmu.ac.uk/jom-emit
> 
-- 
~~~~~~~~~~~~~~~~~~~~~~~~
chris.taylor@ebi.ac.uk
http://psidev.sf.net/
~~~~~~~~~~~~~~~~~~~~~~~~