This mobile robot also has no sensors: it has a representation of things in its world, but the symbols used in that representation are grounded as described here:
@article{IwR,
author = "Peter Wallis",
title = "Intention without Representation",
journal = "Philosophical Psychology",
publisher = "Cambridge University Press",
year = 2004,
volume = 2,
month = "June"
}
The input to decision making is energy usage. The Noo Noo's
snout swings with a natural rhythm and, if that is interrupted, the
servos need to work harder. The snout then "searches" and, as a
reactive layer, changes the direction of travel. The map of
our kitchen it contains is effectively a map of places that are bad
for its energy usage.
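The reactive layer described above can be sketched in a few lines. This is a minimal illustration, not the robot's actual controller: the class name, the effort threshold, and the grid-world representation are all assumptions made for the example. The one "sensor" reading is servo effort; when it rises above baseline (the snout's rhythm has been interrupted), the robot marks its current location as costly and turns away.

```python
class NooNoo:
    """Hypothetical sketch of a reactive controller whose only input
    is energy usage (servo effort), with a map of bad places built
    as a side effect of reacting."""

    def __init__(self, effort_threshold=1.5):
        self.effort_threshold = effort_threshold  # assumed: baseline effort is 1.0
        self.heading = (1, 0)   # unit step on an assumed grid
        self.pos = (0, 0)
        self.cost_map = {}      # cell -> count of high-effort events there

    def step(self, servo_effort):
        """One control cycle: effort above threshold means the snout's
        natural swing was interrupted, so record this cell as bad for
        energy usage and change direction of travel."""
        if servo_effort > self.effort_threshold:
            self.cost_map[self.pos] = self.cost_map.get(self.pos, 0) + 1
            dx, dy = self.heading
            self.heading = (-dy, dx)  # reactive "search": simple 90-degree turn
        x, y = self.pos
        dx, dy = self.heading
        self.pos = (x + dx, y + dy)
        return self.pos
```

Note that the map here is never consulted to plan anything; it simply accumulates where effort spiked, which is all the prose above claims the kitchen map amounts to.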
One day I hope to show this to Professor Boden and see if she
thinks it is an implementation of symbol grounding in terms of
metabolism...
@conference{sg2yo,
author = "Peter Wallis",
title = "The Symbol Grounding Problem for Two Year Olds",
booktitle = "Computers \& Philosophy, an International Conference (i-CaP)",
address = "Laval, France",
year = 2006,
note = "Gives an explicit description of grounding symbolic reasoning in terms of previously learnt behaviours",
}
@conference{attIntBel,
author = "Peter Wallis",
title = "Attention, Intention, and the Nature of Believability",
booktitle = "Computers \& Philosophy, an International Conference (i-CaP)",
address = "Laval, France",
year = 2006,
note = "Argues that believable action requires an agent that fits Dennett's intentional stance. (Dennett was in the audience!)"
}