2 Criteria for a Good Model of Agent Learning
The agent will have some language in which to represent its models. In fact, it can only represent its models as expressions in this language. This creates a mapping between expressions in the language and subspaces of possibilities, which corresponds to the distinction between the syntax and semantics of a logical language. In AI parlance this is called a Logical Bias. This fact has several consequences, including those listed below.
- Only some subspaces will be expressible in the language, so the agent may be forced to approximate the actual subspace of possibilities with an expressible one.
- Some subspaces will have several corresponding expressions, some of which will be far more efficient to use than others.
- Although the agent will often know what the space of possibilities is in some theoretical sense, she will often not know which expressions are needed to describe a particular subspace; thus, for all practical purposes, the space of possibilities is unknown and unknowable.
- A global search of possible subspaces is impractical, because the agent has to do this by searching through possible expressions in her language. This involves a different kind of search from that of parameterising a known agent model (a sketch of this difference follows the list).
- If there is any significant cost associated with building the agent model expressions, then some sort of incremental, path-dependent development of agent models is inevitable.
- The agent may come to believe in inadequate or even partially inconsistent expressions. These would be less likely to be successful models, but are not automatically ruled out (depending on the agent language).
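To make the distinction in the last few points concrete, here is a minimal sketch in Python, which is not part of the original argument: a toy possibility space of three Boolean features and a toy agent language whose expressions are conjunctions of feature literals. The features, the language and the target subspace are all invented for illustration. The denotation of an expression is the subspace it picks out, and the best approximation to an inexpressible target subspace is found by a brute-force search through expressions rather than by adjusting the parameters of a fixed model form.

from itertools import product

# The space of possibilities: every assignment to three Boolean features.
FEATURES = ("a", "b", "c")
POSSIBILITIES = [dict(zip(FEATURES, bits))
                 for bits in product([False, True], repeat=3)]

def denotation(expr):
    """Semantics: the subspace of possibilities that an expression picks out.
    An expression is a conjunction of literals, given as (feature, value) pairs."""
    return frozenset(i for i, world in enumerate(POSSIBILITIES)
                     if all(world[f] == v for f, v in expr))

def all_expressions():
    """Syntax: enumerate every expression of this (finite) agent language."""
    choices = [[None, (f, False), (f, True)] for f in FEATURES]
    for literals in product(*choices):
        yield tuple(lit for lit in literals if lit is not None)

# Only some subspaces are expressible in the language.
expressible = {denotation(e) for e in all_expressions()}
print(len(expressible), "of", 2 ** len(POSSIBILITIES), "subspaces are expressible")

# An inexpressible target subspace has to be approximated, and the only way to
# find the approximation is a search through expressions (syntax), not the
# parameterisation of a known model form.
target = frozenset({0, 3, 5})   # arbitrary; not expressible in this language
best = min(all_expressions(), key=lambda e: len(denotation(e) ^ target))
print("best expressible approximation:", best, "->", sorted(denotation(best)))

Even in this tiny example only 27 of the 256 possible subspaces are expressible, so the agent is forced to approximate; in a richer language the corresponding search through expressions, rather than through parameter values, is what makes the space of possibilities practically unknowable.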