
5 Some Example Applications

5.1 Tree-structured expressions for modelling agent states


Here we relax the assumption that an agent has perfect knowledge of its own utility function, so the agent must learn it: making trial models, comparing these with known past information, and constructing new models. Even if an agent knows that more utility is likely to be gained by buying more of a product, the trade-offs between two products can be complex, and the agent has a large function space to search. Further, if the agent is only boundedly rational, its representation of its utility function can affect what conclusions it reaches about the best purchasing policy. A sketch of such a learning cycle is given below.
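
The paper does not give code for this process, so the following Python sketch is only a rough illustration; all names and details (two products x and y, a fixed budget, a remembered history of (x, y, utility) observations) are our own assumptions, not taken from [2]. It represents candidate utility models as expression trees, scores each tree against the remembered observations, and acts on the best-fitting one:

    # A hypothetical sketch of an agent that models its own utility function
    # as expression trees, scores candidate trees against remembered
    # observations, and acts on the best-fitting one.
    import operator
    import random

    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

    def random_tree(depth=3):
        """Grow a random expression tree over the purchase quantities x, y."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(['x', 'y', random.uniform(-1.0, 1.0)])
        return (random.choice(list(OPS)),
                random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x, y):
        """Evaluate an expression tree at a candidate purchase (x, y)."""
        if tree == 'x':
            return x
        if tree == 'y':
            return y
        if isinstance(tree, float):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x, y), evaluate(right, x, y))

    def error(tree, history):
        """Mean squared mismatch between predictions and past utility."""
        return sum((evaluate(tree, x, y) - u) ** 2
                   for x, y, u in history) / len(history)

    def best_model(history, pool_size=50):
        """Trial a pool of random models; keep the one fitting the past best."""
        pool = [random_tree() for _ in range(pool_size)]
        return min(pool, key=lambda t: error(t, history))

    def choose_purchase(model, budget, price_x=1.0, price_y=1.0, samples=100):
        """Pick the affordable purchase the current model rates highest."""
        candidates = []
        for _ in range(samples):
            x = random.uniform(0.0, budget / price_x)
            y = (budget - x * price_x) / price_y
            candidates.append((x, y))
        return max(candidates, key=lambda c: evaluate(model, *c))

In this sketch the depth bound in random_tree plays the role of the maximum complexity of the agent's internal models, and the choice of operators in OPS plays the role of its internal language of representation.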

A model of such an agent, which dynamically develops its own guess at its utility function, shows several interesting features that are absent from the traditional utility optimization approach. For example, some consumers become locked into an inferior policy for considerable periods of time, because their preferred model suggests a course of action that never disconfirms that very model. This might correspond to a consumer who discovers an adequate product early on and then never experiments with others (which may be better); the loop sketched below illustrates the mechanism. Another interesting feature is that the efficiency of the agent's spending depends critically on its internal language of representation and on the maximum complexity of its internal models. For more details see [2].
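
Continuing the hypothetical sketch above (again our own illustration, not the model of [2]), the lock-in effect can be reproduced with a loop in which the agent only revises its model when an outcome disconfirms it; a model that steers purchases into the region where it happens to fit can then persist indefinitely:

    def true_utility(x, y):
        """The real utility, unknown to the agent; y is the better buy."""
        return x + 2.0 * y

    def run(steps=50, budget=10.0, tolerance=0.5):
        # One early trial: the agent spent everything on x and liked the result.
        history = [(budget, 0.0, true_utility(budget, 0.0))]
        model = best_model(history)
        for _ in range(steps):
            x, y = choose_purchase(model, budget)
            u = true_utility(x, y)
            history.append((x, y, u))
            # Revise only if the preferred model is disconfirmed by the outcome;
            # a model that predicts its own recommendations well is never replaced.
            if abs(evaluate(model, x, y) - u) > tolerance:
                model = best_model(history)
        return model, history

A model close to "buy only x" fits the single early observation, recommends spending mostly on x, and is then confirmed by every purchase it recommends, so the agent never discovers that y yields more utility.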

