Modelling Learning as Modelling

6 Some Possible Agent Strategies

The difficult aspect of modelling learning as modelling is to represent the ways in which the agent adapts its model based on the actions it has taken and the input it has received. In this section, we describe one possible representation of the information flows involved. A legitimate and important issue is surely the choice or development of the representation of the agent modelling procedure.

If we are looking at abstract environments with no direct, empirical referent, then the purpose of the representation of the modelling procedure is to show that a particular learning procedure will generate results which an agent would find desirable or, alternatively, that in the simulated environment, some modelling procedures are likely to yield better values of target variables than other learning procedures. This sort of result is not obviously less relevant or important than the standard economic procedure of demonstrating sufficient but not necessary conditions for the existence of equilibrium.

An alternative use of the learning-as-modelling paradigm is to represent aspects of actual economic environments. In these cases, the measure of the goodness of the learning representation is the extent to which the simulation models generate output which conforms statistically or qualitatively to observed outcomes. We recognize that these arguments will carry no weight with those for whom there is no economics without equilibrium. But the developments reported here will, in any case, be of little relevance to such economists.

In figure 2, the shaded box contains elements of the models of learning described in table 1. All of these procedures yield relations between control variables and goals. Apart from the logic and modelling representations, none of the procedures described in table 1 entails any clear distinction between model and memory: in effect, the internal state of the agent reflects the patterns of past successes and failures of different actions. Production systems and logics, however, do clearly distinguish between memory and model as, as we shall see, must any representation of learning as modelling.

The mapping from the internal model into actions is a non-trivial task. Semantic nets, neural networks and genetic algorithms effectively conflate model and action by creating relations only among control variables on the one hand and goals on the other. In the case of neural networks, these relations evolve through changes in the weights attached to network links. Where production systems and logics are concerned, the relations of which a model is composed can be mnemonic and explicit. Simple models can be built up into more complex models, and the way in which model complexity increases can itself become a subject of study.
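
The memory-model distinction can be made concrete in a minimal sketch (all class and field names here are hypothetical illustrations, not drawn from the paper): a production-system agent holds raw episodes in a memory list, while its model is a separate collection of explicit condition-action rules.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    conditions: dict   # conditions of application, e.g. {"demand": "high"}
    relation: str      # mnemonic relation between control variable and goal

@dataclass
class Agent:
    memory: list = field(default_factory=list)  # raw episodes: (state, action, outcome)
    model: list = field(default_factory=list)   # explicit rules, kept apart from memory

    def observe(self, state, action, outcome):
        # memory grows with every episode, whether or not the model changes
        self.memory.append((state, action, outcome))

agent = Agent()
agent.observe({"demand": "high"}, "raise_price", "profit_up")
agent.model.append(Rule({"demand": "high"}, "price -> profit"))
```

A neural network, by contrast, would fold the same episode directly into its weights, leaving no separately inspectable model.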

In figure 3, we depict the flow chart of an algorithm which lends itself readily to implementation as a production system. The natural starting point in this flow chart is the question: are there any appropriate models? Initially, there would be no model other than the universal model or some default model. By definition, only special models can be appropriate bases for action since, without a relevant special model, the universal model says only that anything can happen.

A model is appropriate if its conditions of application are satisfied in the current state of the environment. If there is no appropriate model in this sense, then there must be some means of conjecturing new models. Typically, such conjectures will be formulated or, at least, tested on the basis of past data. Respecification and re-estimation may continue until there is some model which is deemed "good enough" in the circumstances or, simply, until the modeller decides that no more resources are to be devoted to the development of an appropriate model. If there is already at least one model with satisfied conditions of application, then there is no need at this stage to specify additional models. In either case, the agent then chooses a model which is "best" according to some set of criteria.
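
The appropriateness test can be sketched as follows (the dictionary layout and the `score` selection criterion are assumptions made for illustration, not the paper's specification): a model is appropriate when every one of its conditions of application holds in the current state, and the "best" appropriate model is then picked by whatever criterion the agent applies.

```python
def appropriate(models, state):
    """Models whose conditions of application are all satisfied in `state`."""
    return [m for m in models
            if all(state.get(k) == v for k, v in m["conditions"].items())]

models = [
    {"conditions": {"demand": "high"}, "score": 0.8},
    {"conditions": {"demand": "low"}, "score": 0.6},
]
state = {"demand": "high", "season": "winter"}

candidates = appropriate(models, state)
# if candidates were empty, the agent would conjecture new models and test
# them against past data; here it simply selects the "best" candidate
best = max(candidates, key=lambda m: m["score"]) if candidates else None
```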

It would be natural to use the "best" of the appropriate models to forecast the effects on goals of different values of the decision variables and, on the basis of such forecasts, to set the value of the decision variable. Setting that value is, in this context, what we mean by an action.
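
To illustrate the forecast-then-act step (the linear relation below is a placeholder, not a claim about the paper's models): the chosen model forecasts the goal for each candidate value of the decision variable, and the action is the value with the best forecast.

```python
def forecast(model, decision_value):
    # placeholder model: a linear relation between control variable and goal
    return model["slope"] * decision_value + model["intercept"]

best_model = {"slope": -2.0, "intercept": 10.0}   # hypothetical fitted relation
candidate_values = [0, 1, 2, 3]

# the action is the decision-variable value with the best forecast goal
action = max(candidate_values, key=lambda v: forecast(best_model, v))
```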

The next question is whether the action had the intended effect. If not, the model is disconfirmed. But it might be that the same model had previously been used successfully to determine an action. In that case, it would be appropriate to try to distinguish the conditions in which the model succeeded from those in which it failed. This will involve some attempt to specialize the model by adding conditions of application. The resulting specialized model is then subject to the same tests as any other newly specified model.
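
One simple way to specialize a disconfirmed model, sketched here with hypothetical state dictionaries, is to add as new conditions of application any features that were constant across the past successes but differed in the failures.

```python
def specialize(rule_conditions, successes, failures):
    """Tighten a disconfirmed model by adding conditions of application
    that held in every past success but not in the failing episode(s)."""
    new = dict(rule_conditions)
    for key in successes[0]:
        vals = {s.get(key) for s in successes}
        if len(vals) == 1:                       # feature constant across successes
            v = vals.pop()
            if any(f.get(key) != v for f in failures):
                new[key] = v                     # it discriminates success from failure
    return new

successes = [{"demand": "high", "season": "summer"},
             {"demand": "high", "season": "summer"}]
failures = [{"demand": "high", "season": "winter"}]
tightened = specialize({"demand": "high"}, successes, failures)
```

The tightened model is then subject to the same tests as any other newly specified model.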

A model which serves successfully as a guide to action might, in combination with other successful models, be generalizable. One possibility is to look for a meaningful intersection in the models' respective conditions of application, together with some suitable way of combining their definitions and their independent and dependent variables. In this way, we would obtain one model which is applicable in a wider set of conditions, with a domain and range no smaller than those of the constituent models taken together. As indicated in the flow chart in figure 3, combinations of special models should have the effect of reducing the volume of the universal model if they are to be desirable.
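
A hedged sketch of such a combination (the data layout is assumed for illustration): merging two successful models by keeping only their shared conditions of application yields a model that applies in a wider set of states, while pooling their variables keeps the domain and range no smaller than those of the constituents.

```python
def generalize(m1, m2):
    """Combine two successful models: retain only shared conditions of
    application (so the merged model applies more widely) and pool their
    variables (so domain and range cover both constituents)."""
    shared = {k: v for k, v in m1["conditions"].items()
              if m2["conditions"].get(k) == v}
    return {"conditions": shared,
            "variables": sorted(set(m1["variables"]) | set(m2["variables"]))}

m1 = {"conditions": {"demand": "high", "season": "summer"},
      "variables": ["price"]}
m2 = {"conditions": {"demand": "high", "season": "winter"},
      "variables": ["price", "advertising"]}
merged = generalize(m1, m2)
```

Because the merged model applies wherever demand is high, regardless of season, it claims more of the universal model's territory than either constituent alone.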

6.1 - Example strategies for agent model development

Modelling Learning as Modelling - 23 FEB 98