Workshop "Techniques for modelling structural change" Manchester May 20-21, 1996 The potential of microsimulation cum artificial intelligence tools for modelling structural macroeconomic change: a note first draft, May 14, 1996 Ge rard Ballot Universit Panth on-Assas (Paris II) and ERMES-CNRS 1. The objectives A important topic in Economics is the study of the impact of institutional policies on macroeconomic aggregates on one hand, and the size distribution of incomes on the other. Long run effects are specially interesting, notably if growth is endogeneous. A very large part of Government and Administration decisions do not deal with decision on quantities directly, such as the level of the interest rate or the quantity of money, or public expenditures. Even when economic objectives are sought, many decisions consist in changing the rules of the game for the private actors, removing some constraints, setting others, introducing some incentives at the microeconomic level. Macroeconomic models are very ill fitted for the evaluation of the aggregate effects. They have nothing to say about the size distribution of incomes. Adressing these questions implies modelling tools which deal with heterogeneity and represent the institutional structures ( laws...) and individuals'response at a microeconomic level. Complex interactions take place between the agents, and non linearities appear. The Macroeconomic coordination which emerges may be very different from the underlying micro-responses, and from an aggregate model based on representative agents. Moreover decision making in this complex world cannot be modelled by standard optimization techniques, because of the impossibility of making predictions (non probabilistic change, complexity), but should be based on learning. Two modelling tools have been developed to cope with these questions, and we argue that they have a high potential when used together: microsimulation, and artificial intellligence tools for decision making. 2. Microsimulation Microsimulation or "microanalytic simulation" has been invented by G.Orcutt [1961] to study the distributional and the macroeconomic impact of microeconomic policies (Ballot [1987] for an extensive review). A sample of each type of agent is represented explicitly on the computer, with its vector of characteristics. Decision algorithms or probabilities are assigned to the agents. At the end of the period, these characteristics are modified by the realization of the events. Aggregate variables are computed. The models are then reduced scale models of the economy, which naturally also have to be simpler than the real world. In principle, all relations are micro, except some Government decisions (setting the interest rate...). These models can then represent institutions (the rules of the game) explicitly. They also reproduce important features of the world: markets need not clear, and the evolution is open. There is no constraint of a long run equilibrium. These model are neither disequilibrium models nor equilibrium models. They need not refer to an equilibrium concept. However Orcutt's stress on estimating each individual's response from microdata was so exacting that no model of markets was built. He studied only the behavior of individuals, letting aside the behavior of firms, and therefore the interactions between individuals and firms. 
The focus has since been more on studying the effects of policies on the size distribution of incomes, or on longitudinal problems implying the simulation of a cohort over its life cycle (social security). A large set of models has been built by government agencies in the U.S., Australia, Canada, the Nordic countries, or more recently the U.K. (Harding [1996]). The focus is on detailed statistics, and the behavioral content is very poor. Sometimes only the direct impact of detailed institutional measures (tax system...) is modelled; sometimes the static response is estimated, but the dynamic response (over several periods) and the feedbacks through markets are absent. The major conceptual issues are however (a) those relating to the macrofeedbacks, (b) those relating to the modelling of microdecisions, and (c) those concerning the validation. Some models have made progress in one or more of these directions.

(a) Macrofeedbacks. Orcutt has initiated some models (DYNASIM) in which a macromodule is appended to the micro-based part (Orcutt et al. [1977]). This procedure is now often used (see papers in Harding [1996]). However, it does not allow for the interactions on markets and the non-linearities that they may produce. Bennett and Bergmann [1986] have built a fairly complete macromodel with explicit individuals, firms, and banks. It is, in my opinion, the first truly micro-based macromodel. Its defects are that the decisions are fairly ad hoc, and that no sample of firms is used, since only twelve firms are represented. Eliasson [1978] and his team have built a very complete multisectoral macroeconomic model with explicit markets. Only the firms (in four sectors) are individualized, but the majority are real firms at the beginning of the simulations, which avoids erratic behavior and a lot of exit in the first periods.

(b) Microdecisions. An important question is the methodology for designing the operators which modify the characteristics of the agents. There are different choices. (1) Orcutt relies on probabilities and makes little use of microeconomic theory. (2) Fair [1974], at the other extreme, uses optimal-control-based decisions in the theoretical part of his macromodel. However, although he integrates markets, at most two agents of each type are represented. Moreover, the empirical part (Fair [1976]) is a macroeconometric model with estimated macro relations, and is therefore not integrated with the micro part. (3) A more interesting track is taken by Eliasson [1978] in MOSES, in which many decisions have been inspired by a detailed enquiry in Swedish firms. It features bounded rationality, and trial-and-error learning in a Schumpeterian environment. Nelson & Winter [1982] also use boundedly rational rules and satisficing behavior to model technological progress. (4) Ballot ([1981], [1988]) simplifies microeconomic theory, particularly search-theory algorithms for individuals and firms, to model the labour market, introducing some myopia and restricting the information set of agents.

(c) Validation is a third major question. These models are highly non-linear and cannot be approximated by linear relations. The parameters cannot be estimated through the usual inference methods. However, first, some parameters may be estimated on microeconomic data (preferably panel data). Secondly, it is feasible to design and use calibration algorithms which minimize some distance between real and simulated time series. Bennett & Bergmann [1986] give some thought to the problem of setting some 350 parameters.
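As a deliberately naive illustration of such a calibration procedure (the sum-of-squares distance, the set of candidate parameter vectors and the replication over seeds are assumptions of the sketch, not the algorithm discussed next), one could write:

    import random

    def distance(simulated, observed):
        # A simple distance: sum of squared deviations between the two series.
        return sum((s - o) ** 2 for s, o in zip(simulated, observed))

    def calibrate(run_model, observed, candidates, n_seeds=40):
        # run_model(params) is assumed to return a simulated aggregate time series.
        best_params, best_score = None, float("inf")
        for params in candidates:            # e.g. a grid or random parameter draws
            scores = []
            for seed in range(n_seeds):      # replication over pseudo-random seeds
                random.seed(seed)
                scores.append(distance(run_model(params), observed))
            score = sum(scores) / n_seeds    # average over runs to smooth noise
            if score < best_score:
                best_params, best_score = params, score
        return best_params, best_score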
Taymaz [1993] has designed such an algorithm for MOSES; it converges in fewer than 50 runs even with several dozen parameters. Moreover, for each specification of the model, it is advisable to run a fairly large number of simulations while changing only the seed of the pseudo-random number generator, and then to compute average values for some variables. In MOSES, we have observed that the mean value does not change in a significant way after 40 runs (Ballot & Taymaz [1996]). Concerning the scale of the model, Orcutt has noted that samples of 1000 individuals yield results whose quality is not much lower than that of much larger samples. Much more research should be devoted to calibration. However, validation of these structural models should not be assessed only on the basis of goodness of fit. These models are sometimes able to simulate economic processes that experts in the field can recognize as real, because of the integration of many institutional structures and the heterogeneity of agents. For instance, the simulated patterns of mobility on the labour market in Ballot [1988] are very different between demographic groups and across the cycle, and correspond to what we have learned from numerous partial studies. The same can be said for the patterns of inequality measures computed from the simulated data. Process validation is important (see Nelson & Winter [1982] for a similar plea). We will find an example below with the lock-in into a technological paradigm.

These models bring novel results. Bennett & Bergmann [1986] study the net effects of policies which go through complex mechanisms. Eliasson [1991] shows that improving static efficiency can lead to dynamic inefficiency (lower growth rates or crises), because static efficiency imposes strong competition and uniformity among surviving firms. When an endogenous recession occurs, many quasi-identical firms fail; the diversity allowed by static inefficiency dampens the shock. Ballot [1988] shows that internal labour markets which protect incumbent workers from unemployment yield massive aggregate unemployment if some flexibility is not allowed through short-term contracts for a small proportion of workers. New types of adjustment are selected by agents within a set of possible adjustments (downgrading, type of contract, search unemployment, exit). The limit is that the set of possible adjustments is fixed by the model builder, and that there is no institutional innovation. Structural change can occur in some microsimulation models, but it is limited in the long run by the lack of mechanisms to generate rules and institutions.

3. Artificial intelligence tools in microsimulation: artificial world models

I will be fairly idiosyncratic in this section, which is illustrated by the paper proposed for the workshop (Ballot & Taymaz [1996]), since this paper deals with macroeconomics (and distribution), and such models are not numerous (see some models in Sargent [1993]). I presently tend to see AI tools as a means to generate new "objects" such as technologies and rules for taking decisions. A further step, of which I have seen no model, would be the modelling of new formal institutions (a new unemployment insurance system, for instance). However, new macroeconomic coordination patterns can emerge from the interaction of the heterogeneous agents. Decision making by agents in the complex environment of a microsimulation model clearly calls for learning mechanisms. Several tools are available.
I will only mention neural networks, which have been used in an evolutionary macroeconomic model by Langrognet & Merlateau [1994] in which firms learn the size of the R&D department and the proportion of skilled to unskilled workers.

Genetic algorithms (GAs), initiated by Holland [1975], constitute a very flexible tool for modelling changes in technology at the firm level. Ballot & Taymaz [1996] represent a technology as a set of techniques, each alternative having a value written in a binary alphabet, without loss of generality (if there are more than two alternatives, an alternative then takes more than one element). A technology is then a string of zeros and ones (with length 40 in our model). The most productive technology, as defined by the output/labour ratio and the capital/labour ratio, is represented in the same way, and named the global technology. A firm which used all the techniques of the global technology would be technically the most productive. However, no firm knows which technology is the global technology, even if it happened to use it. Firms typically know only a part of the global technology. The productivity of their technology is defined by the degree of correspondence between the global technology and their own. We have used a very simple aggregate distance measure, a weighted sum of the differences in values for each technique.

GAs constitute a tool for representing the search for improvements in technology without knowing the optimum (the global technology), but in a more efficient way than random search, because the search is biased towards better technologies. It is just a tool, but one with several interesting features for an economic analysis of innovation, even though it may appear a very abstract and crude representation of research and innovation from a technical point of view. First, it allows one to distinguish research within the firm (recombining existing technologies: recombination; finding a new technique: mutation) from research which adapts other firms' technologies. Diffusion can be dealt with, hence appropriation problems (patents, licenses) and technology policies towards diffusion. Secondly, the probability of a change can be made to depend on the resources spent (R&D) and on the competence base, with a stress on general human capital in Ballot & Taymaz [1996]. The extent of a change, i.e. the number of techniques changed, can be made to depend on these and other variables.

The productive characteristics of a global technology define a technological paradigm. An important discovery may open the way to a new global technology, and to a whole set of technologies which use part of the techniques of this global technology. For instance, the steam power era constitutes one paradigm, while the electricity era would be another. All the techniques are valued differently in the new paradigm. While improvements within a paradigm represent incremental technical change in our model, the change of paradigm represents radical technical change. The latter constitutes a discontinuity in the rhythm of technical change which has been stressed by Schumpeter [1943] and later students of the history of technical change. This use constitutes a third virtue of GAs. A fourth property is the ability to set starting technologies within the new paradigm which are less efficient than mature technologies within the former paradigm; setting the techniques of a firm's starting technology by random draw does the job.
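A minimal sketch of this representation may help; note that it assumes an unweighted share of matching techniques as the productivity measure, whereas the model uses a weighted sum of differences, and that the mutation rate is a placeholder.

    import random

    LENGTH = 40                      # number of techniques in a technology string

    def new_global_technology():
        # A new paradigm revalues all techniques: draw a new global technology.
        return [random.randint(0, 1) for _ in range(LENGTH)]

    def productivity(tech, global_tech):
        # Here simply the share of techniques matching the global technology
        # (the model itself uses a weighted sum of the differences).
        return sum(t == g for t, g in zip(tech, global_tech)) / LENGTH

    def mutate(tech, rate=0.02):
        # In-house discovery of new techniques (mutation); the rate is a placeholder.
        return [1 - t if random.random() < rate else t for t in tech]

    def recombine(tech_a, tech_b):
        # Recombination of two existing technologies (one-point crossover);
        # imitating another firm's technology can reuse the same operator.
        cut = random.randrange(1, LENGTH)
        return tech_a[:cut] + tech_b[cut:]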
Some new paradigms may even have less potential than the incumbent one, since this potential is unknown. Moving from one paradigm to another is thus risky for a firm in the short run. If the innovator is not followed, its technology does not improve through the incremental innovation yielded by diffusion. If the innovator is followed by some firms, the improvement gets faster and attracts more firms: increasing returns to adoption occur (Arthur [1988]). Very few paradigms among the set of possible paradigms are in use at any one time.

All these features (and some others) are integrated in the paper by Ballot & Taymaz [1996], "Human Capital, Technological Lock-in and Evolutionary Dynamics". The first and main macroeconomic result is the possibility that all or most of the firms lock into a paradigm for a long time, instead of moving progressively from one paradigm to a superior one. The second result is the possibility that firms distribute themselves between a small number of paradigms (usually two) instead of one, and lock into these. Increasing returns to adoption explain the lock-in. These three patterns of evolution have very different macroeconomic consequences. We have computed the average statistics over 80 runs. The growth rate of manufacturing output is much higher in the smooth-transition case than in the lock-in patterns, since in the latter there are decreasing returns when the firms get close to the global technology of a given paradigm; growth can stop. The puzzling result of the model is that a random event may be sufficient to tilt an otherwise identical economy into any of the three patterns. Differences in growth rates between nations can thus be caused by historical or small economic events. The paper studies different technology policies to investigate whether lock-in can be avoided with certainty. It cannot, but its probability can be diminished. A third result is that, contrary to former models of lock-in, the model has a structure rich enough, thanks to the heterogeneity of firms, to allow for lock-out in the long run. The model then mimics history better. A fourth result is that diffusion is a necessary feature for obtaining a change in the dominant paradigm (Ballot & Taymaz [1994]). A fifth result, related to the time needed to build the competences to innovate or imitate and the equipment incorporating the new technologies, is that the change from one paradigm to another takes years at the level of an economy. Punctuated equilibria, in the sense of long periods of stability separated by short periods of disorderly transitions (Somit & Peterson [1989]), do not seem to occur, although more work should be done on the topic. GAs thus appear as a flexible tool to model important aspects of structural change.

Classifier systems, also initiated by Holland [1976], offer broad possibilities for modelling learning. Agents create new decision rules on the basis of observed past performance, with GAs as the tool for generating these new rules. A classifier system is a list of "if-then" statements (or rules) called classifiers, which map conditions into actions, together with a set of algorithms which evaluate the performance of the rules and generate new ones. Rule discovery is not random but biased by the system's accumulated experience. A classifier system does not require that the firm build a causal theory of the workings of the economy in order to take decisions, as the rational expectations principle would require, an impossible task in a complex world. It does not even require consistency: some rules may contradict one another.
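A minimal sketch of such a classifier system, with hypothetical condition and action labels and a simple strength-updating rule (the generation of new rules by a GA is omitted), could look as follows; it is not the implementation used in the model.

    import random
    from dataclasses import dataclass

    @dataclass
    class Classifier:
        condition: str        # hypothetical condition label, e.g. "profit_falling"
        action: str           # hypothetical action label, e.g. "raise_radical_RD"
        strength: float = 1.0

    def choose_action(rules, condition):
        # Among matching rules, pick one with probability proportional to strength;
        # contradictory rules can coexist, the weaker ones simply fire less often.
        matching = [r for r in rules if r.condition == condition]
        if not matching:
            return None
        pick = random.uniform(0, sum(r.strength for r in matching))
        for r in matching:
            pick -= r.strength
            if pick <= 0:
                return r
        return matching[-1]

    def update_strength(rule, payoff, learning_rate=0.1):
        # The strength of the rule that fired moves toward its realized payoff.
        rule.strength += learning_rate * (payoff - rule.strength)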
In the model described above, Ballot & Taymaz [1995] have introduced classifier systems to handle in a more satisfactory way the allocation of resources by firms among the different investments (incremental R&D, radical R&D, general training, specific training, fixed assets). Before, fixed parameters were used to allocate these resources, and this unrealistically stubborn behavior could cause many failures and influence macroeconomic outcomes. We also introduced three decision types which firms could adopt (and change): simple (rule of thumb), complex (inspired by economic calculus), and imitation. Classifiers were used to modify the parameters of the rules which constitute a decision type. Such a change in a parameter already requires rules of change, which are generated by the classifier system. The limit of the model is then that the decision type itself is not generated by a classifier system of a higher order, but set by the model builder. The results, which are preliminary, show that learning by classifiers is compatible with successful coordination in the economy, and with aggregate growth. Diversity of rules among firms seems a condition for success in the long run. It is achieved endogenously, but if identical rules are imposed, the economy breaks down. A fundamental thesis of evolutionary theory is thereby validated. It is interesting that diversity is sustained in spite of the superiority of some rules, contrary to what would happen in a very competitive framework. Finally, complex rules do not seem to do as well as simple rules, an unexpected result.

These results suggest that classifiers can be used to study structural change in macroeconomics, since it is easy to simulate various changes in incentives and different behavioral assumptions, and to examine the (non-)coordination of the economy. Some limits may come from the randomness in the generation of new rules. Intelligent agents improve their rules in a more systematic way: they know more than the performance of their rules, and they understand something about the mechanism by which a rule yields this performance. Yet they understand much less than the true model of the world that is assumed by the dominant school of rational expectations.

Selected bibliography

Arthur W.B. [1988] "Competing technologies: an overview", in G. Dosi et al. (eds), Technical Change and Economic Theory. London: Pinter.
Ballot G. [1981] Marché du travail et dynamique de la répartition des revenus salariaux. Thèse d'Etat, Université Paris-X, mimeo.
Ballot G. [1987] Rapport sur la modélisation microanalytique. Study for the Commissariat Général du Plan, Paris. Mimeo, 149 p.
Ballot G. [1988] "Changements de régulation dans un modèle du marché du travail", in D. Kessler (ed.), Economie sociale. Paris: CNRS Press.
Ballot G. & Taymaz E. [1994] "The Dynamics of Firms in a Micro-to-Macro Model of Evolutionary Growth", forthcoming in the Journal of Evolutionary Economics.
Ballot G. & Taymaz E. [1995] "Firms, rules generation, and macroeconomic effects in an evolutionary model", 4th International Conference on the Cognitive Foundations of Economics and Management, Ecole Normale Supérieure, Cachan, France, September 6-8, mimeo.
Ballot G. & Taymaz E. [1996] Human Capital, Technological Lock-in and Evolutionary Dynamics. Mimeo.
Bennett R.L. & Bergmann B.R. [1986] A Microsimulated Transactions Model of the United States Economy. Baltimore: The Johns Hopkins University Press.
Eliasson G. (ed.) [1978] A Micro-to-Macro Model of the Swedish Economy. IUI Conference Report. Stockholm: Almqvist & Wiksell.
Eliasson G. [1991] "Modelling the experimentally organized economy", Journal of Economic Behavior and Organization 16, 163-182.
Fair R. [1974] A Model of Macroeconomic Activity. Vol. 1: The Theoretical Model. Cambridge, Mass.: Ballinger.
Fair R. [1976] A Model of Macroeconomic Activity. Vol. 2: The Empirical Model. Cambridge, Mass.: Ballinger.
Harding A. (ed.) [1996] Microsimulation and Public Policy. Amsterdam: North-Holland.
Holland J. [1975] Adaptation in Natural and Artificial Systems. Ann Arbor: The University of Michigan Press.
Holland J. [1976] "Adaptation", in R. Rosen and F.M. Snell (eds), Progress in Theoretical Biology IV. New York: Academic Press.
Langrognet E. & Merlateau M.-P. [1994] An Evolutionary Model with Human Capital. ERMES working paper 94-07.
Nelson R.R. & Winter S.G. [1982] An Evolutionary Theory of Economic Change. Cambridge, Mass.: Belknap Press of Harvard University Press.
Orcutt G. et al. [1961] Microanalysis of Socioeconomic Systems: A Simulation Study. New York: Harper & Row.
Orcutt G. et al. [1977] Policy Exploration through Microanalytic Simulation. Washington, D.C.: The Urban Institute.
Sargent T. [1993] Bounded Rationality in Macroeconomics. Oxford: Clarendon Press.
Somit A. & Peterson S.A. (eds) [1989] The Dynamics of Evolution: The Punctuated Equilibrium Debate in the Natural and Social Sciences. Ithaca: Cornell University Press.
Taymaz E. [1993] Calibration Algorithms for Microanalytic Models. ERMES working paper 93-06.