A more general model of cooperation based on reinforcement learning: Alignment and Integration of the Bush-Mosteller and the Roth-Erev model
By: Andreas Flache and Michael W. Macy
Date: 31st March/1st April 2003
CPM Report No.: CPM-03-116
Analytical game theory has developed the Nash equilibrium as a theoretical tool for the analysis of cooperation and conflict in interdependent decision making. The indeterminacy and demanding rationality assumptions of the Nash equilibrium have led cognitive game theorists to explore learning-theoretic models of behavior. Two prominent examples are the Bush-Mosteller stochastic learning model and the Roth-Erev payoff-matching model. We align and integrate the two models as special cases of a General Reinforcement Learning Model. Both models predict stochastic collusion as a backward-looking solution to the problem of cooperation in social dilemmas, based on a random walk into a self-reinforcing cooperative equilibrium. The integration also uncovers hidden assumptions that constrain the generality of the theoretical derivations. Specifically, Roth and Erev assume a "Power Law of Learning" - the curious but plausible tendency for learning to diminish with success and intensify with failure, which we call "fixation." We use computer simulation to explore the effects of fixation on stochastic collusion in three social dilemma games. The analysis shows how the integration of alternative models can uncover underlying principles and lead to a more general theory.
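To make the two update rules concrete, the following is a minimal Python sketch, not the authors' code; the payoff matrix, aspiration level, learning rate, and initial propensities are illustrative assumptions rather than values from the paper. The Bush-Mosteller rule shifts the probability of the action just taken toward 1 or 0 depending on whether the payoff exceeds an aspiration level, while the Roth-Erev rule accumulates payoffs as propensities and matches choice probabilities to them, so that learning slows as propensities grow - the "fixation" the abstract describes.

    import random

    # Illustrative Prisoner's Dilemma payoffs (hypothetical values, T > R > P > S).
    PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 4),
          ('D', 'C'): (4, 0), ('D', 'D'): (1, 1)}

    def bm_update(p_coop, action, payoff, aspiration=2.0, rate=0.5):
        # Bush-Mosteller: the stimulus is the payoff relative to an aspiration
        # level, scaled into [-1, 1]. Satisfaction (s >= 0) reinforces the
        # action just taken; dissatisfaction (s < 0) inhibits it.
        s = max(-1.0, min(1.0, (payoff - aspiration) / 2.0))
        p = p_coop if action == 'C' else 1.0 - p_coop  # prob. of action taken
        p += (1.0 - p) * rate * s if s >= 0 else p * rate * s
        return p if action == 'C' else 1.0 - p

    def re_choose(props):
        # Roth-Erev payoff matching: choose each action with probability
        # proportional to its accumulated propensity.
        total = props['C'] + props['D']
        return 'C' if random.random() < props['C'] / total else 'D'

    def re_update(props, action, payoff):
        # Propensities accumulate (nonnegative) payoffs; as the total grows,
        # each new payoff shifts the probabilities less, so learning slows
        # with success - the "Power Law of Learning" (fixation).
        props[action] += payoff
        return props

    # Two Bush-Mosteller learners: repeated play is a random walk that can
    # lock into the self-reinforcing cooperative equilibrium (stochastic
    # collusion).
    random.seed(1)
    p1 = p2 = 0.5
    for _ in range(5000):
        a1 = 'C' if random.random() < p1 else 'D'
        a2 = 'C' if random.random() < p2 else 'D'
        u1, u2 = PD[(a1, a2)]
        p1, p2 = bm_update(p1, a1, u1), bm_update(p2, a2, u2)
    print(f"Bush-Mosteller P(cooperate): {p1:.3f}, {p2:.3f}")

    # Two Roth-Erev learners with flat initial propensities.
    q1 = {'C': 1.0, 'D': 1.0}
    q2 = {'C': 1.0, 'D': 1.0}
    for _ in range(5000):
        a1, a2 = re_choose(q1), re_choose(q2)
        u1, u2 = PD[(a1, a2)]
        re_update(q1, a1, u1)
        re_update(q2, a2, u2)
    print(f"Roth-Erev P(cooperate): {q1['C'] / sum(q1.values()):.3f}, "
          f"{q2['C'] / sum(q2.values()):.3f}")

Under these assumed parameters the two rules differ exactly where the abstract says they do: the Bush-Mosteller learner's step size stays constant, while the Roth-Erev learner's effective step size shrinks as payoffs accumulate.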
Accessible as: