
Economic Game Theory

In the public good game, players can contribute nothing, everything, or anything in between to the public good. Because contributions are continuous rather than all-or-nothing, putting people in a room and having them play the game for real money yields an extremely rich quantitative data set on their behavior.
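
For concreteness, here is a minimal sketch of the payoff calculation in a linear public good game. The endowment of 20, multiplier of 1.6, and group of four players are illustrative values I have chosen, not parameters from any particular experiment.

```python
def public_good_payoffs(contributions, endowment=20.0, multiplier=1.6):
    """Payoffs in a linear public good game.

    Each player keeps whatever they do not contribute; the pooled
    contributions are multiplied and shared equally among all players.
    With 1 < multiplier < group size, contributing nothing is the
    individually optimal strategy even though everyone contributing
    everything maximizes the group's total payoff.
    """
    n = len(contributions)
    public_share = multiplier * sum(contributions) / n
    return [endowment - c + public_share for c in contributions]

# Illustrative example: one free-rider among three full contributors.
print(public_good_payoffs([20, 20, 20, 0]))   # -> [24.0, 24.0, 24.0, 44.0]
print(public_good_payoffs([20, 20, 20, 20]))  # -> [32.0, 32.0, 32.0, 32.0]
```

The free-rider earns the most in any given group, yet universal cooperation pays every player more than universal defection: this tension is what makes the observed contribution data interesting.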

This data has been hard to understand using traditional economic models. Strict economic rationality predicts that no cooperation takes place, but more realistic models also struggle to explain all aspects of the data. For example, when the game is repeated multiple times, levels of cooperation fall, suggesting that learning is taking place. However, when the game is repeated for a larger number of rounds, the decline in cooperation is slower: it is hard to believe that people learn more slowly simply because they are given more time. Moreover, if the game is “restarted” at the end, levels of cooperation jump back up.

Reciprocity or “tit-for-tat” is another mooted explanation for cooperation. This can be eliminated from the experimental design using a “strangers” treatment, in which players encounter different opponents each time. Cooperation does not fall dramatically in the way that the reciprocity hypothesis would predict.

One of the most interesting observations of human cooperation comes from the one-shot Prisoner’s Dilemma. Shafir and Tversky (1992) found that 3% of subjects cooperate when told in advance that their partner had defected: this presumably represents the “noise” or “error” rate. 16% cooperate when told their partner has cooperated, showing some reciprocity. Strikingly, 37% cooperate when they are not told their partner’s choice. According to standard expected utility, players know that their partner will either cooperate or defect, so the cooperation rate should lie somewhere between 3% and 16%. It clearly doesn’t: instead, players act as though they could influence their partner’s actions, even though they know that they can’t. Shafir and Tversky (1992) called this “quasi-magical thinking”.
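
To spell out the prediction being violated (a sketch of the standard argument, with 3% and 16% taken from the figures above, and p denoting a player’s subjective probability that the partner cooperated):

```latex
% Under standard expected utility, behaviour when the partner's choice
% is unknown should be a probability mixture of behaviour in the two
% known cases:
\Pr(\text{cooperate} \mid \text{unknown})
  \;=\; p \times 0.16 \;+\; (1 - p) \times 0.03
  \;\in\; [0.03,\; 0.16] \qquad \text{for all } p \in [0, 1],
% which cannot reach the observed 37%.
```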

I developed a novel mathematical model to explain this high observed cooperation rate of 37%, as well as other puzzling quantitative data on the public good game. It is based on the fact that humans find it very difficult to tell the difference between correlation and causation. Through quasi-magical thinking, they treat every correlation as though it were causal, even when they know it is not.

There is, however, a genuine correlation between what I myself would do and what somebody else is likely to do. After all, I am just one more data point drawn from the same population. I incorporated this into a Bayesian learning model where one’s own hypothesized choice is treated in the same way as any other observed data point. A player might consider defecting, but then ask themselves “what if everybody else thought like that?” and decide to cooperate instead. This thought process was formally modeled as maximizing conditional expected utility. My model was extremely successful in explaining a broad range of data that had previously been puzzling.
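
To illustrate the kind of calculation involved, here is a minimal sketch (not the model from the paper) of conditional expected utility with a Beta prior over the population’s propensity to cooperate, where the player’s own hypothesized choice is counted as one more data point before predicting the partner’s behavior. The payoff values and prior are invented for illustration.

```python
def conditional_expected_utility(my_action, payoffs, prior=(1.0, 1.0)):
    """Expected payoff of my_action, conditioning on my own choice as data.

    prior   : Beta(alpha, beta) prior over the population's cooperation rate.
    payoffs : dict mapping (my_action, partner_action) to my payoff.
    The player's own hypothesized action is added to the data as one more
    observation before forming the posterior predictive probability that
    the partner cooperates -- this is the quasi-magical step.
    """
    alpha, beta = prior
    if my_action == "C":
        alpha += 1
    else:
        beta += 1
    p_partner_cooperates = alpha / (alpha + beta)  # posterior predictive
    return (p_partner_cooperates * payoffs[(my_action, "C")]
            + (1 - p_partner_cooperates) * payoffs[(my_action, "D")])

# Invented prisoner's dilemma payoffs to the row player
# (temptation > reward > punishment > sucker).
payoffs = {("C", "C"): 3.0, ("C", "D"): 0.0, ("D", "C"): 4.0, ("D", "D"): 0.5}

for action in ("C", "D"):
    print(action, conditional_expected_utility(action, payoffs))
# -> C 2.0    (cooperation wins once one's own choice is treated as data)
# -> D 1.67
```

With these invented numbers, defection dominates under ordinary expected utility for any fixed belief about the partner, yet cooperation has the higher conditional expected utility once one’s own hypothetical choice is folded into the prediction of the partner’s behavior.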

Publications:

  • Masel, J. (2007). A Bayesian model of quasi-magical thinking can explain observed co-operation in the public good game. J. Econ. Behav. Organ., 64, 216-231.