Abstract
We consider learning in games that are misspecified, in the sense that players are unable to learn the true probability distribution over outcomes. Under misspecification, Bayes' rule need not converge to the model that leads to actions with the highest objective payoff among the models the player subjectively admits. From an evolutionary perspective, this renders a population of Bayesians vulnerable to invasion. Drawing on the machine learning literature, we show that learning rules that outperform Bayes' rule suggest a new solution concept for misspecified games: misspecified Nash equilibrium.
| Original language | English |
| --- | --- |
| Publication status | Published - 2020 |