== Prisoner's dilemma ==
{{Payoff matrix | Name = Prisoner's Dilemma | 2L = Cooperate | 2R = Defect | 1U = Cooperate | UL = 3, 3 | UR = 1, 4 | 1D = Defect | DL = 4, 1 | DR = 2, 2 }}

A common model of [[altruism]] and social cooperation is the [[Prisoner's dilemma]]. Here a group of players would collectively be better off if they could play ''Cooperate'', but since ''Defect'' fares better, each individual player has an incentive to play ''Defect''. One solution to this problem is to introduce the possibility of retaliation by having individuals play the game repeatedly against the same opponent. In the so-called ''[[repeated game|iterated]]'' Prisoner's dilemma, the same two individuals play the prisoner's dilemma over and over. While the Prisoner's dilemma has only two strategies (''Cooperate'' and ''Defect''), the iterated Prisoner's dilemma has a huge number of possible strategies. Since an individual can have a different contingency plan for each history and the game may be repeated an indefinite number of times, there may in fact be an infinite number of such contingency plans.

Three simple contingency plans which have received substantial attention are ''Always Defect'', ''Always Cooperate'', and ''[[Tit for Tat]]''. The first two strategies do the same thing regardless of the other player's actions, while the latter responds on the next round by doing what was done to it on the previous round: it responds to ''Cooperate'' with ''Cooperate'' and to ''Defect'' with ''Defect''.

If the entire population plays ''Tit-for-Tat'' and a mutant arises who plays ''Always Defect'', ''Tit-for-Tat'' will outperform ''Always Defect'' as long as the mutants remain rare, so the proportion of mutants is kept small. ''Tit for Tat'' is therefore an ESS, ''with respect to '''only''' these two strategies''. On the other hand, an island of ''Always Defect'' players will be stable against the invasion of a few ''Tit-for-Tat'' players, but not against a large number of them.<ref>{{cite book |author=Axelrod, Robert |author-link=Robert Axelrod (political scientist) |title=The Evolution of Cooperation |year=1984 |isbn=0-465-02121-2 |title-link=The Evolution of Cooperation |publisher=Basic Books }}</ref>

If we introduce ''Always Cooperate'', a population of ''Tit-for-Tat'' is no longer an ESS. Since a population of ''Tit-for-Tat'' players always cooperates, the strategy ''Always Cooperate'' behaves identically in this population. As a result, a mutant who plays ''Always Cooperate'' will not be eliminated. However, even though a population of ''Always Cooperate'' and ''Tit-for-Tat'' can coexist, if there is a small percentage of the population that plays ''Always Defect'', the selective pressure is against ''Always Cooperate'' and in favour of ''Tit-for-Tat''. This is because cooperating yields a lower payoff than defecting when the opponent defects. This demonstrates the difficulties in applying the formal definition of an ESS to games with large strategy spaces, and has motivated some to consider alternatives.
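A small simulation can make the invasion arguments above concrete. The following Python sketch is not part of the article: the 10-round match length, the population mixes, and the strategy and function names are illustrative assumptions, while the payoffs are taken from the matrix in this section. It computes average per-round payoffs and compares how ''Tit-for-Tat'', ''Always Defect'', and ''Always Cooperate'' fare against opponents drawn from a mixed population.

<syntaxhighlight lang="python">
# Illustrative sketch (assumptions): payoffs follow the matrix in this section;
# the 10-round matches and the population mixes below are arbitrary demonstration choices.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (1, 4),
          ('D', 'C'): (4, 1), ('D', 'D'): (2, 2)}
ROUNDS = 10

def always_cooperate(opp_history):
    return 'C'                      # ignores the opponent's history

def always_defect(opp_history):
    return 'D'                      # ignores the opponent's history

def tit_for_tat(opp_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return opp_history[-1] if opp_history else 'C'

def match(s1, s2, rounds=ROUNDS):
    """Average per-round payoff to each player when s1 repeatedly plays s2."""
    h1, h2 = [], []                 # moves played so far by player 1 and player 2
    p1 = p2 = 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)     # each strategy sees the opponent's history
        r1, r2 = PAYOFF[(m1, m2)]
        p1 += r1
        p2 += r2
        h1.append(m1)
        h2.append(m2)
    return p1 / rounds, p2 / rounds

def expected_payoffs(population):
    """Expected payoff of each strategy against opponents drawn from the population.

    population maps a name to a (strategy, frequency) pair.
    """
    return {name: sum(freq * match(strat, other)[0]
                      for other, freq in population.values())
            for name, (strat, _) in population.items()}

# A rare Always Defect mutant earns less than the Tit-for-Tat residents ...
print(expected_payoffs({'TFT': (tit_for_tat, 0.99),
                        'ALLD': (always_defect, 0.01)}))

# ... while a few Tit-for-Tat players cannot invade Always Defect,
# but a sufficiently large group of them can:
print(expected_payoffs({'ALLD': (always_defect, 0.99),
                        'TFT': (tit_for_tat, 0.01)}))
print(expected_payoffs({'ALLD': (always_defect, 0.70),
                        'TFT': (tit_for_tat, 0.30)}))

# Always Cooperate earns the same as Tit-for-Tat when everyone cooperates,
# but falls behind Tit-for-Tat once a few Always Defect players are present:
print(expected_payoffs({'TFT': (tit_for_tat, 0.50),
                        'ALLC': (always_cooperate, 0.45),
                        'ALLD': (always_defect, 0.05)}))
</syntaxhighlight>

With a short match length such as the 10 rounds assumed here, the printed payoffs reproduce the pattern described above: ''Tit-for-Tat'' resists a rare ''Always Defect'' mutant, ''Always Defect'' resists a few ''Tit-for-Tat'' players but not many, and ''Always Cooperate'' matches ''Tit-for-Tat'' only until defectors appear.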