
Strategy (game theory)

From Wikipedia, the free encyclopedia

In game theory, a move, action, or play is any one of the options which a player can choose in a setting where the optimal outcome depends not only on their own actions but also on the actions of others.[1] The discipline is mainly concerned with how the action of a player in a game affects the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy and Battleship.[2]

The term strategy is typically used to mean a complete algorithm for playing a game, telling a player what to do in every possible situation. A player's strategy determines the action the player will take at any stage of the game. However, the idea of a strategy is often confused or conflated with that of a move or action, because of the correspondence between moves and pure strategies in most games: for any move X, "always play move X" is a valid strategy, and as a result every move can also be considered a strategy. Other authors treat strategies as a different type of object from actions, and therefore as distinct.

It is helpful to think of a "strategy" as a list of directions and a "move" as a single step along that list. A strategy is chosen based on the payoff or outcome of each action: the goal of each agent is to consider their payoff given each possible action of a competitor. For example, competitor A can assume competitor B enters the market and compare the payoffs it receives from entering versus not entering. A then assumes competitor B does not enter and again compares the payoffs from entering versus not entering. This technique can identify dominant strategies: actions a player can take that are their best choice no matter what the competitor does.
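A minimal sketch of this comparison in Python; the market-entry game and its payoff numbers are hypothetical and illustrative only, not taken from the text:

```python
# Hypothetical payoffs for Competitor A (illustrative numbers only).
# payoffs[a_action][b_action] = Competitor A's payoff.
payoffs = {
    "enter":    {"B enters": 2, "B stays out": 5},
    "stay out": {"B enters": 0, "B stays out": 1},
}

def is_dominant(action):
    """True if `action` gives Competitor A at least as high a payoff as any
    other action, whatever Competitor B does."""
    return all(
        payoffs[action][b] >= payoffs[other][b]
        for other in payoffs if other != action
        for b in payoffs[action]
    )

for a in payoffs:
    print(a, "is dominant:", is_dominant(a))
# Here "enter" is dominant: it is the better reply whether or not B enters.
```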

A strategy profile (sometimes called a strategy combination) is a set of strategies for all players which fully specifies all actions in a game. A strategy profile must include one and only one strategy for every player.

Strategy set


A player's strategy set defines what strategies are available for them to play.

A player has a finite strategy set if they have a finite number of discrete strategies available to them. For instance, a game of rock paper scissors comprises a single move by each player—and each player's move is made without knowledge of the other's, not as a response—so each player has the finite strategy set {rock, paper, scissors}.

A strategy set is infinite otherwise. For instance, the cake cutting game has a bounded continuum of strategies in the strategy set {cut anywhere between zero percent and 100 percent of the cake}.

In a dynamic game, a game played over a series of stages, the strategy set consists of the possible rules a player could give to a robot or agent on how to play the game. For instance, in the ultimatum game, the strategy set for the second player would consist of every possible rule for which offers to accept and which to reject.

In a Bayesian game, a game in which players have incomplete information about one another, the strategy set is similar to that in a dynamic game. It consists of rules for what action to take for any possible private information.

Choosing a strategy set


In applied game theory, the definition of the strategy sets is an important part of the art of making a game simultaneously solvable and meaningful. The game theorist can use knowledge of the overall problem, that is, the friction between two or more players, to limit the strategy spaces and ease the solution.

For instance, strictly speaking, in the ultimatum game a player can have strategies such as: reject offers of ($1, $3, $5, ..., $19), accept offers of ($0, $2, $4, ..., $20). Including all such strategies makes for a very large strategy space and a somewhat difficult problem. A game theorist might instead believe they can limit the strategy set to: {Reject any offer ≤ x, accept any offer > x; for x in ($0, $1, $2, ..., $20)}.
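A small sketch of this reduction, assuming offers come in whole dollars from $0 to $20, so the responder faces 21 possible offers:

```python
# Unrestricted responder strategies: any accept/reject rule over the 21
# possible whole-dollar offers, versus the 21 threshold rules of the form
# "reject any offer <= x, accept any offer > x".
offers = list(range(21))                       # $0, $1, ..., $20

full_strategy_space = 2 ** len(offers)         # every accept/reject labelling
threshold_strategies = [
    {offer: offer > x for offer in offers}     # one rule per threshold x
    for x in offers
]

print(full_strategy_space)                     # 2097152
print(len(threshold_strategies))               # 21
```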

Pure and mixed strategies


A pure strategy provides a complete definition of how a player will play a game. A pure strategy can be thought of as a single concrete plan covering every observation the player might make during the course of play. In particular, it determines the move a player will make for any situation they could face. A player's strategy set is the set of pure strategies available to that player.

A mixed strategy is an assignment of a probability to each pure strategy. Mixed strategies are often used when no single pure strategy can be rationally prescribed for the game; randomizing allows a player to select a pure strategy at random. (See the following section for an illustration.) Since probabilities are continuous, there are infinitely many mixed strategies available to a player. And since probabilities are being assigned to strategies, the payoff a player receives in a given scenario must be described as an "expected payoff".
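For illustration, a short sketch of an expected payoff calculation under mixed strategies, using the standard Matching Pennies payoffs for the row player (+1 when the two coins match, -1 otherwise); the 50/50 and 80/20 mixes are arbitrary examples:

```python
# Row player's payoffs in Matching Pennies: +1 if the coins match, -1 otherwise.
row_payoff = {
    ("Heads", "Heads"): 1, ("Heads", "Tails"): -1,
    ("Tails", "Heads"): -1, ("Tails", "Tails"): 1,
}

def expected_payoff(row_mix, col_mix):
    """Expected payoff to the row player when both players randomize
    independently according to the given probability assignments."""
    return sum(
        p_r * p_c * row_payoff[(r, c)]
        for r, p_r in row_mix.items()
        for c, p_c in col_mix.items()
    )

# Against a 50/50 opponent, every mix for the row player earns 0 in expectation.
print(expected_payoff({"Heads": 0.5, "Tails": 0.5}, {"Heads": 0.5, "Tails": 0.5}))  # 0.0
print(expected_payoff({"Heads": 0.8, "Tails": 0.2}, {"Heads": 0.5, "Tails": 0.5}))  # 0.0
```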

Of course, one can regard a pure strategy as a degenerate case of a mixed strategy, in which that particular pure strategy is selected with probability 1 and every other strategy with probability 0.

A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to every pure strategy. (Totally mixed strategies are important for equilibrium refinement such as trembling hand perfect equilibrium.)

Mixed strategy


Illustration


In a soccer penalty kick, the kicker must choose whether to kick to the right or left side of the goal, and simultaneously the goalie must decide which way to block it. Also, the kicker has a direction they are best at shooting, which is left if they are right-footed. The matrix for the soccer game illustrates this situation, a simplified form of the game studied by Chiappori, Levitt, and Groseclose (2002).[3] It assumes that if the goalie guesses correctly, the kick is blocked, which is set to the base payoff of 0 for both players. If the goalie guesses wrong, the kick is more likely to go in if it is to the left (payoffs of +2 for the kicker and -2 for the goalie) than if it is to the right (the lower payoff of +1 to kicker and -1 to goalie).

                            Goalie
                      Lean Left    Lean Right
Kicker   Kick Left     0,  0        +2, -2
         Kick Right   +1, -1         0,  0

Payoff for the Soccer Game (Kicker, Goalie)

This game has no pure-strategy equilibrium, because one player or the other would deviate from any profile of strategies—for example, (Left, Left) is not an equilibrium because the Kicker would deviate to Right and increase their payoff from 0 to 1.

The kicker's mixed-strategy equilibrium is found from the fact that they will deviate from randomizing unless their payoffs from Kick Left and Kick Right are exactly equal. If the goalie leans left with probability g, the kicker's expected payoff from Kick Left is g(0) + (1-g)(2), and from Kick Right is g(1) + (1-g)(0). Equating these yields g = 2/3. Similarly, the goalie is willing to randomize only if the kicker chooses mixed strategy probability k such that Lean Left's payoff of k(0) + (1-k)(-1) equals Lean Right's payoff of k(-2) + (1-k)(0), so k = 1/3. Thus, the mixed-strategy equilibrium is (Prob(Kick Left) = 1/3, Prob(Lean Left) = 2/3).
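The indifference conditions above can be checked numerically. The following sketch uses the payoff matrix from the table and verifies that g = 2/3 and k = 1/3 equalize each player's expected payoffs:

```python
from fractions import Fraction

# Payoff matrix of the simplified penalty-kick game: (kicker, goalie) payoffs.
payoffs = {
    ("Kick Left",  "Lean Left"):  (0, 0),
    ("Kick Left",  "Lean Right"): (2, -2),
    ("Kick Right", "Lean Left"):  (1, -1),
    ("Kick Right", "Lean Right"): (0, 0),
}

def kicker_value(kick, g):
    """Kicker's expected payoff from `kick` when the goalie leans left with probability g."""
    return g * payoffs[(kick, "Lean Left")][0] + (1 - g) * payoffs[(kick, "Lean Right")][0]

def goalie_value(lean, k):
    """Goalie's expected payoff from `lean` when the kicker kicks left with probability k."""
    return k * payoffs[("Kick Left", lean)][1] + (1 - k) * payoffs[("Kick Right", lean)][1]

g, k = Fraction(2, 3), Fraction(1, 3)
assert kicker_value("Kick Left", g) == kicker_value("Kick Right", g)   # both equal 2/3
assert goalie_value("Lean Left", k) == goalie_value("Lean Right", k)   # both equal -2/3
print(g, k)  # 2/3 1/3
```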

In equilibrium, the kicker kicks to their best side only 1/3 of the time, because the goalie is guarding that side more. Also, in equilibrium the kicker is indifferent between kicking left and kicking right, but for the profile to be an equilibrium they must kick left with probability exactly 1/3.

Chiappori, Levitt, and Groseclose try to measure how important it is for the kicker to kick to their favored side, add center kicks, etc., and look at how professional players actually behave. They find that players do randomize: kickers kick to their favored side 45% of the time and goalies lean to that side 57% of the time. Their article is well known as an example of how people in real life use mixed strategies.

Significance


In his famous paper, John Forbes Nash proved that there is an equilibrium for every finite game. One can divide Nash equilibria into two types. Pure strategy Nash equilibria are Nash equilibria where all players are playing pure strategies. Mixed strategy Nash equilibria are equilibria where at least one player is playing a mixed strategy. While Nash proved that every finite game has a Nash equilibrium, not all have pure strategy Nash equilibria. For an example of a game that does not have a Nash equilibrium in pure strategies, see Matching pennies. However, many games do have pure strategy Nash equilibria (e.g. the Coordination game, the Prisoner's dilemma, the Stag hunt). Further, games can have both pure strategy and mixed strategy equilibria. An easy example is the pure coordination game, where in addition to the pure-strategy equilibria (A,A) and (B,B) a mixed equilibrium exists in which both players play either strategy with probability 1/2.
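A quick check of that mixed equilibrium, assuming the usual pure coordination payoffs of 1 to both players when they match on (A,A) or (B,B) and 0 otherwise:

```python
from fractions import Fraction

# Pure coordination payoffs (symmetric): 1 if the players match, 0 otherwise.
payoff = {("A", "A"): 1, ("A", "B"): 0, ("B", "A"): 0, ("B", "B"): 1}

half = Fraction(1, 2)
# If the opponent plays A and B with probability 1/2 each, both of a player's
# pure strategies earn the same expected payoff, so mixing 1/2-1/2 is a best reply.
value_A = half * payoff[("A", "A")] + half * payoff[("A", "B")]
value_B = half * payoff[("B", "A")] + half * payoff[("B", "B")]
print(value_A, value_B)   # 1/2 1/2 -- indifference, so (1/2, 1/2) is an equilibrium
```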

Interpretations of mixed strategies


During the 1980s, the concept of mixed strategies came under heavy fire for being "intuitively problematic", since mixed-strategy equilibria are weak Nash equilibria and a player is indifferent about whether to follow their equilibrium strategy probability or deviate to some other probability.[4][5] Game theorist Ariel Rubinstein describes alternative ways of understanding the concept. The first, due to Harsanyi (1973),[6] is called purification, and supposes that the mixed-strategies interpretation merely reflects our lack of knowledge of the players' information and decision-making process. Apparently random choices are then seen as consequences of non-specified, payoff-irrelevant exogenous factors.[5] A second interpretation imagines the game's players standing for a large population of agents. Each of the agents chooses a pure strategy, and the payoff depends on the fraction of agents choosing each strategy. The mixed strategy hence represents the distribution of pure strategies chosen by the corresponding population. However, this does not provide any justification for the case when players are individual agents.

Later, Aumann and Brandenburger (1995)[7] reinterpreted Nash equilibrium as an equilibrium in beliefs, rather than actions. For instance, in rock paper scissors an equilibrium in beliefs would have each player believing the other was equally likely to play each strategy. This interpretation weakens the descriptive power of Nash equilibrium, however, since in such an equilibrium it is possible for each player to actually play the pure strategy Rock in every play of the game, even though over time the probabilities are those of the mixed strategy.

Behavior strategy


While a mixed strategy assigns a probability distribution over pure strategies, a behavior strategy assigns, at each information set, a probability distribution over the set of possible actions. While the two concepts are very closely related in the context of normal form games, they have very different implications for extensive form games. Roughly, a mixed strategy randomly chooses a deterministic path through the game tree, while a behavior strategy can be seen as a stochastic path. The relationship between mixed and behavior strategies is the subject of Kuhn's theorem. The result establishes that in any finite extensive-form game with perfect recall, for any player and any mixed strategy, there exists a behavior strategy that, against all strategy profiles of the other players, induces the same distribution over terminal nodes as the mixed strategy does. The converse is also true.
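The construction behind Kuhn's theorem can be illustrated with a small sketch. The game below is hypothetical: a single player with perfect recall chooses L or R at information set I1 and, if L was chosen, then chooses l or r at information set I2; the mixed-strategy probabilities are arbitrary:

```python
from fractions import Fraction

# A (hypothetical) mixed strategy over the four pure strategies, each of
# which names an action for I1 and an action for I2.
mixed = {
    ("L", "l"): Fraction(1, 4),
    ("L", "r"): Fraction(1, 4),
    ("R", "l"): Fraction(1, 3),
    ("R", "r"): Fraction(1, 6),
}

# Kuhn's construction: the behavior probability of an action at an information
# set is the probability the mixed strategy assigns to pure strategies choosing
# it, conditioned on the pure strategies that allow the set to be reached.
p_L = mixed[("L", "l")] + mixed[("L", "r")]    # behavior prob. of L at I1
p_l_given_L = mixed[("L", "l")] / p_L          # behavior prob. of l at I2

print(p_L, p_l_given_L)   # 1/2 and 1/2: an outcome-equivalent behavior strategy
```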

A famous example of why perfect recall is required for the equivalence is given by Piccione and Rubinstein (1997)[full citation needed] with their Absent-Minded Driver game.

Outcome equivalence


Outcome equivalence relates the mixed and behavior strategies of Player i to the pure strategies of Player i's opponent. A mixed strategy and a behavior strategy of Player i are outcome-equivalent if, against any pure strategy played by Player i's opponent, the two strategies induce the same distribution over outcomes. This equivalence can be written as Q^(U_i, S_-i)(z) = Q^(β_i, S_-i)(z), where U_i denotes Player i's mixed strategy, β_i Player i's behavior strategy, and S_-i the opponent's strategy.[8]

Strategy with perfect recall


Perfect recall is defined as the ability of every player in the game to remember and recall all of their past actions within the game. Perfect recall is required for the equivalence because, in finite games with imperfect recall, there exist mixed strategies of a player for which there is no equivalent behavior strategy. This is fully described in the Absent-Minded Driver game formulated by Piccione and Rubinstein. In short, the game is based on the decision-making of a driver with imperfect recall, who needs to take the second exit off the highway to reach home but, upon reaching an intersection, cannot remember which intersection it is.

Without perfect information (i.e. under imperfect information), players make a choice at each decision node without knowledge of the decisions that preceded it. Therefore, a player's mixed strategy can produce outcomes that their behavior strategy cannot, and vice versa. This is demonstrated in the Absent-Minded Driver game. With perfect recall and information, the driver has a single pure strategy, [continue, exit], since the driver knows which intersection (decision node) they are at when they arrive at it. On the other hand, considering the planning stage alone, the maximum expected payoff is achieved by randomizing, continuing at each intersection with probability p = 2/3. This simple one-player game demonstrates the importance of perfect recall for outcome equivalence, and its impact on normal and extensive form games.[9]
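A short sketch of the planning-stage calculation, assuming the payoffs commonly used for the Absent-Minded Driver game (0 for exiting at the first intersection, 4 for exiting at the second, 1 for continuing past both), which are consistent with the p = 2/3 optimum quoted above:

```python
from fractions import Fraction

# Expected payoff of continuing with probability p at each intersection
# (assumed payoffs: exit first -> 0, exit second -> 4, continue past both -> 1):
#   (1 - p)*0 + p*(1 - p)*4 + p*p*1  =  4p - 3p^2
# Setting the derivative 4 - 6p to zero gives p = 2/3.
def planning_payoff(p):
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

p_star = Fraction(2, 3)
print(planning_payoff(p_star))           # 4/3, the maximum
print(planning_payoff(Fraction(1, 2)))   # 5/4, strictly worse
print(planning_payoff(1))                # 1, always continuing is worse too
```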


References

  1. ^ Ben Polak Game Theory: Lecture 1 Transcript ECON 159, 5 September 2007, Open Yale Courses.
  2. ^ Aumann, R. (22 March 2017). "Game Theory". In: The New Palgrave Dictionary of Economics. London: Palgrave Macmillan. ISBN 978-1-349-95121-5.
  3. ^ Chiappori, P. -A.; Levitt, S.; Groseclose, T. (2002). "Testing Mixed-Strategy Equilibria when Players Are Heterogeneous: The Case of Penalty Kicks in Soccer" (PDF). American Economic Review. 92 (4): 1138. CiteSeerX 10.1.1.178.1646. doi:10.1257/00028280260344678.
  4. ^ Aumann, R. (1985). "What is Game Theory Trying to accomplish?" (PDF). In Arrow, K.; Honkapohja, S. (eds.). Frontiers of Economics. Oxford: Basil Blackwell. pp. 909–924.
  5. ^ a b Rubinstein, A. (1991). "Comments on the interpretation of Game Theory". Econometrica. 59 (4): 909–924. doi:10.2307/2938166. JSTOR 2938166.
  6. ^ Harsanyi, John (1973). "Games with randomly disturbed payoffs: a new rationale for mixed-strategy equilibrium points". Int. J. Game Theory. 2: 1–23. doi:10.1007/BF01737554. S2CID 154484458.
  7. ^ Aumann, Robert; Brandenburger, Adam (1995). "Epistemic Conditions for Nash Equilibrium". Econometrica. 63 (5): 1161–1180. CiteSeerX 10.1.1.122.5816. doi:10.2307/2171725. JSTOR 2171725.
  8. ^ Shimoji, Makoto (2012-05-01). "Outcome-equivalence of self-confirming equilibrium and Nash equilibrium". Games and Economic Behavior. 75 (1): 441–447. doi:10.1016/j.geb.2011.09.010. ISSN 0899-8256.
  9. ^ Kak, Subhash (2017). "The Absent-Minded Driver Problem Redux". arXiv:1702.05778 [cs.AI].