### Playing Games

Here are solutions to the two game theory problems from my honors exam:

Question 7. Jack and Jill play a game. First, each flips a coin. After seeing their own coins (but not each other’s), each player (separately) says either “Red” or “Black”. If they name opposite colors, then the Black-sayer gets \$4 and the Red-sayer gets nothing. If both say Black, then they both get either \$5 (if both flipped heads) or \$10 (otherwise). If they both say Red, then they both get either nothing (if both flipped heads) or \$20 (otherwise). Assume both players play optimally. If Jack flips heads, what is the probability that he says “Black”? What if Jack flips tails?

It’s been pointed out in comments (thanks, Ron!) that there is an equilibrium where both players always say “Red” without looking at their flips. The problem as stated on the exam asks for a solution where Jack plays both red and black with nonzero probabilities.

Solution: When Jack sees heads, he plays red with probability 1/5. When he sees tails, he plays red with probability 37/65. Likewise for Jill.

Explanation: If Jack plays both red and black with nonzero probability, then he must be equally happy playing either red or black (otherwise he’d play only the one he prefers). Conversely, as long as he’s equally happy playing either red or black, he has no reason to deviate from this strategy. So we need to show that Jack is equally happy playing either red or black. (The symmetry of the problem guarantees that the same is true for Jill.)

So assuming Jill plays the strategy above, let’s look at things from Jack’s point of view.

First, suppose Jack flips heads and plays red. Then one of four things happens:

• Jill flips heads (probability 1/2) and then plays red (probability 1/5). This gives a 1/10 chance that Jack wins zero.
• Jill flips heads (probability 1/2) and then plays black (probability 4/5). This gives a 2/5 chance that Jack wins zero.
• Jill flips tails (probability 1/2) and then plays red (probability 37/65). This gives a 37/130 chance that Jack wins \$20.
• Jill flips tails (probability 1/2) and then plays black (probability 28/65). This gives a 14/65 chance that Jack wins zero.

Jack’s expected winnings: (37/130) x 20 = 74/13.

Now suppose instead that Jack flips heads and plays black. Then one of four things happens:

• Jill flips heads (probability 1/2) and then plays red (probability 1/5). This gives a 1/10 chance that Jack wins \$4.
• Jill flips heads (probability 1/2) and then plays black (probability 4/5). This gives a 2/5 chance that Jack wins \$5.
• Jill flips tails (probability 1/2) and then plays red (probability 37/65). This gives a 37/130 chance that Jack wins \$4.
• Jill flips tails (probability 1/2) and then plays black (probability 28/65). This gives a 14/65 chance that Jack wins \$10.

Jack’s expected winnings: (1/10) x 4 + (2/5) x 5 + (37/130) x 4 + (14/65) x 10 = 74/13.

So Jack, having flipped heads, is indeed equally happy playing red or black.

When Jack flips tails, similar calculations show that his expected winnings are:

If he says red, (1/10) x 20 + (2/5) x 0 + (37/130) x 20 + (14/65) x 0 = 100/13
If he says black, (1/10) x 4 + (2/5) x 10 + (37/130) x 4 + (14/65) x 10 = 100/13

So once again, he’s equally happy playing either red or black and has no reason to deviate from the given strategy.
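These four indifference calculations can be double-checked mechanically. Here is a minimal verification sketch (my own code, not part of the exam solution; the names are invented for illustration) that recomputes Jack’s expected winnings for each coin and call, taking Jill’s strategy as given:

```python
from fractions import Fraction as F

# Jill's probability of saying red, conditional on her coin.
p_red = {"H": F(1, 5), "T": F(37, 65)}

def payoff(jack_call, jack_coin, jill_call, jill_coin):
    """Jack's winnings for one resolved round."""
    if jack_call != jill_call:
        return 4 if jack_call == "black" else 0   # Black-sayer gets $4
    both_heads = jack_coin == "H" and jill_coin == "H"
    if jack_call == "black":
        return 5 if both_heads else 10
    return 0 if both_heads else 20

def expected(jack_coin, jack_call):
    """Jack's expected winnings, averaging over Jill's coin and call."""
    total = F(0)
    for jill_coin in ("H", "T"):
        for jill_call, p in (("red", p_red[jill_coin]),
                             ("black", 1 - p_red[jill_coin])):
            total += F(1, 2) * p * payoff(jack_call, jack_coin,
                                          jill_call, jill_coin)
    return total

# Jack is indifferent after either flip:
assert expected("H", "red") == expected("H", "black") == F(74, 13)
assert expected("T", "red") == expected("T", "black") == F(100, 13)
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point doubt about the equalities 74/13 and 100/13.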

Question 8: The five Dukes of Earl are scheduled to arrive at the royal palace on each of the first five days of May. Duke One is scheduled to arrive on the first day of May, Duke Two on the second, etc. Each Duke, upon arrival, can either kill the king or support the king. If he kills the king, he takes the king’s place, becomes the new king, and awaits the next Duke’s arrival. If he supports the king, all subsequent Dukes cancel their visits. A Duke’s first priority is to remain alive, and his second priority is to become king. Who is king on May 6?

Solution: If things go so far as the arrival of the fifth Duke, he can kill the fourth Duke with impunity and therefore will. The fourth Duke, recognizing this, would never consent to be king, so (if things get that far) he will support the third Duke. The third Duke, foreseeing this, will (if things get that far) happily kill the second Duke and take over; the second Duke, recognizing this, will support the first Duke; the first Duke, recognizing this, will kill the king and take over. Thus the first Duke kills the king, the second Duke supports him, and the first Duke is king on May 6.
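The backward induction above is short enough to mechanize. A sketch under the problem’s rules (my own framing; the function name is invented):

```python
def would_kill(i, n=5):
    """Would Duke i, on arriving, kill the current king?

    The last duke can kill with impunity, since no one arrives after
    him.  Any earlier duke kills only if the duke after him would then
    support him as king: staying alive outranks becoming king.
    """
    if i == n:
        return True
    return not would_kill(i + 1, n)

king = "the original king"
for i in range(1, 6):
    if would_kill(i):
        king = f"Duke {i}"  # he kills and takes the throne
    else:
        break               # he supports; all later visits are cancelled

print(king)  # Duke 1
```

The alternation `would_kill(5)=True, would_kill(4)=False, …` reproduces the chain of reasoning in the solution: Duke One kills the king, Duke Two supports him, and the visits stop.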

Snorri Godhi was the first to solve this in comments; Daniel Lee and firebus also had it. The pirate problem proposed by Mike in comments is one I often assign in class. Matt Wampler-Doty, who occasionally comments here, once blew me away by finding an alternative solution to that problem that I’d completely failed to contemplate.

#### 19 Responses to “Playing Games”

1. Snorri Godhi

WRT question 7: I was going to ask whether it is an iterated game, which it apparently is; but I could not have solved it anyway, so I did not ask.

WRT the pirate problem: I wanted to answer Mike, but I forgot. My solution is that the head pirate takes a piece of gold for himself and hands a piece of gold each to all pirates with odd numbers in the hierarchy: the 3rd, 5th, …, 99th pirate. No alternative solution comes to mind, unfortunately. I suppose that the alternative proposed by Wampler-Doty is not obtained by induction.

2. Ron

I can think of an alternate strategy for the pirate problem, but I’m not sure that it matches what Wampler-Doty came up with, as I have no information about what it was. The pirate king proposes taking 50% of the booty and having a random drawing where the winner gets the remaining 50%. It has a good chance of working and satisfying the crew. After all, many state lotteries work essentially this way.

Also, on the question 7 answer, I’m not comfortable with a solution that’s so sub-optimal. If the requirement is that Jack play black and red with nonzero probabilities, then Jack (and Jill) should call Red 999999 out of 1000000 times. Their joint payouts will still dwarf the “optimum” strategy. The problem did not specify that Jack had to be equally happy playing either color. He wins bigger if he is not equally happy.

3. Jim Wesnor

OK – you lost me in the explanation. If Jack flips heads and plays red, and Jill flips heads and then plays red, this gives a 1/10 chance that Jack wins zero. Since we don’t have “losses” or negative winnings, doesn’t this mean Jack has a 9/10 chance of winning something greater than 0? If so, why wasn’t this included in the expected value calculation?

If both are playing optimally, wouldn’t the probability that Jack plays black be 0%, since the optimal solution would be red?

4. JoshTU

Hi, came to this blog because of the econ questions. I’m annoyed that I have to jump back and forth between a couple of posts to get the solutions, as well as annoyed that I have to wait so long to see them all. If you were going to do it this way, you should have broken it up into 20 separate posts, with one question and answer released within 24 hours of each other.

Love the questions, hate the wait

5. Steve Landsburg

Ron: The requirement that Jack be equally happy playing either red or black is not an assumption of the problem; it’s part of what you’re supposed to figure out.

If Jill plays the strategy I recommended, and Jack calls red 999999 out of 1000000 times, then Jill will switch to playing red all the time, whereupon Jack switches to playing red all the time. So your 999999 out of 1000000 is not equilibrium behavior.

As to your observation about joint payoffs—nobody is interested in joint payoffs. Each player optimizes his/her own payoff, taking the other player’s behavior as given.

6. Ron

Steve: Am I missing something?

It’s a symmetrical non-zero-sum game with an indefinite number of rounds. Under those circumstances, does not maximizing joint payoffs also necessarily maximize individual payoffs?

If a set of artificial constraints prevents the players from using the optimum strategy, will not the players, in their own self-interests, deviate as little as possible from the optimum in order to satisfy those constraints?

If I were playing this game, I’d certainly be happier with a return per round of \$15 less a fudge factor (.75 * 20) than I would be with \$6.69 per round (174/26).

It’s in each player’s interest to deviate from the red-red strategy exactly as seldom as permissible to meet the constraints of the game and avoid having the game cancelled.

7. Steve Landsburg

Ron: You are confusing individual optimization with joint optimization; there is no reason the two should coincide. The most famous counterexample is the Prisoner’s Dilemma. Or, take the example I used in More Sex is Safer Sex: Walk around my neighborhood on a crisp fall Saturday morning, and on every lawn you will see a man with a leaf blower, blowing leaves onto the next man’s lawn. This behavior is individually optimal but collectively disastrous.

If individually optimal behavior were always collectively optimal, the world would be a much better place.

8. Arcane Sentiment

The second derivative of expected score is negative wrt both P(red|head) and P(red|tail), so P(red|head)=1/5, P(red|tail)=37/65 is a minimum, not a maximum; its expected score of 87/13 ~ 6.69 is the worst they can do. Always playing red is indeed the best noncooperative strategy, with an expected score of 15.

However, the players can do even better by revealing their coins before calling a color. If both have heads, both will play black; otherwise both will play red, for an expected score of 65/4 ~ 16.25. Since in this particular game the best result for either player is always best for the other as well, neither player ever has an incentive to defect. (If communication is forbidden by the rules, the players can often communicate covertly, e.g. by being slower to call a color when they got a head, or by noticing that the other player looks happier when they get a tail. Allowing covert communication does not make the game a less useful model, because it’s possible in most real games.)
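The two payoffs quoted here are easy to confirm. A quick check under the commenter’s setup (my own code; variable names are invented):

```python
from fractions import Fraction as F

# Both players always say red: payoff is 0 if both flip heads
# (probability 1/4), and 20 otherwise (probability 3/4).
always_red = F(1, 4) * 0 + F(3, 4) * 20

# Coins revealed first: black-black on double heads ($5 each),
# red-red otherwise ($20 each).
reveal_first = F(1, 4) * 5 + F(3, 4) * 20

assert always_red == 15
assert reveal_first == F(65, 4)   # 16.25
```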

9. Steve Landsburg

Arcane Sentiment: I am not sure what second derivative you are calculating. Are you perhaps assuming that Jack and Jill play identical strategies, and then calculating payoff as a function of the two components of their shared strategy? If so, I am not sure why you think this calculation is relevant. Jack and Jill choose their strategies separately. On the other hand, perhaps you are assuming (more relevantly) that the payoff is a function of four variables—Jack’s two probabilities and Jill’s two probabilities. But in that case the payoff function is linear in each variable separately and the (unmixed) second derivatives are all zero.

10. Arcane Sentiment

Yes, I was assuming identical probabilities for the two players, i.e. assuming a symmetric equilibrium. In that case the mixed-strategy equilibrium at P(red|head)=1/5, P(red|tail)=37/65 exists if both players are trying to minimize their scores, but not if they’re trying to maximize them. In that case there are only pure-strategy equilibria.

If we treat the players’ probabilities separately, many of the mixed second derivatives are positive, so a player who deviates from the minimum-score equilibrium gives the other player an incentive to deviate also. (It’s an equilibrium only in the sense of indifference, not in the sense of stability.) If Jill uses those probabilities, then Jack is only indifferent between colors until Jill notices she can do better. If he switches to e.g. always playing red, then he’s no worse off if Jill continues her strategy, and better off if she notices his predictability and exploits it. Only if Jill is trying to minimize her score should she stay at P(red|head)=1/5, P(red|tail)=37/65.

Actually, maybe that’s what you intend? The economese version of the problem isn’t clear about which direction is better, although the translated version seems to obviously intend maximizing.

11. Steve Landsburg

Arcane Sentiment:

> In that case the mixed-strategy equilibrium at P(red|head)=1/5, P(red|tail)=37/65 exists if both players are trying to minimize their scores, but not if they’re trying to maximize them. In that case there are only pure-strategy equilibria.

This is assuredly false. Taking Jill’s strategy as given, Jack’s payoff is independent of his strategy, so choosing this strategy both minimizes and maximizes his payoff.

12. Arcane Sentiment

It’s an unstable weak Nash equilibrium, so while it technically satisfies the original version of the problem, which asks for “an equilibrium”, it doesn’t satisfy the translated version, which asks what Jack will play.

13. Bennett Haselton

I still think Ron is right here. At least, I can’t understand why he would be wrong.

If the strategy of both players playing red 100% of the time leaves both Jack and Jill better off than your suggested mixed probability strategy, then both of them will play red 100% of the time. If they are disallowed from playing red 100% of the time, then they’ll both play red 99% of the time, to get as close as possible to the optimum, because the optimum in this case is also the optimum for each of them *individually*. If you don’t think they would do this, why not?

You said, “If Jill plays the strategy I recommended, and Jack calls red 999999 out of 1000000 times, then Jill will switch to playing red all the time, whereupon Jack switches to playing red all the time.” Well yes, that’s what they’ll do if the rules of the game permit them to play red 100% of the time. And if that’s not permitted, then they’ll both keep playing red 99% of the time.

14. Steve Landsburg

Bennett:

Oops. I’m fixing the title. As for the rest: The rules of the game allow Jack and Jill to play whatever they want.

15. Bennett Haselton

OK — but then if they’re allowed to play whatever they want, then they should both play red 100% of the time! Shouldn’t they?

If they both follow this strategy, suppose Jack flips heads and calls red. If Jill flips heads and calls red, he gets nothing; if Jill flips tails and calls red, he gets \$20, so his expected gain is \$10. This is better than the 74/13 that he gets under your strategy when he flips heads.

And if Jack flips tails and calls red, then whether Jill flips heads or tails, if she calls red too, Jack gets \$20, so his expected gain is \$20, also better than the 100/13 that he gets under your strategy if he flips tails.

So both of them are better off if they call red all the time, than if they follow your mixed probability strategy. So they’ll call red 100% of the time if that’s allowed, and 99.99% of the time if it’s not. Wouldn’t they? If not, why not?

You said Ron pointed out that there is “an equilibrium” when they both play red, but I think it’s more than that — it’s not just that there’s *an* equilibrium, it’s that that is the *best* equilibrium for both players.

16. Steve Landsburg

Bennett:

The problem specifies that Adam and Eve can play whatever they want, and asks for an equilibrium in which they choose probabilities other than 100%.

The fact that they’d both be better off in one situation than another says nothing about which situations are equilibria. Think prisoner’s dilemma.

17. Bennett Haselton

Ah OK. In that case I’d say it did lose a lot in the translation since the blog version asked “Assume both players play optimally” :) I would interpret that to mean to find *the* optimum equilibrium, and then if that equilibrium is disallowed (players not allowed to play red 100% of the time), then play as close as possible to that equilibrium — not, find an equilibrium somewhere else :)
(I don’t think you ever posted questions 6-10 in the “original economese”, did you? http://www.thebigquestions.com/oberlin1.pdf only contains the first half.)

18. Steve Landsburg

Bennett: I do not agree that this is a mis-translation. To play optimally means to play optimally, taking the other player’s strategy as given.