This is the story of how I came to write a little paper called The Coinflipper’s Dilemma.

When I was in high school, my English teacher must have had a free period at the time when my math class met, because every day he would march into the math class and empty his pockets on the table, whereupon my math teacher did the same. Then whoever had put down the most money scooped up everything on the table.

I am ashamed to admit that it took me until this summer to think about computing the equilibrium strategy in that game.

The problem turns out to be more interesting if we impose a minimum bet of, say, $1. Given that, I briefly convinced myself that there’s an equilibrium where each player chooses a bet between $1 and infinity, according to the probability density (1/2)x^{-3/2}. (In layman’s terms, this means roughly that the probability of betting an amount near x is proportional to x^{-3/2}.)

It turns out that if the math teacher plays this strategy, then the English teacher, no matter how much he bets, earns an expected return (that is, a return on the average day) of $-1. Therefore, for the English teacher (assuming he’s required to play), one strategy is as good as another, and he might as well play this one. Ditto for the math teacher. That, I briefly thought, sufficed (essentially by definition) to make this an equilibrium outcome.
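That claim is easy to check numerically. Here is a quick Monte Carlo sketch (my own, not from the original post): since the density (1/2)x^{-3/2} on [1, ∞) has survival function P(X > x) = x^{-1/2}, inverse-transform sampling gives X = (1 − U)^{-2} for U uniform on [0, 1).

```python
import random

def sample_bet(rng):
    """Draw a bet from the density (1/2) * x**(-1.5) on [1, infinity).

    P(X > x) = x**(-0.5), so inverse-transform sampling gives
    X = (1 - U)**(-2) for U uniform on [0, 1).
    """
    return (1.0 - rng.random()) ** -2

def english_gain(b, x):
    """Net gain of an English teacher who bets the fixed amount b when
    the math teacher bets x: the larger bet takes the pot, so the
    winner gains the loser's bet and the loser forfeits his own."""
    return x if b > x else -b

rng = random.Random(0)
n = 200_000
averages = {}
for b in [1.5, 4.0, 25.0]:
    averages[b] = sum(english_gain(b, sample_bet(rng)) for _ in range(n)) / n

for b, avg in averages.items():
    print(f"fixed bet {b:>5}: average gain = {avg:+.3f}")  # each is close to -1
```

Whatever fixed bet b the English teacher picks, the simulated average gain hovers around the analytic answer of exactly −$1.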

It turns out that for technical reasons, that’s not quite right. And it turns out also that I was not the first person to stumble in this way, or the first to recover from it. Just a few years earlier, Michael Baye, Dan Kovenock and Casper de Vries had run into exactly the same issue in the context of a slightly more complicated game. Their paper cleared up whatever remaining confusion I had about the equilibrium issue.

But I was still left to wonder what advice I should have given to my old English teacher. Should he stick to the strategy described above? It’s true, as I said, that as long as the math teacher sticks to this strategy, the English teacher can bet any amount he wants and expect to lose $1, so this strategy seems as good as any other — though it would be better still to avoid the game altogether. But at the same time, if the English teacher sticks to this strategy, then no matter what amount the math teacher bets, the English teacher can expect to **win** $1 — so maybe it’s a good idea to play after all.
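The flip side of the paradox can be checked the same way. In this sketch (again mine, not from the paper), it is now the randomizing player who plays the (1/2)x^{-3/2} strategy, and his average gain against any fixed bet comes out near +$1:

```python
import random

def randomized_bet(rng):
    # Inverse-transform sample from the density (1/2) * x**(-1.5) on [1, inf):
    # P(X > x) = x**(-0.5), so X = (1 - U)**(-2) for U uniform on [0, 1).
    return (1.0 - rng.random()) ** -2

def gain_vs_fixed(x, b):
    """Net gain of the randomizing player, who bet x, against a fixed bet b:
    if he wins he collects b, if he loses he forfeits his own bet x."""
    return b if x > b else -x

rng = random.Random(1)
n = 200_000
results = {}
for b in [1.5, 4.0, 25.0]:
    results[b] = sum(gain_vs_fixed(randomized_bet(rng), b) for _ in range(n)) / n

for b, avg in results.items():
    print(f"vs fixed bet {b:>5}: randomizer's average gain = {avg:+.3f}")  # near +1
```

So against a fixed bettor, whichever teacher randomizes expects to come out $1 ahead, while the previous simulation showed the fixed bettor expecting to come out $1 behind against the randomizer: both halves of the puzzle in the text above.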

Multiple conversations with some excellent economists convinced me that the resolution of this paradox is not completely obvious, even to (at least some) experts in the field, and that it might be useful to write up an explanation of what’s really going on. In the course of writing, I discovered that it’s a lot easier to highlight the key issues with a **different** game, which I called the **Coinflipper’s Dilemma** and I wrote a little paper about it.

My math and English teachers have vanished entirely from the final draft, but their dilemma is not fundamentally different from the coinflipper’s, and if you understand one, it’s not hard to understand the other.

When all is said and done, does this sound like a game you’d like to play?

**Edited to add:** An earlier version of this post was missing a crucial minus sign; it’s now edited to get it right. Thanks to our diligent reader nivedita for catching this.

Alternatively:

I don’t follow much of this, but I start with the perspective of Economist 3 (“…on the third hand….”): Any symmetrical zero-sum game must yield an expected (financial) value of 0. If anyone has an alternative theory, please advise.

This reminds me of the envelope paradox. And there, as well as here, any argument which gives an asymmetric answer to a clearly symmetric situation must be wrong – although identifying the source of the error might be tricky!

On first viewing economist 3 seems intuitively correct, as nobody.really comments above.

Why is the reward an exponential? This causes the maximum win or loss to grow very rapidly to huge sums. Is it to keep the math simple, and would the same conclusions apply if the win were, say, n rather than 4^n? This is like your English teachers’ dilemma.

Working through some examples: if Bob gets a low score, your chances of winning are high, but the reward is low. At the extreme, if Bob gets T, then you have a 50% chance of winning nothing and a 50% chance of winning 1: an expected return of ½. (However, you only get 1 if the winnings are 4^n; if you just get n, then there are no heads, so no winnings.) If Bob gets HT, you could get T, HT, or HH: a 50% chance of –1 and a 25% chance each of 0 or 4 – an expected outcome of ½ again, I think. I won’t go further. Obviously, if you go first, the same things apply to Bob. A strange situation indeed.
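Under one reading of the commenter’s examples (each player flips until the first tail; strictly more heads wins; the winner collects 4 raised to the loser’s head count; ties pay nothing — this is a reconstruction from the comment, not necessarily the paper’s exact rules), the conditional expectation can be computed exactly with rational arithmetic, and it comes out to ½ no matter how many heads Bob got:

```python
from fractions import Fraction

def expected_gain_given_opponent(n_opp, max_heads=200):
    """Your exact expected gain, given the opponent flipped n_opp heads.

    You flip N heads before your first tail, so P(N = k) = (1/2)**(k+1).
    If N > n_opp you win 4**n_opp; if N < n_opp you lose 4**N; a tie pays 0.
    (These payoff rules are an assumed reading of the comment above.)
    """
    total = Fraction(0)
    for k in range(max_heads + 1):
        p = Fraction(1, 2 ** (k + 1))
        if k < n_opp:
            total -= p * 4 ** k
        elif k > n_opp:
            total += p * 4 ** n_opp
    # The remaining mass P(N > max_heads) is all wins (since n_opp <= max_heads).
    total += Fraction(1, 2 ** (max_heads + 1)) * 4 ** n_opp
    return total

for n in range(6):
    print(n, expected_gain_given_opponent(n))  # 1/2 for every n
```

The commenter’s two hand-worked cases (opponent gets T, opponent gets HT) both gave ½, and this enumeration suggests that pattern holds for every opponent score: the huge potential payoff exactly offsets the shrinking chance of collecting it.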

Can you explain epsilon, M, and N from the references, and what the expressions mean?

Looking at the graphs in the appendix reveals how odd the outcome is. I would usually expect the points to cluster more and more closely as the number of plays increases, but the opposite is the case. The chance of extreme outliers is greater for more plays, but the points do seem to cluster back around zero for very long games. Is this a “real” outcome, i.e., for very long games are the chances of outliers reduced?

“It’s true, as I said, that as long as the math teacher sticks to this strategy, the English teacher can bet any amount he wants and expect to win $1, so this strategy seems as good as any other.”

The part starting with “so” seems like a non sequitur to me. Another strategy (not strictly allowed) with that feature would be to sneak a glance into the Math teacher’s pocket and put onto the table exactly $1 less than the amount the Math teacher was going to put down (ignoring the lower bound, etc.). For obvious reasons, a bad strategy.

Ignore my example; it is wrong. I read/wrote without thinking. However, the statement still seems like a non sequitur to me.

I don’t understand the teacher strategy. Surely if the math teacher plays this strategy and the English teacher always bets $1, the English teacher always loses, and decidedly does not win $1 in expectation.

nivedita:

You’re right; I should have said that if the math teacher plays this strategy, then the English teacher’s expected gain is always *minus* 1, no matter what she plays. Therefore (assuming she is not offered the option of not playing), she might as well play this strategy as any other.

Sorry for the missing minus sign. I will fix the post by the end of tomorrow.

**Edited to add:** It’s fixed now. Thanks very much for catching this.

nivedita: I now see that the minus-sign issue also affects a later paragraph, which I’ve also now fixed. Thanks again.

Hey Steve,

Just to check my intuition here: If there were an upper bound placed on how much one could win/lose in the game, then would the expected payoff in dollar terms be $0 for both players, and the expected utility payoff be negative for risk-averse players?