Tuesday Puzzle

Here I have a well-shuffled deck of 52 cards, half of them red and half of them black. I plan to slowly turn the cards face up, one at a time. You can raise your hand at any point — either just before I turn over the first card, or the second, or the third, et cetera. When you raise your hand, you win a prize if the next card I turn over is red.

What’s your strategy?


143 Responses to “Tuesday Puzzle”


  1. Jens Fiederer

    You’ll only allow a single run through the deck?

    In other words, if I don’t raise my hand early enough, I get no prize?

    It seems it should be pretty random… I have a 50% chance on the 1st card, and it won’t get much better. If I bide my time, black cards being eliminated from the pool help me, but I risk missing red cards just as much.

    Let the first card pass: if it was black (50% of the time) I now face odds of 26/51 … a bit better than 50%; but if it was red I face odds of 25/51 … a bit worse.

  2. Ramprasad

    After the hand is raised, you might as well just pull the last card of the deck as these two events are statistically identical. Of course one cannot get better odds than 50-50 on the last card, so one cannot get better odds with any strategy.

  3. Ben

    Start with a smaller case. If you have 4 cards, you see the first one. If it’s black, you now have a 2/3 chance of the next one being red, so you take it. If the first is red, then you watch the second one too. If the second is black (2/3), you now have a 1/2 chance; if the second is also red, you lose. Your expected value with this strategy is 1/2*2/3 + 1/2*(2/3*1/2 + 1/3*0) = 1/2.

    Still working on whether it’s better to go for it as soon as you get favorable odds.
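    A brute-force check of that strategy over all six orderings of a 4-card deck, as a quick Python sketch (the strategy encoding here is mine):

    from itertools import permutations

    decks = set(permutations("RRBB"))  # the 6 distinct orderings of 2 reds, 2 blacks
    wins = 0
    for deck in decks:
        if deck[0] == "B":
            pick = deck[1]  # first card black: take the second card
        elif deck[1] == "B":
            pick = deck[2]  # red then black: take the third card
        else:
            pick = deck[3]  # two reds seen: only blacks remain, a forced loss
        wins += pick == "R"
    print(wins, "/", len(decks))  # 3 / 6, i.e. exactly 1/2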

  4. Ian

    My strategy is to write a Python script. And that script tells me it doesn’t matter what your strategy is, as long as you raise your hand in the case where there is one red card left and no black cards left.

    I still need to think about why…
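    Ian’s script isn’t posted, so here is a minimal sketch of that kind of simulation (the two example strategies and the trial count are my own choices); both estimates come out near 0.5:

    import random

    def play(strategy, trials=200_000):
        """Estimate the win rate of `strategy`, which sees the cards revealed
        so far and returns True to raise your hand before the next card."""
        wins = 0
        for _ in range(trials):
            deck = ["R"] * 26 + ["B"] * 26
            random.shuffle(deck)
            for i in range(52):
                if i == 51 or strategy(deck[:i]):  # forced on the last card
                    wins += deck[i] == "R"
                    break
        return wins / trials

    print(play(lambda seen: True))                             # raise immediately
    print(play(lambda seen: seen.count("B") > len(seen) / 2))  # wait for a black majority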

  5. James Pinkerton

    All strategies are the same. We prove this using induction. Let r represent the number of reds left and b the number of blacks.

    Let w(r,b) be the probability of winning if we wait for at least one more turn and then employ the best strategy (unless we are at the last card, in which case we don’t wait and just flip).

    Let f(r,b) be shorthand for the probability of winning if we just flip. f(r,b) = r/(r+b).

    Claim: f(r,b) = w(r,b), for all r, b in N.

    Proof (using induction):

    Base Case: Clearly w(n,0) = f(n,0) = 1, w(0,n) = f(0,n) = 0 for all n in N.

    Inductive Step: Now suppose f(r-1,b) = w(r-1,b) and f(r,b-1) = w(r,b-1).

    Then w(r,b) = r/(r+b)*f(r-1,b)+b/(r+b)*f(r,b-1) = r/(r+b)*(r-1)/(r+b-1)+b/(r+b)*r/(r+b-1) = r(r+b-1)/((r+b)(r+b-1)) = r/(r+b) = f(r,b).

    So f(r,b) = w(r,b), for all r, b in N.
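    The induction can also be checked mechanically with exact rational arithmetic; a small sketch (the function name V and the ranges are mine), which confirms at the same time that waiting never beats flipping:

    from fractions import Fraction
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def V(r, b):
        """Value of the game with r reds and b blacks left, under optimal play."""
        if r == 0: return Fraction(0)
        if b == 0: return Fraction(1)
        flip_now = Fraction(r, r + b)
        wait = Fraction(r, r + b) * V(r - 1, b) + Fraction(b, r + b) * V(r, b - 1)
        return max(flip_now, wait)

    assert all(V(r, b) == Fraction(r, r + b)
               for r in range(27) for b in range(27) if r + b > 0)
    print(V(26, 26))  # 1/2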

  6. Martin Gale

    I used to gamble on things like that, and I thought I had a strategy for winning more than 50% of the time, but looking back, it was a fallacy.

    Now I just raise my hand right away; then I can go have a beer (unfiltered, of course).

  7. Advo

    I don’t see a strategy that could improve on 50-50. I agree with #1.

    Of course if it was that easy, SL wouldn’t have posted it, though.

  8. Jack

    Well, obviously the longer you wait, the more info you get. But at some point more info doesn’t necessarily help… He says he’s going slowly, so you could simply check off and keep count of what has happened on each turn. It seems like the optimal time to raise your hand would depend on what has happened, but I’m not smart enough to really go beyond that…

  9. Jack

    As far as my strategy… I think I would wait until I saw 3 black cards in a row, then raise my hand.

  10. Demosthenes

    Raise your hand before the first card is drawn, when you know that (assuming a fair deck) your chances of winning are one in two. That’s my initial thought — why wait for more information when it is as likely to hurt you as help you with each flip of a new card? — so I’m almost certain I’m wrong.

  11. Harold

    If you were to pick randomly it would be a 0.5 probability. As said in #1, if the first card is black, you have a slightly higher chance, but surely just as likely is that you would have a lower chance if it is red. However, say we call the second card IF the first is black: we then have a 50% chance of improving our odds, equally balanced by a 50% chance of reducing them.

    So what can we do if the first card is red?

    Another approach – instead of 52 cards, let’s count up from a low number, even numbers only.
    We have two cards. These can either be RB or BR. Call on the first card and we have 0.5 probability. If we call on the second card, we have a 0.5 probability of certainty (if it is BR) and a 0.5 probability of odds 0 (if it is RB). Still 0.5 overall. I don’t see how to improve it.
    Four cards. Either 1)RRBB, 2)BBRR, 3)RBRB, 4)BRBR. If we call on the first card, we have 50% chance. If we call on the second, we have a 0.5 probability that we will have improved the odds to 0.666 if it is 2 or 4. However, we also have a 0.5 probability that we will have worsened our odds to 0.333. I don’t see how this helps us at all. If we have reduced our odds by dealing a red, I can’t see how to recover that by some conditional response. Every card dealt seems to have as much chance to make things worse as better.

    There must be a way to improve the odds – following Ken B’s well-designed puzzle criteria – but I can’t see what it is.

  12. David Johnson

    Harold – you forgot BRRB and RBBR. Not that it helps, unfortunately. I did the same as you and tested a few strategies, but none improved the odds over 50%.

  13. Ron

    It’s really likely that at some point there have been more black cards shown than red. The later this occurs, the better your odds. For instance, if there are three cards left, two red and one black, your odds of winning are 2/3. If the first card turned up is black, your odds of winning are 26/51.

    The problem is that the longer you wait to raise your hand, the lower the chance that you’ll get a favorable set of odds. I’m sure that the best strategy could be calculated (or at least computer-simulated).

    If presented with this game out of the blue, my likely strategy would be to hold up my hand the first time the odds are better than 50:50. It may not be optimum, but it’s easy and is good enough.

  14. Eliezer

    Raise my hand after two blacks in a row.

  15. David Johnson

    Ron – this was one of the strategies I tried when I simplified to four cards. Still a 50/50 chance of winning, unfortunately. I haven’t checked whether this still holds as you add more cards.

    The same applies to choosing in advance when to call, and to waiting until the next card will be red for sure (which I now realise is equivalent to just waiting for the last card).

  16. Sriram Gopalan

    Easy. Wait until 51 cards have been dealt. Count the number of red cards until then. If it is 25, raise your hand, otherwise (26) keep your hand down :)

  17. Joseph

    If we wait until 51 cards are flipped, we know for certain whether we have won or lost. If we wait for 50 cards to be flipped, we have either a 0%, 50%, or 100% chance of winning. There is a possibility that we know for certain whether we have won or lost whenever there are fewer than 27 cards remaining in the deck.

    My strategy would be to raise my hand if the odds of winning are greater than 50%, for example, if the first card was black I would raise my hand. Otherwise, I would raise my hand when there are 27 cards left in the deck.

  18. Ken B

    This might be a special case of the Good Puzzle Theorem.
    I have a snappy solution, but first a general point to convince you it’s a fair one. Whatever your strategy, if it depends on the next card being the next card, it’s wrong. All we have is a random sequence of cards, and there is no memory here. If the last 8 cards were black but there are still more blacks remaining, then it’s a fallacy to say the next card is likely red. In fact, if there are m cards remaining, the chance that any particular one of those m is red is the same no matter where in the sequence of length m it resides. See it? This means you can play Steve’s game with a simple alteration.

    You are trying to guess the color of the *bottom card*.

    So all strategies are equal.
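    That equivalence is easy to check by simulation; a sketch in Python (the “wait for 3 net blacks” stopping rule is just an arbitrary example of a strategy, picked by me):

    import random

    trials = 200_000
    next_wins = bottom_wins = 0
    for _ in range(trials):
        deck = ["R"] * 26 + ["B"] * 26
        random.shuffle(deck)
        stop, count = 51, 0  # default to the last card; count = blacks seen minus reds seen
        for i in range(51):
            if count >= 3:   # arbitrary stopping rule: stop after 3 net blacks
                stop = i
                break
            count += 1 if deck[i] == "B" else -1
        next_wins += deck[stop] == "R"   # take the "next" card at the stop...
        bottom_wins += deck[51] == "R"   # ...or just take the bottom card instead
    print(next_wins / trials, bottom_wins / trials)  # both settle near 0.50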

  19. Chris Surridge

    The only card you can be certain of knowing in advance is the last card, and that has a 0.5 probability of being red, so you might just as well raise your hand before the first card as wait for the last. However, at the beginning there is a good chance that at some point in the run more black cards will have been turned over than red, so it is probably worth waiting for that to happen and then raising your hand, when the odds will be a little better than 0.5; at worst your odds will still be 0.5 on the final card.

    For every number of cards you could calculate how likely it is that the odds will be better than the current odds at some point in the future. But for simplicity let’s just look at the situation where you have more reds than blacks, and look two moves into the future. By that point either you will have two fewer blacks, two fewer reds, or one fewer of each. If you draw two reds, your odds of getting a red card next are reduced; if you draw two blacks, they are increased; and if you draw a red and a black, your odds of drawing a red next are also increased (any absolute difference in the number of reds and blacks is more significant when you have two fewer cards in total). So your odds will only fall if there is a greater than 0.5 chance that two reds will be drawn. To a first approximation, then, your odds of winning will likely improve as long as the fraction of red cards is less than 1 over root 2 of the total, or about 0.7. (This holds until you get below 11 cards.)

    Conversely, if fewer than half the pack are red, your chances are only going to get worse.

    So at every turn you should:

    raise your hand if there are fewer reds than blacks (cut your losses before things get any worse);

    raise your hand if more than 7/10ths of the pack is red (things aren’t going to get any better than this; really, what are you waiting for);

    but if between a half and 7/10ths of the remaining cards are red, hold on: you are already ahead, and things will likely swing further in your favour.

  20. Harold

    #9 If we plot the odds of next card being red against number of cards dealt, we would start at 0.5, and then we would maybe fluctuate around that for a bit. At the end we would probably start to deviate more and more, until by the last card we would be either at 1 or 0. It is possible that the odds would never get above 0.5 – but what are the odds that we would never at some point have dealt more blacks than reds? It seems intuitive that if we did it lots of times more than half would at some point be above 0.5. After all, 50% would be there after one card, so if it is possible that we could get back above the line then some of the deals would do so.

    Example – first card is black: put up hand; odds are now greater than 50%, and this would occur 50% of the time. No gain so far.

    First card red – next two cards black: put up hand. Odds are now greater than 50%. This would occur some reasonable fraction of the time, and added to the 50% of the time we had a black card first, we have improved our odds above 50%.

    So my guess at a strategy is to put my hand up as soon as there are more blacks dealt than reds. I am not confident this is the best answer.

  21. Zazooba

    Proof that you cannot improve on 50-50 probability of winning:

    Take as the benchmark strategy raising your hand on the last card for a 50-50 probability of winning.

    At any point in the game, the probability of winning is p, where p depends on the number of black and red cards seen to date. Note that p is often not .50.

    BUT at all points in time, the payoff to the benchmark strategy IS ALSO equal to p, since it is just as likely that the last card will be red as it is that the next card will be red.

    Therefore, you can never beat the benchmark strategy.

    It is simple to show that all strategies have the same payoff as the benchmark strategy.

    Ramprasad almost had this. He just needed to generalize his intuition.

  22. RPLong

    I was thinking along the same lines as both Ben @ #3 and Joseph @ #17.

    I figure that waiting until the 50th card offers a good chance of being certain of a win, like Joseph said. But raising my hand after only a few cards, if they happen to run favorably, gives a higher immediate probability of winning, like Ben said.

    So my approach would be to keep a tally of the cards as they are drawn. If I happen to get highly favorable odds toward the beginning of the game, I will raise my hand early. Otherwise, I will wait until closer to the 50th card is drawn, when the outcome is a bit more certain. Assuming it’s not cheating to count cards, this seems like the most reliable way to go.

  23. Salim

    Let’s evaluate a strategy of guessing before either the second or the third flip, depending on what comes up first:

    p = P(B)*P(R|B) + P(RR)*P(R|RR) + P(RB)*P(R|RB)
    p = 0.5*(26/51) + 0.5*(25/51)*(24/50) + 0.5*(26/51)*(25/50)
    p = 0.5

    Another strategy would be to wait until the 25th red card is flipped, then raise my hand. This strategy will work if and only if the last two reds in the deck are adjacent.

    That’s equivalent to looking for all combinations with a pattern from the back like:
    RR
    BRR
    BBBBBBBBRR

    p = sum(N=0 to 26) { 26*25*26! * (52-2-N)! / [52! * (26-N)!] }
    p = 0.5

    So even a crummy-sounding strategy (it only works if the last two reds in the deck are adjacent) returns 0.5.
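    That sum can be checked exactly with rational arithmetic; a quick sketch:

    from fractions import Fraction
    from math import factorial as f

    p = sum(Fraction(26 * 25 * f(26) * f(52 - 2 - N), f(52) * f(26 - N))
            for N in range(27))
    print(p)  # 1/2 exactly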

  24. Alex

    For those suggesting to wait until you’ve seen a majority of black cards: my intuition is that the gain in expected value for that case will be canceled neatly by the improbability of seeing the case at all.

  25. Ron

    David Johnson – the number of cards is critical.

    The larger the original number of cards, the more likely there will be a Drunkard’s Walk that strays on your side of the odds at some point during the process.

    The long-term odds remain 50:50 because the cases where you happen to find yourself at an advantage are balanced by the cases where you are never ahead. That still doesn’t stop you from taking advantage during a particular case. That’s why casinos use larger decks for 21 and ban card-counters.

  26. Salim

    I can raise my hand after I see the Ace of Spades. There’s a 26/51 chance that the Ace of Spades is followed by a red card, but that’s offset – exactly – by the 1/52 chance that the Ace of Spades is the last card in the deck.

    I can raise my hand after I see the Queen of Hearts, or before the last card is drawn if the Queen hasn’t shown in the first 51. Still 0.5.


  28. David Johnson

    Ron – of the 6 possible sequences in the four-card simplification, 3 eventually go in your favour. Of the 2 possible sequences in the 2-card simplification, only 1 goes in your favour. Is it necessarily true that as the number of cards increases it becomes more likely that a random walk will stray onto your side of the odds at some point? I don’t think this is obvious.

  29. Jack

    I’m going to change my strategy.

    I would raise my hand if at any point the black cards that had been overturned outnumbered the red cards by 3 or more. The odds of this happening are low, so if it didn’t happen I would simply wait for the last card. But I would wait and see whether it did.

  30. Robert

    I guess the answer is that you work out the probability that black will NEVER be ahead in the count. If that is 50%, then raise your hand immediately, otherwise let Steve draw until black is ahead by one.

    I can’t do the math, but it seems to me that you cannot do better than 50%.

  31. Al V.

    It seems to me that at any point, the odds that a red card is the next card in the deck (call it N) are the inverse of the probability of getting to that point (call it P), such that N*P = 1/2. Thus, it doesn’t matter when you raise your hand.

    At any point after an odd number of cards have been flipped, the odds are 50-50 that more black cards than red have been flipped.

  32. Chris Surridge

    It depends what you mean by ‘can’t do better than 50%’. The odds of winning change at each card and form a random walk around 50%. So it is almost guaranteed that at some point the odds will be better than 50:50 in your favour. If you are playing the game once, then you need a strategy that attempts to identify when the odds are likely to be maximally in your favour. Jack’s ‘wait till there are 3 more reds than blacks left in the pack’ or my more complicated ‘don’t let the odds turn against you, and wait until there are twice as many reds as blacks’ are such strategies.

    If you play the game multiple times I suspect that no strategy will give a net gain.

  33. Harold

    Alex (24) – it depends what you mean by majority. I mean the majority of the cards dealt. In 50% of deals it will occur after the first card. In this case the odds of the next card being red are a shade over 50%, so you raise your hand and get better than 50% odds.

    However, the first card is equally likely to be red. No problem, maybe the next two will be black. This will occur in close to 25% of these 50% of cases, or 12.5% of the total. So in 62.5% of cases you can get better than even odds after the first three cards.

    If this never occurs, then you are left with the last card, which presumably has a probability of 50%. Or possibly not in those cases where there has never been more blacks than reds dealt.

  34. Alan Wexelblat

    I believe that those stating that the probability of R or B on the last card is 0.5 are incorrect for a sequentially played game, though obviously correct for a random draw.

    I think the 4-card example is illustrative. In a 4-card deck I can get RRBB, RBRB, BRRB, and inverses BBRR, BRBR, RBBR. Looking at the last card there are 3 of each possible solution so the question becomes “can I tell which solution I’m playing?”

    Two solutions (BBRR and RRBB) give you 100% predictive power. Having seen the first two cards you know what the next two must be. This causes me to believe that the statements about the last card are wrong.

    I then turn to the other solutions, all of which remove RB and give me no information about the next card.

    Now I attempt to generalize this and believe that if I’m allowed to count reds and blacks I will know if I’m in a situation where my odds of getting R have improved past 50%. Since I know that my odds will tend back toward 50% as the number of draws goes on I raise my hand as soon as the chance of R increases beyond 50%.

    There’s probably a formula that expresses the chance of an event of a certain magnitude in relation to the finite nature of this game but damned if I can figure it out.

  35. Zazooba

    There is a lot of intuition that says you should raise your hand when a lot of black cards have come up because your probability of getting a red card next is greater than .50.

    But, this intuition is wrong as I explained above. When the probability of the next card being red is high, so is the probability of the last card being red. In fact, the probabilities are exactly equal. (Think of the case where all the remaining cards are red. Waiting to the last card is just as good as taking the next card.)

    The ex ante payoff to the strategy of always taking the last card is .50. All other strategies have exactly the same payoff since all other strategies have exactly the same expected payoff at all points in time.

    Therefore, a strategy of waiting for the last card is always exactly as good as taking the next card. You can never do better than waiting until the last card.

  36. Brian

    #18 Ken B: “If the last 8 cards were black but there are still more blacks remaining, then it’s a fallacy to say the next card is likely red.”

    You can’t be saying that if the first 25 cards turned over are black, the next card is not (very!) likely red? I must be misunderstanding your point here.

  37. suckmydictum

    This is a hypergeometric distribution problem, where you are looking for the card flip among all 52 which maximizes the probability of there being more red than black cards left in the deck. Clearly, you can raise your hand before the first card comes and have a 50% chance of winning, so you’re looking to do better than that.

    Given the hypergeometric distribution, at draw 26 the probability of there being more red cards than black cards left in the deck is .3909 (that is the cumulative hypergeometric distribution with pop size = 52, successes = 26, sample size = 26, and sample successes between 14 and 26). This probability gets smaller than .3909 on either side of the 26th draw, which means this is the draw which maximizes the probability of having more reds left in the deck. Therefore, I think the best strategy is to wait until the 26th draw and see if you have an advantage, and if you do, you should raise your hand then.
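    The .3909 figure can be reproduced with SciPy’s hypergeometric distribution; a sketch, counting black cards among the first 26 drawn as the “successes”:

    from scipy.stats import hypergeom

    # Population M = 52 cards, n = 26 blacks, N = 26 cards drawn.
    # More reds than blacks remain iff at least 14 of the 26 drawn are black.
    drawn_blacks = hypergeom(M=52, n=26, N=26)
    print(1 - drawn_blacks.cdf(13))  # ~0.3909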

  38. John Faben

    Ramprasad got this completely right with the first comment.

    Consider this game: I’ll take one card out of a pack of cards. I’ll then cycle through the remaining 51 cards, you can raise your hand at any point, and then I’ll turn over the card I took out. You win if the card I took out is red, otherwise you lose.

    This is exactly isomorphic to the game Steve posted, and your chances of winning are obviously exactly 50% with any strategy. Raise your hand immediately and save yourself some time.

  39. Ron

    David Johnson – the higher number of cards helping in the walk is intuitive but not obvious. If your full set of alternative cases scales the same as the 2- and 4-card cases (where exactly 50% give a temporary advantage), then the odds remain at 50%, and no strategy is better than another.

    I did a computer simulation of the strategy “If, at any time, the number of blacks exposed exceeds the number of reds exposed, choose the next card.” Repeated runs of 1,000,000 trials indicated that sometimes there are more wins than losses, sometimes vice versa.

    Maybe this strategy doesn’t actually improve your odds. If the odds are 50:50 no matter what you do, then raise your hand right away. It will at least save you some time.

  40. David Sloan

    I can do a ~63% chance of winning; I’ll post my strategy soon if people are interested.

    Meanwhile, Ramprasad’s observation is not correct. Imagine that you find yourself in a situation with 26 reds and 1 black remaining. You have a 1/27 chance of losing if you raise your hand immediately, but a 1/27! chance of losing if you keep drawing, hoping to flush out the final black card before raising your hand.

    The key insight is that you can, in some circumstances, prove that the last card of the deck is black.

  41. Zazooba

    If you don’t believe there is no strategy better than taking the last card, then try naming a situation in which it is better to take the next card rather than the last card. You can’t, because all you know at any point in time is how many reds have come up already. And all this tells you is the fraction of all remaining cards that are red. This tells you something about the probability that the next card will be red, but the same can be said for ALL future cards, including the last one. Therefore, the last card is always as good a choice as any of the other remaining cards, including the next one. All the remaining cards are equally likely to be red.

    You can’t do better than always choosing to take the last card.

    No other card can be better than the last card.

  42. Neil

    I see no expected advantage to waiting. Raise your hand right away and avoid the time cost of waiting.

  43. Ken B

    @Brian 36: You are. I am saying the odds depend on the total number of reds and blacks remaining, and not on the order in which they have been seen. And the odds are the same for all m remaining cards. The “next” is a distraction. It’s the same as guessing the bottom card. So no strategy can beat 50%.

  44. Ian

    #40: I don’t think that’s correct. In the scenario you describe, attempting to “flush out” the final black card also has a 1/27 chance of failing, as it fails exactly when the bottom card is black.

  45. David Sloan

    Whoops, found the bug in my code. I now agree with just about everyone else in this thread; you may as well just raise your hand in any such scenario you find, equivalent to just taking the last card.

    Also, my “1/27!” comment is clearly wrong. :P

  46. Olivier Chassé St-Laurent

    Criticisms welcome. The weak point of the ‘proof’ is showing that the process is indeed a martingale. Obviously, we’re dealing with a non-Markovian process.

    It is easy to see that a 2-card game (evenly distributed) is a martingale. Less obvious is the fact that a 4-card game is also a martingale, consisting of 3 possible 2-card games. Admitting this, a 52-card game is 3 possible 50-card games, etc., also yielding a martingale.

    Now, admitting that we are indeed dealing with a martingale, we can call on Doob’s optional stopping theorem. I am not going to prove the three conditions, as they seem obvious.

    The result is that the expectation of winning at any stopping time is the same as at time 0. Thus, one should raise his hand and not waste his time.

  47. David Sloan

    For those curious how I led myself astray, I was accidentally assuming that drawing a card was equally likely to deplete a black or a red, regardless of the current ratio of cards remaining. :(

  48. Ken B

    All strategies are equivalent as far as winning goes. That doesn’t make them equivalent. I might enjoy the gamble, or like wasting Steve’s time.

  49. Zazooba

    #42: “I see no expected advantage to waiting. Raise your hand right away and avoid the time cost of waiting.”

    Unless, of course, you like to watch Steve flip cards, in which case you should take the last card.

  50. Jack

    Why not wait to see what cards appear? Even in a well-shuffled deck you could, in theory, have most of the black cards show up in the first half. Consider the most extreme and most unlikely of cases, where all the black cards end up in the first 26 cards. Once you see the 26th black card, there is a 100% chance that the next card is red.

  51. Jack

    In fact… why not wait to see if you hit 26 black cards before the end? Again, if you were to see all 26 black cards before the end, you are guaranteed that the next card is red.

  52. ECSL

    @46: it is a martingale, since we can prove that E[Z_{n+1} | Z_n] = Z_n.

    After n draws, suppose you have seen R red cards; the probability that the next card is red is then Z_n = (26-R)/(52-n).

    E[Z_{n+1}] = Z_n * (26-R-1)/(52-n-1) + (1-Z_n) * (26-R)/(52-n-1)
    = (26-R)/(52-n-1) - Z_n/(52-n-1)
    = ( 1/(52-n-1) ) * ( 26-R - Z_n )
    = ( 1/(52-n-1) ) * ( 1/(52-n) ) * ( (26-R)*(52-n) - (26-R) )
    = ( 1/(52-n-1) ) * ( 1/(52-n) ) * ( (26-R)*(52-n-1) )
    = (26-R) / (52-n) = Z_n.

    I used 26 red cards and 26 black cards, but it should hold for any number of cards, barring 0 and infinity.
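    The algebra can also be checked exhaustively, over every reachable state, with exact rationals; a sketch:

    from fractions import Fraction

    def Z(R, n):  # P(next card red) after n draws with R reds seen
        return Fraction(26 - R, 52 - n)

    for n in range(51):
        for R in range(max(0, n - 26), min(n, 26) + 1):
            z = Z(R, n)
            # next card red with prob z -> state (R+1, n+1); black -> (R, n+1)
            assert z * Z(R + 1, n + 1) + (1 - z) * Z(R, n + 1) == z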

  53. RPLong

    Some of these comments seem to badly miss the mark to me. I agree with those people who are saying that each strategy is probabilistically equivalent, but it seems like there is a bit of a fallacy of composition going on.

    What I mean is: Even if the probability of winning the whole game is 50% no matter how you choose to look at it in advance, if you happen to be playing the game in the real world, there is enough statistical variation in a shuffled deck of cards to be used to your advantage.

    For example: Let’s say by chance the first five cards drawn all happen to be black. That puts the odds of a “next red card” at 55%, not 50%. Those are better odds than you started with at the beginning of the game. This sort of thing is likely to happen several times during the course of the 52-turn game.

    Sure, playing the game hundreds of times means that your odds of winning in the long run will ultimately be 50%. But that doesn’t mean that your odds of winning are equivalent during every turn of an individual game.

    Right?

  54. Jack

    Let’s say you pick the strategy of waiting until the 50th card is picked, and analyze it. When 50 cards have been seen, there are only 3 possibilities:

    26 red, 24 black
    26 black, 24 red
    25 black, 25 red

    You’re screwed in the first case, you definitely win in the second, and you have a 1/2 chance of winning in the third. The three cases are not equally likely (the 25-25 split is by far the most common), but when you multiply the probabilities out, you still have a 50% chance of winning by agreeing to this strategy beforehand.

    Personally, I think the best strategy may be to look where you stand after 26 cards have been pulled. If the count happens to lean toward more black cards dealt, then raise your hand.

  55. Jeff Semel

    I agree with RPLong (53).

    This puzzle looks like a variation of the optimal stopping problem. My strategy is to calculate the odds after each of the first 19 cards you see, as RPLong describes, then stop the next time the odds are better than any of the previous odds you’ve seen, or before the last card if better odds never occur.

    I first read about optimal stopping in Martin Gardner’s column in Scientific American, in a puzzle about how to pick a spouse. The approximation he discussed was to let (1/e)*N potential partners go by, where N is the total, maximum number you can ever see. Then pick the next best. (This is sometimes called the 37% rule.) The approximation is more accurate for larger N.
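    For reference, Gardner’s 37% rule is easy to simulate; a sketch (uniform random scores are my stand-in for candidate quality, and int(52/e) = 19 happens to match the “first 19 cards” above):

    import math
    import random

    def secretary(n=52, trials=100_000):
        """Win rate of the 1/e rule: let the first n/e candidates pass,
        then take the first one better than everything seen so far."""
        k = int(n / math.e)  # 19 for n = 52
        wins = 0
        for _ in range(trials):
            vals = [random.random() for _ in range(n)]
            best_seen = max(vals[:k])
            pick = vals[-1]  # forced to the last candidate if none qualifies
            for v in vals[k:]:
                if v > best_seen:
                    pick = v
                    break
            wins += pick == max(vals)
        return wins / trials

    print(secretary())  # close to 1/e ~ 0.37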

  56. Martin-2

    RPLong (53): “Let’s say by chance the first five cards drawn all happen to be black. That puts the odds of a “next red card” at 55%, not 50%”

    But it’s equally likely that the first 5 cards will be red, in which case your odds are 45%. If you wait to gather more information you only expect it to be good news when you’re already behind.

    My strategy uses the fact that Steve said “or” and not “xor”, so I raise my hand before every card and my odds of winning are 2600%.

  57. Ken B

    @RPLong 53: No, wrong. Right now, while Steve is still warming up, you have a 50% shot. As you wait and Steve turns cards, your (conditional) probability will indeed fluctuate. But it’s as likely to have gone down as gone up. If the first 5 are black, your chances of getting a red have indeed gone up. But you were as likely to have seen 5 red cards as 5 black ones, and in that case your probability of winning has declined. Your chances of getting a black are now EXACTLY what your chances of getting a red were. BOTH these cases must be charged to your strategy.
    Your answer would be right if after a few cards you could say “OK, reshuffle and then I’ll just settle for 50%”, but you can’t.
    This is true no matter what your waiting strategy is. The easiest way to see it is to notice that for any sequence rbbrr etc. you have observed, the inverse brrbb is equally likely to have happened. Your wait has turned a 50% guess into two equally likely guesses, one of x% and one of 100-x%. That’s no improvement.

    Except you got to see Steve deal, which is a treat, let me tell ya.

  58. Jack

    I see the error of my ways now. I was only focused on the odds of getting more black cards for some reason.

    Now, after being enlightened, I would simply raise my hand before he started.

  59. RPLong

    Martin-2 and Ken B (56 & 57) –

    If I understand your rationale correctly, then what you are saying is that the 1st, 2nd, 3rd, …, 51st, and 52nd cards all have a 50-50 chance of being red or black, assuming a “fair shuffle.” If so, I can understand your reasoning. We’re not drawing balls from an urn, as in the classic probability problems, but rather we’re drawing cards from a sequence, and each position in the sequence has a 50-50 chance of being either red or black.

    That makes sense. Do I have the right idea here?

  60. Jack

    The way I understand it now is that you may very well get better odds with more information, but you may equally well get worse odds. Therefore, it doesn’t make much sense to wait and see whether there are 20 black cards in the first 26 or whatever, because there could equally be 20 red cards, which would hurt you in the same way that 20 black cards would help you.

  61. RPLong

    @ Jack (60)

    If we define this as an urn problem, then we update the probability of drawing a red ball every time we draw a ball. So the probability of red starts out at 50%, then rises to about 51% if a black ball is drawn, and so on, until there are no longer any balls left in the urn.

    The only way I can conceive of a constant 50% probability is as follows:

    (1) We play the game an infinite number of times, with the same strategy each time.

    (2) The act of shuffling is the only point during the game where probabilities are assigned; thenceforth, the sequence of cards simply is what it is. Since the N-th card has a 50-50 chance of being red, the probability of winning the game when the N-th card is drawn is 50% for all N. (As described by my comment in 59.)

    To me, the question is whether (2) is more rational than updating one’s probability calculation after each card is drawn, as per the urn problem.

    If the first 5 cards are all black, and I haven’t yet raised my hand, then it is fact that 26 of the remaining 47 cards are red. 26/47 = about 55%. So the question seems to be whether the probability of the 6th card being red is 26/52 (probability that a red card was shuffled into the 6th position) or 26/47 (probability of each of the remaining cards being red).

    I am not sure what is the most rational way to answer that question. Both solutions seem compelling to me.

  62. Ken B

    RPLong:
    Nope. Try this thought experiment. Decide on your strategy. You are waiting to see what happens. You see a sequence of cards and make a decision; you make it when the odds have tipped, you think, sufficiently far in your favor. Now take the sequence of cards you observed and replace it with the sequence where all the clubs are turned into diamonds and vice versa, and all the hearts into spades and vice versa. The new sequence is equally likely to have happened as the sequence you observed. But in the new sequence all the reds and blacks are reversed. So whatever odds you gained or lost in the first case when you waited, you have lost or gained in the second case. When you finally made your choice your odds were probably not 50-50, but you could have found yourself in this reverse position, where the odds would be exactly reversed from what you really faced. The two cases being equally likely, the average is still exactly 50%.

  63. RPLong

    Ken B – the reason I do not like that explanation is because it involves assigning a non-zero probability to an event that was not observed. I am not interested in what the state of the remaining 47 cards would have been, had I observed something else. I am only interested in the state of the remaining 47 cards in light of what I actually did observe.

    In other words, I think I agree with your conclusion, but not with your explanation. I think the probability of the N-th card being red is 50/50 and is not conditional on sequence. But that has nothing to do with “an equal probability that the observed sequence might not have happened.”

    Maybe we are saying the same thing in a slightly different way, but I feel where I am going with this is cleaner.

    Having said that, I just googled the phrase “Martingale” and can no longer participate in the discussion. :)

  64. Jack

    RPlong,

    The way I would think about it is that the odds of you being helped by more black cards showing up early are exactly equal to the odds of more red cards showing up. And if the red cards show up your odds get screwed to the same degree that black cards would have helped your odds. But you have no idea whether you’ll get screwed or helped or neither because the odds are the same for all before you start playing.

  65. nobody.really

    Jeff Semel (55): “This puzzle looks like a variation of the optimal stopping problem. My strategy is to calculate the odds after each of the first 19 cards you see, as RPLong describes, then stop the next time the odds are better than any of the previous odds you’ve seen, or before the last card if better odds never occur.

    “I first read about optimal stopping in Martin Gardner’s column in Scientific American, in a puzzle about how to pick a spouse. The approximation he discussed was to let (1/e)*N potential partners go by, where N is the total, maximum number you can ever see. Then pick the next best. (This is sometimes called the 37% rule.) The approximation is more accurate for larger N.”

    I think RPLong and Jeff Semel are onto something.

    1. I had a similar reaction to Jeff’s: This feels like the Spouse Selection problem. (That problem involves finding a strategy for picking a relatively high-valued card from a deck when you know the number of cards in the deck, but not the range of values, and you flip through the cards one at a time, once. I think you assume that the values are normally distributed.)

    But that problem has important differences from Landsburg’s problem. First, the Spouse Selection problem involves finding an optimal strategy to achieve a high outcome. Landsburg’s problem involves finding an optimal strategy for finding one of two discrete outcomes.

    Second, in the Spousal Selection problem, you predict that future cards will be like past cards; if you draw a lot of 10s, this bolsters your confidence that the next card will be something like 10. In Landsburg’s problem, you predict that future cards will be UNLIKE past cards; if you discard a lot of reds, you bolster your confidence that the next card will NOT be red.

    Third – and most important – for all practical purposes, the value of each Spousal Selection card is independent of the value of prior cards; the value could be randomly generated for all it mattered. In Landsburg’s problem, the value of future cards is constrained by the past cards; indeed, you can predict the value of the last card 100%.

    2. The strategy suggested by RPLong and Jeff Semel presumes that we can derive benefit by opportunistically exploiting favorable distributions if and when they occur.

    This strategy seems intuitively right to me. Imagine my strategy was to simply wait until the last card; my odds of winning would be 50%. But if I add to this strategy the strategy of raising my hand whenever Landsburg has discarded more black cards than red cards, my odds of winning should improve to … well, to something over 50%. Worst case scenario: I wait until the last card – a strategy that we already determined had a probability of success of 50%. So, judged from the perspective before any cards are turned, this RPLong/Semel strategy doesn’t result in a lower expected value than 50%, and often will result in a higher expected value. What’s not to like?

    3. Wandering a bit off-topic: Consider Murphy’s law – “Anything that can go wrong, will – eventually.” This suggests that over an infinite horizon, all possible outcomes obtain. How reliable is such a prediction when results cumulate?

    To rephrase, what conclusions can we draw about the probability of quirky opportunities arising in infinite series? Rather than having a deck of 52 cards, imagine you do Landsburg’s problem with a deck of infinitely many cards. Under that scenario, you could wait forever for a favorable probability to arise; indeed, you could wait until a favorable probability of any specific magnitude to arise. (“I won’t raise my hand until black cards are 60% of the discarded cards.”) But by the same token, after a million cards have been discarded, it seems decreasingly likely that the pattern of future discards will alter the ratios very much.

    Can we draw any conclusions about the likelihood of EVER reaching a 60% ratio when considering an infinite future? Again, can we assume that all conceivable outcomes arise over an infinite time horizon – even as the cumulative results make deviations in the future seem unlikely?

  66. Zazooba

    #53 (and many others) “Sure, playing the game hundreds of times means that your odds of winning in the long run will ultimately be 50%. But that doesn’t mean that your odds of winning are equivalent during every turn of an individual game.”

    This misses the point. Yes, your odds of winning fluctuate over time. But that doesn’t mean you can improve your expected payoff at any point in time.

    The take-the-last-card-strategy approach demonstrates this. At any point in time, your chances of winning are no better than if your strategy is to take the last card. Yes, sometimes your probability of winning if you take the next card is > .50, but YOUR PROBABILITY OF WINNING IS ALSO > .50 IF YOU TAKE THE LAST CARD. Therefore, you can never beat a strategy of taking the last card. Therefore, no strategy has a better ex-ante outcome than a .50 probability of winning.

    I think the logic is confusing people. (1) No strategy beats take-the-last-card, (2) Expected ex ante payoff to take-the-last-card is .50, (3) therefore, no strategy can beat .50; therefore, taking the first card is just as good as taking the last card.

    The take-the-last card approach simplifies analysis, because (1) it is a feasible strategy at any point in time, (2) it is easily shown to be as good as any other strategy at any point in time, and (3) we know its ex ante payoff. We don’t have to do anything but trivial math using the approach.

  67. David Sloan

    #65: Once I corrected the bug in my first draft, my program claims that it *never* makes a difference to draw another card before stopping. Sadly, there is no sense in which we can gain by “opportunistically exploiting favorable distributions”.

    Of course, I’ve gotten this wrong once already, so take it with a grain of salt. Even better, peer review me: http://pastebin.com/fBCKbPD3

  68. Zazooba

    At first, my intuition was that this was an optimal stopping problem, and that you should take the next card when the odds swung in your favor.

    This is misleading, however. In an optimal stopping problem, you throw away a good draw to continue taking draws, AND SUBSEQUENT DRAWS ARE NOT AS GOOD, ON AVERAGE, AS THE ONE YOU THREW AWAY. (In the mate-choosing model, you continue to draw from the same pool of potential mates, so subsequent mates are not as good, on average, as the one you threw away. But in this problem, you don’t know the quality of the draw you will get, only its expected value, and that expected value is the same for all the subsequent draws as well. So the mate-selection paradigm is not the same.)

    This problem WOULD be the same as the mate-selection problem if Steve showed you a card AND THEN you chose whether to keep it. In that case, the optimal strategy is to take the first red card you draw: payoff = 1.0 for certain. THAT would be the analog of the mate-selection problem in this situation.


  70. Zazooba

    #65:

    “This strategy seems intuitively right to me. Imagine my strategy was to simply wait until the last card; my odds of winning would be 50%. But if I add to this strategy the strategy of raising my hand whenever Landsburg has discarded more black cards than red cards, my odds of winning should improve to … well, to something over 50%. Worst case scenario: I wait until the last card – a strategy that we already determined had a probability of success of 50%. So, judged from the perspective before any cards are turned, this RPLong/Semel strategy doesn’t result in a lower expected value than 50%, and often will result in a higher expected value. What’s not to like?”

    The following part is incorrect: “Worst case scenario: I wait until the last card – a strategy that we already determined had a probability of success of 50%.”

    Under your strategy, if you wait to the last card, the probability of success is less than 50%, because you only get to the last card if the draws have been bad up to the end (more reds than blacks). So it is an error to think that if you never raise your hand, you somehow end up with the last card being 50/50 black/red. In last-card language: if the deck is going against you (too many reds to date), yes, the next card is red with probability < 50%, but the last card is also red with exactly the same probability, so NOT taking the next card doesn’t improve your chances vis-à-vis the last-card strategy.

    Nothing ever matters in this game. We should call it the Sartre game.

  71. Yancey Ward

    In some respects, this reminds me of the coin-toss problem in optimal stopping, where you toss a coin repeatedly and get paid a reward based on the fraction of heads you have tossed up to the point of your choosing. As you toss the coin more and more times, you will approach the limit of 1/2 heads, 1/2 tails. If you are lucky, you get a head the first time and the maximum payout, but if you don’t, the math (well beyond me) can help you assess at what point it makes sense to stop afterwards. There, I don’t think it makes sense to stop the moment you have tossed 1/2 heads and 1/2 tails.

    However, for this game, played once – I would just wait for a point where there are more red cards than black left in the deck. The odds that this never occurs are small, but for a one-off, it is how I would approach the game. Of course, one can then ask what surplus of red cards left I would wait for. Would I wait for a 1 surplus or a 2 surplus or a 3 surplus… etc.? That, I don’t have an answer for. I might well move the instant the odds are better than 50%.

  72. Mike H

    As long as you vow to take the last card when there’s only one card left, the chance of winning is 50% no matter what you do.

    Here’s my proof.

    A strategy p is a map from (r,n) to the probability that you will choose the next card when there are r red cards left in a pack of n.

    Assume: we vow to take the last card, so p(0,1) = p(1,1) = 1.

    w(r,n;p) is the chance of winning with strategy p, when there are r red cards left out of n.

    Theorem: w(r,n;p) = r/n. In particular, w(26,52;p) = 50%.

    Proof: by induction.

    Since p(0,1) = p(1,1) = 1, clearly w(0,1;p) = 0/1 and w(1,1;p) = 1/1.

    Assume, now, that w(s,n-1;p) = s/(n-1), for all s and p.

    What is w(r,n;p) ?

    There are three cases:
    * I draw the next card. This takes place with probability p(r,n). My chance of winning, given this case, is r/n.
    * I reject the next card, which turns out to be red. This takes place with probability (1-p(r,n)) * r/n. My chance of winning, given this case, is w(r-1,n-1;p).
    * I reject the next card, which turns out to be black. This takes place with probability (1-p(r,n)) * (n-r)/n. My chance of winning, given this case, is w(r,n-1;p).

    By the inductive step, w(r-1,n-1;p) = (r-1)/(n-1) and w(r,n-1;p) = r/(n-1).

    My chance w(r,n;p) is therefore

    p(r,n) * r/n
    + (1-p(r,n)) * r/n * w(r-1,n-1;p)
    + (1-p(r,n)) * (n-r)/n * w(r,n-1;p)

    =

    p(r,n) * r/n
    + (1-p(r,n)) * r/n * (r-1)/(n-1)
    + (1-p(r,n)) * (n-r)/n * r/(n-1)

    =

    p(r,n) * r/n
    + (1-p(r,n)) * (r^2 - r)/(n^2 - n)
    + (1-p(r,n)) * (nr - r^2)/(n^2 - n)

    =

    p(r,n) * r/n
    + (1-p(r,n)) * (r^2 - r + nr - r^2)/(n^2 - n)

    =

    p(r,n) * r/n
    + (1-p(r,n)) * (nr - r)/(n^2 - n)

    =

    p(r,n) * r/n
    + (1-p(r,n)) * r/n

    which equals r/n.

    QED.

    Note that this proof assumes the deck is properly shuffled. If the deck is not properly shuffled, the result doesn’t hold.
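    The theorem can be spot-checked with exact arithmetic against arbitrary, even randomized, strategies; a sketch (the table of random p values is hypothetical):

    import random
    from fractions import Fraction

    def w(r, n, p):
        """Win chance with r reds among n cards left, playing strategy p."""
        if r == 0: return Fraction(0)
        if r == n: return Fraction(1)   # also covers the forced last card
        take = Fraction(r, n)
        wait = Fraction(r, n) * w(r - 1, n - 1, p) + Fraction(n - r, n) * w(r, n - 1, p)
        return p(r, n) * take + (1 - p(r, n)) * wait

    random.seed(0)
    table = {(r, n): Fraction(random.randrange(11), 10)
             for n in range(2, 9) for r in range(1, n)}
    p = lambda r, n: table[(r, n)]
    assert all(w(r, n, p) == Fraction(r, n)
               for n in range(1, 9) for r in range(n + 1))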

  73. Mike H

    Ian @4: “My strategy is to write a Python script…”

    Mine was to write a few lines of algebra. We came to the same conclusion, though.

  74. Ken B

    RPLong:
    No again, mine is spotlessly clean! Here’s a more formal way to put it. The space of observable events is the space of ordered sequences of 52 cards. It is possible to define an isomorphism on this set by associating each club with the corresponding diamond and vice versa, and each spade with the corresponding heart and vice versa. Under this isomorphism, each sequence of cards giving rise to a sequence of colors has a doppelgänger, identical in frequency (i.e. 1) within the set, which gives rise to the reversed sequence of colors. In evaluating the effectiveness of a strategy it must be tested against the entire space of possible sequences: you test your strategy by seeing whether, on the space of possible sequences, it gives a result better than 50%. Because of the symmetry, it does not.
    But again the intuitive shortcut is the same. After you have seen n cards, there remain 52 minus n cards. The chance of being red or black is the same for each of these remaining cards. Therefore the strategy for the game is equivalent to guessing the bottom card of the deck, the last card that would be revealed. Starting from the initial condition it is clear that the answer is 50%. No strategy can change that, because no sequence of playing the cards other than the bottom card can change the prior probability of what that card was.

  75. Starman

    Shades of the Monty Hall problem, no?

  76. Capt J. Parker

    I’m convinced you can do better than 50/50 with a card-counting strategy. Count the cards this way: each red is -1, each black is +1. For example: if 10 cards have been dealt and what came up are 6 reds and 4 blacks, then the count is -2. A brute-force simulation in Excel of 100 deals of a full deck gave me the following distribution of maximum counts over the course of dealing 52 cards:

    Max count / Frequency
    0 / 5
    1 / 9
    2 / 23
    3 / 10
    4 / 15
    5 / 12
    6 / 9
    7 / 9
    8 / 3
    9 / 5
    10 or more / 0

    (Looks like a binomial distribution in a histogram?)

    Surprisingly, 86% of the time there is a point in the deal where at least 2 more blacks have been dealt than reds. And only 5% of the time is there a deal where there is never a point at which more blacks have been dealt than reds (max count = 0). So I reason that if you wait for the count to be +2 and then raise your hand, you will do better than 50% wins, since you will be betting with the odds slightly in your favor 86% of the time. I ran 150 simulations and won 57% of the time. Not statistically significant, but I’m tuning my model so I can run a thousand simulations and see what happens. I would think an optimal strategy would factor in the number of cards dealt as well as the count. Someone out there should be able to come up with a closed-form solution for the optimal strategy.
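    150 trials are too few to distinguish 57% from 50%; the +2-count rule can be rerun at scale with a short sketch (this is a re-implementation, not Parker’s spreadsheet). Consistent with the proofs above, the win rate settles near 0.50:

    import random

    def trial():
        deck = ["R"] * 26 + ["B"] * 26
        random.shuffle(deck)
        count = 0  # blacks dealt minus reds dealt
        for i in range(52):
            if count >= 2 or i == 51:  # raise at a +2 count, or forced on the last card
                return deck[i] == "R"
            count += 1 if deck[i] == "B" else -1

    n = 100_000
    print(sum(trial() for _ in range(n)) / n)  # settles near 0.50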


  78. prior probability

    This puzzle reminds me of the secretary problem, or optimal stopping problem, and if my analogy (as well as my recollection of those problems) is correct, then the optimal stopping point would occur after going through about 1/3rd of the deck of cards (or thereabouts).

  79. Jeff Semel

    I am now persuaded there is no optimal stopping strategy that will secure any advantage.

  80. Harold

    Ken B – I also don’t like your explanation, although I wouldn’t go as far as to say you were wrong. “So whatever odds you gained or lost in the first case when you waited, you have lost or gained in the second case.” Well, not quite, as you wouldn’t have called at the same time.

    Let’s say we have a sequence BBRRR. We would call after two cards. However, it is equally likely that the cards were reversed. OK, in that case we would call after 5 cards. In each case there are more reds than blacks left in the deck when we call.

  81. Andy

    I would wait for all the black cards to run out then raise my hand. If the reds run out first I wouldn’t raise my hand at all.

  82. Jack

    Andy, I thought the same way originally too, until I was led by the hand to an enlightened path. You might as well raise your hand before he flips over the first card.

  83. Ken B

    Harold: What exactly is your rule?

  84. Harold

    Capt. J Parker. In those 5% cases where the count was never above 0, how often was the last card red? You may have a slight edge in 95% of runs, but that could all be lost if you have a big loss in those 5% of cases.

  85. Harold

    #83. I have abandoned any rules. I was never convinced it would be possible to beat 50%, but could not see the flaw in the method where you wait until there are more blacks than reds dealt then raise your hand – basically the “random walk” approach. It seems that using this approach you can select a point at which there are more reds than blacks in the pack. From Capt J Parker’s analysis, this occurs in 95% of cases. However, as I said above, this could all be lost if many of those 5% of cases had black as the last card.

    The count at the end must be 0. If zero is the highest count we reach, and we have got to the last card without there ever having been more blacks than reds, we must have dealt 26 reds and 25 blacks or else we would have a count of +1. The last card must be black. Thus this method has an approximate 5% failure rate built in. I guess this cancels out the small gains from the 95% of cases where we get an edge.

  86. Jonathan Kariv

    I’m late to the party on this one (stupid RSS feed delays, mutter mutter).

    Anyway, I think all strategies are the same. Pretend the value of the prize is 1. Then I want to define V(r,b) as the value of playing this game with an (r+b)-card deck with r red cards and b black cards, assuming optimal play.

    My claim is that V(r,b)=r/(r+b). Which means in particular V(26,26)=1/2. Which we can achieve by picking the first card.

    Proof (I think): For a 1 card deck (either 1 red or 1 black) then clearly the claim is true. I want to induct on r+b.

    If we go immediately, then we win r/(r+b) in expectation. If we delay, then in expectation we win V(r-1,b)*r/(r+b) + V(r,b-1)*b/(r+b). By the inductive hypothesis V(r-1,b) = (r-1)/(r+b-1) and V(r,b-1) = r/(r+b-1), so the delayed value is r(r-1)/((r+b)(r+b-1)) + br/((r+b)(r+b-1)) = r(r+b-1)/((r+b)(r+b-1)) = r/(r+b), and we get the result.

    Essentially the same proof works on W(r,b), the value of the game under the worst possible play.

  87. 87 87 nobody.really

    Ian @4 My strategy is to write a python script…

    Mine was to write a few lines of algebra. We came to the same conclusion though.

    Really? In my experience, python scripts produce sillier conclusions — e.g., “Good thing I didn’t mention the dirty knife.”

  88. 88 88 RPLong

    @ Ken B #74

    I don’t disagree with anything you’ve said there. What I disagree with is the idea that you can assess the effectiveness of your strategy within a particular instance of the game by looking at the long-run payoff of applying that same strategy to every instance of the game.

    You are almost making an EMH argument: Nobody beats the street in the long run. I agree, but that doesn’t mean you can’t beat the street in the short run. That’s the whole debate about the EMH. Taken too seriously, it leads to silly perspectives like refusing to see obvious speculative bubbles.

    I think you’re making a similar error when you observe five black cards in a row and make a statement about how it is equally likely that you would have seen a different array of five cards. The probability of something you have already observed is always 100%, because there is nothing stochastic about real-world observations.

    Hence, if the first 5 cards happen to be black, 55% of the remaining cards are red as a matter of pure arithmetic. I agree that the probability that the 6th card of any shuffled deck is red – for any instance of this game – is 50% a priori, but from the perspective of the game I’m playing, 55% of the remaining cards are red. It is at this point more likely that the 6th card will be red than black.

    Before the game started, though, I had only a 50% chance of guessing that the 6th card would be red.

    I keep coming back to the sense that there is a fallacy of composition involved somewhere. The probability that any one card in the deck is red definitely equals 50% and is unconditional on the redness or blackness of any other card in the sequence. That much is obvious enough to me.

    But I am not so sure it can be demonstrated that – within a single instance of the game – a guesser can’t take advantage of a slightly unequal sequence.

  89. 89 89 Jack

    RPLong: “But I am not so sure it can be demonstrated that – within a single instance of the game – a guesser can’t take advantage of a slightly unequal sequence.”

    You can take advantage of it in the sense that if it were to happen (say, you get to a point where there are 10 more black cards than red cards), then you have an advantage at that point. But before the cards started rolling off of Landsburg’s hand, it was equally likely that, as you waited for 10 more (or however many) black cards to show up, instead 10 more (or however many) red cards would show up, hurting your chances that the next card would be red instead of helping. When you average all those odds out, it turns out that you can never have a strategy that will be better than 50%… That is not the same thing as saying that if you played a single game you would never come across a pattern where you would know the next card has higher odds than 50% of being red. It’s just that before you started playing you had the same odds of encountering a situation where your odds could be below 50%, and those odds offset each other before you start playing.

  90. 90 90 RPLong

    Jack – I have a sense that you and Ken B are absolutely correct, but I am still struggling to reconcile it in my mind.

    Here’s a question: Do you consider it equally likely that shuffling the deck will produce a perfect alternating RBRBRB… pattern as it will produce a pattern in which, at some point during the game, you will face better-than-50% odds?

  91. 91 91 Ken B

    @RP Long 88:
    I’m going to get a bit formal here, because I think you are confused about something fundamental about probability theory, not this puzzle.

    I am making the quite uncontroversial claim that a strategy is a mapping from the set of possible sequences into a 2-member set {r,b}. Then there is a further mapping from {the sequences} x {r,b} -> [0,1] which defines the payoff. (J Kariv gives this second mapping explicitly.) The expected value of my strategy may thus be calculated *before Steve begins*. My symmetry argument is that the symmetries of the second map and the symmetries in the sequence space prove that the expected payoff is .5 for all such strategies.
    Now that’s not a user friendly way to express the argument, but you seem to think we cannot give probabilities to past events which did not occur. My argument does not depend on doing that. It’s a purely orthodox bit of probability theory: go back to the sample space and count.
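
    That counting can be carried out exhaustively on a toy deck. A sketch, assuming 2 reds and 2 blacks with a forced stop at the last card, so a strategy is just a stop/continue choice at each of the 7 possible observed prefixes:

        from itertools import permutations, product

        decks = sorted(set(permutations("RRBB")))   # the 6 equally likely orderings
        # decision points: every prefix of 0, 1 or 2 observed cards
        prefixes = [p for n in range(3) for p in product("RB", repeat=n)]

        def wins(stop, deck):
            # stop: prefix -> True (raise hand now) / False (keep watching)
            for n in range(3):
                if stop[deck[:n]]:
                    return deck[n] == "R"
            return deck[3] == "R"                   # forced onto the last card

        for choices in product([False, True], repeat=len(prefixes)):
            stop = dict(zip(prefixes, choices))
            assert sum(wins(stop, d) for d in decks) == 3   # exactly 3 of 6, every time
        print("all", 2 ** len(prefixes), "strategies win exactly 50%")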

  92. 92 92 Ben Kenendy

    “But I am not so sure it can be demonstrated that – within a single instance of the game – a guesser can’t take advantage of a slightly unequal sequence.”

    Here’s a way to think of it –

    Person A is the “stubborn waiter” – his strategy is to always take the last card. The odds of winning are 50/50.

    Person B is the strategist – he waits for some favorable situation where there are more winner cards left than loser cards. Then he pounces!

    When “B” raises his hand, the probability of the next card being a winner is the same as the last card being a winner, because the cards are random. “B”, despite all his plotting and strategizing, will never be better off than “A”, who agreed from the very beginning to take the last card with 50/50 odds.

    I think the snag that runs up against intuition is that “B” might get a favorable run of cards that increases his chance of winning above 50%. This is true, but *if* that happens, person “A”‘s chances have gone up the exact same amount. Once again, “B”‘s strategy is not favorable against “A”, who had 50/50 odds from the outset.

    Imagine a deck with 999,999 loser cards and one winner card – it should now be pretty intuitive that the probability of stopping at the exact right time (card 295,199 for example) is the same as the card being exactly at position 1,000,000 in the first place.

  93. 93 93 RPLong

    Ken B – No arguments there. The funny thing to me is that I understand this better on a technical level than I do on an intuitive level.

    See my post @61. After observing 5 black cards in a row, is the probability that the 6th card will be red 26/50 or 26/47? For me, that’s the crux of the issue.

    I understand that I don’t have to ask that question in order to make a statement about the initial expected value of a win, but how do I make the argument that the probability of the 6th card’s being red is something other than 26/47? If I knew how to make that argument, then I would have some “mastery” over this question. ;)

  94. 94 94 RPLong

    That should read “26/52 or 26/47” of course…

  95. 95 95 Capt. J Parker

    Harold 84,
    The last card was never red. If it had been, the count on the 51st deal would have been 1. I know that I don’t have airtight proof that you can beat 50% odds, but I still think you can do better than 50%. Here is some real rough estimating:
    1) I raise my hand when I get a count of 2
    2) I estimate that the average number of cards dealt when I get my target count of 2 is 26 cards.
    3) So, my average odds of winning are 14/26 = 53.8% (with a count of +2 after 26 cards, 14 of the 26 remaining cards are red). This seems like it should compensate for a total loss of the 5% of deals that never have a count greater than zero.

    And I’m still guessing an optimal strategy will be something like:
    Raise hand if the count is 2 within the first 30 deals, otherwise raise hand if count is 1 within the next 10 deals, otherwise raise hand when the count is zero.

    The probability calculations seem daunting. But I would think you can prove you can beat 50% with a Monte Carlo simulation. I can’t get to it till tonight.

  96. 96 96 Harold

    #94, #92. It is this bit of information that reconciles it intuitively for me. Let’s take my earlier strategy of waiting until you have one more black dealt than red in the “random walk”. Capt. Parker finds empirically that this happens in 95% of cases. In every one of these cases it would seem that you could take advantage of there being more reds in the remaining pile to get a slight advantage. But in 5% of cases you are guaranteed to lose, because there are never more blacks than reds dealt, and in those cases the last card must be black. If the cards are arranged in such a way then you will lose. It seems intuitively possible that those 5% of guaranteed losses exactly balance out the slight gains in the other 95% of cases.

  97. 97 97 nobody.really

    I’m certainly willing to be persuaded by the results of simulations. But as with some others, I’m losing confidence in my ability to beat 50%.

    1. A strategy of “Pick the last card” will win 50% of the time, judged before any cards are flipped.

    2. A strategy of “Pick the next card when the odds are in your favor (and wait until the last card otherwise)” will result in a winning probability of < 50% when you get to the last card, because it will have systemically excluded all the circumstances in which the odds of the last card being red exceed 50%.

    3. But how often will you GET to the last card? The odds will be in your favor at some point MORE THAN 50% OF THE TIME. That is, 50% of the time the first card will be black; bingo, the odds that the next card will be red are already in your favor. And even if the first card is red, under SOME of the remaining scenarios (albeit fewer than 50% of them) the number of black discarded cards will eventually equal or exceed the number of red discarded cards before you reach the end of the deck. Bingo, we have an aggregate “odds in your favor” rate of 50%+.

    4. To reach the conclusion that we can’t improve on a 50% success rate in aggregate, I need to conclude that something about the strategy of “Pick the next card when the odds are in your favor” is systemically tied to super-duper bad odds when the odds are NOT in your favor.

    Now, in a game in which the outcomes are continuous – the outcomes vary continuously within a range – lots of wins can be offset by a few big losses. But here, we’re just counting wins and losses; in effect, these outcomes are weighted equally. In a binary world, gaining a statistical advantage 50%+ translates directly into winning 50%+ of the time, right?

    Well, yes and no. Yes, this means my strategy will succeed more than half of the time, with respect to more than half of the cases. But that predicts victory in only 25%+ of cases overall (50%+ x 50%+). I would expect to lose nearly 50% of the cases even when I had a statistical advantage. And I’d expect to lose more than 50% of the cases when I never achieved a statistical advantage and simply waited for the last card, even though this circumstance would arise less than 50% of the time. I can’t calculate the overall expected value of all of this – but it sure looks like it could be close to 50/50.

  98. 98 98 Jeff Semel

    @CaptJ. Parker #94

    The probability calculations seem daunting. But I would think you can prove you can beat 50% with a Monte Carlo simulation. I can’t get to it till tonight.

    I did a Monte Carlo last night with several different strategies and was unable to beat 50%.

    After I realized my initial strategy (#55) was wrong, I thought it was a dynamic programming problem, but the arguments of Mike H and Ken B are persuasive.

  99. 99 99 Jeff Semel

    Oops, italic quotation is reversed.

  100. 100 100 Kirk

    95% of the time you get a slight advantage above 50%, 5% of the time your odds are 0.

    That is the point that needs to be made to account for the “feeling” that you should be able to beat 50%

  101. 101 101 Jonatan

    Sloan > Script seems right to me, and that was a neat memoize function to boot. (I don’t understand why you are writing Fraction(1,1) instead of 1 in line 7 though.)

  102. 102 102 Harold

    “is systemically tied to super-duper bad odds when the odds are NOT in your favor.” Indeed – if I am right it is a certainty. If the last card is not black, at some point you must have arrived at a point where the odds were in your favour.

  103. 103 103 Capt. J Parker

    @ nobody.really 96,
    You will get to the last card only 5% of the time according to my sim in 77.

    Jeff Semel 97,
    Ken B’s argument in 91 doesn’t quite do it for me, because when going through the deck once you can easily have a point where the odds are very much in your favor and another point where they are very much against you by the same amount. An “unfavorable” shuffle is not mutually exclusive with a favorable shuffle. 95% of shuffles are in your favor at some point during the deal.

  104. 104 104 Kirk

    My intuitive argument against “you can’t do better than 50%” is that when you get this much unanimity in response to a PUZZLE, the accepted answer must be wrong.

    I do have difficulty describing why “I will pick the last card UNLESS the count gets to +5” would not work out in the long run.

  105. 105 105 Yancey Ward

    Andy at #81 may well have the solution, given Landsburg hasn’t fully foreclosed the idea of reshuffling and starting over if a hand has not been raised.

  106. 106 106 nobody.really

    If the last card is not black, at some point you must have arrived at a point where the odds were in your favour.

    I guess this is true, in a reductive sense. Imagine the deck is organized with 25 red cards, then 26 black cards, then the final red card. The odds would never be in my favor until I reached that very last card – when, as you say, the odds would go 100% in my favor.

    May the odds be ever in your favor. If you insist on them being in your favour, forget it. I’ve already made a Monty Python allusion; that should be enough anglophilia for one thread.

  107. 107 107 Jack

    I really do think the simplest way to understand this is, before the game begins, to think to yourself: the odds of getting a favorable string of cards must be the same as the odds of getting an unfavorable string of cards, in a symmetrical manner. So I figure what’s the point, and raise my hand immediately before the first card is drawn.

  108. 108 108 Olivier Chassé St-Laurent

    Here is the full proof (continuing #46) that it is irrelevant when one raises his hand.

    First, we prove that we are dealing with a martingale. For full generality, we are not assuming any particular number of black and red cards, except that the total number of cards has to be at least 1 (not a stringent condition when picking a card).

    Assume that the stochastic process X (the probability of winning) is adapted to a filtration F: it is F-measurable (which one can verify by counting the remaining cards) and its expectation is finite (between 0 and 1). We need to show that for all t and s (t > s), the conditional expectation of X at time t given F_s is equal to its value at time s. I am only going to prove that it is true for t = s + 1, since one can use it inductively (iterated expectations) to show that it is true for any other t.

    Given that we have n black cards, and m red cards,

    X_s = m / (n + m)

    E[X_t | F_s] = P(B) * (X_t | B) + P(R) * (X_t | R)
    = [n / (n + m)] * [m / (n + m - 1)] + [m / (n + m)] * [(m - 1) / (n + m - 1)]
    = [m / ((n + m) * (n + m - 1))] * (n + m - 1)
    = m / (n + m)
    = X_s

    This shows that the stochastic process X is indeed a martingale.

    To apply Doob’s optional stopping theorem, we need to prove 3 conditions:

    (1) the stopping time u is a.s. bounded (we can add a rule to force the player to raise his hand on the last card, just in case)
    (2) the stopping time u has a finite expectation (u < n + m, obviously) and the martingale increments are bounded (equal to 0, as shown above)
    (3) the absolute value of X_u^t for all t is a.s. bounded (also obvious, as X is always between 0 and 1 for t < n+m or u < n+m)

    Thus, by this theorem, if the number of cards in the deck is finite (in this case, 52 or n+m) and greater than zero, the expectation of winning at any stopping time is equal to the probability of winning at time 0.

    Interestingly, and surprisingly, this shows that for any finite deck size greater than 0, and for any distribution of n and m black and red cards, respectively, one cannot improve his chances of winning by waiting.

    One can think of it in terms of financial derivatives. We are dealing with a compound option: an Asian option on a binary option of 1-period maturity. This shows that the maximum price one should pay and the minimum price one should accept on such an option is m / (n + m) times the payout, i.e. the initial probability of winning. While the value of the compound option may vary, the probability of each move (up or down) compensates for it.
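
    (For the non-measure-theorists, the one-step identity above is easy to check numerically. A throwaway Python sketch:)

        from fractions import Fraction

        def p_red(n, m):
            # X_s: probability of winning by stopping now, with n blacks and m reds left
            return Fraction(m, n + m)

        # one-step conditional expectation: black with prob n/(n+m), red with prob m/(n+m)
        for n in range(1, 27):
            for m in range(1, 27):
                one_step = (Fraction(n, n + m) * p_red(n - 1, m)
                            + Fraction(m, n + m) * p_red(n, m - 1))
                assert one_step == p_red(n, m)      # E[X_{s+1} | F_s] = X_s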

  109. 109 109 Al V.

    If I have a deck of 8 cards, 4 black and 4 red, then there are 70 possible sequences of cards. If I use the strategy of raising my hand as soon as I have seen more black cards than red, my chance of winning is 50%.

    – 35 times, the first card will be black and I raise my hand. The second card is red 20 times, and black 15.
    – 10 times, the first card is red, but then I get two blacks. 4 times the 4th card is black and 6 times it is red.
    – 6 times, 2 of the first three were red, but then cards 4 and 5 are black. Of these 6, 2 times the next card is black and 4 times it is red.

    So far so good. I win 30 of 51 times. But nobody.really at #96 is right. There are 19 instances where after 5 cards I still haven’t raised my hand. The problem is that of those 19 instances, in 5 card 6 is red, and in 5 card 7 is red, and in the remaining 9, the cards are both black. So no matter what I do with the last 2 cards, I lose 14 of 19 times.

    And my odds of winning are 50% in aggregate (30+5)/(51+19). I don’t see how this changes with more cards. This assumes that I receive a prize if I get a red card, but nothing bad happens if I get a black card.

    If, on the other hand, let’s say I win $1 on red, lose $1 on black, and can choose to not raise my hand. Then in this example, I win 30 times, lose 21, and break even 19.

    FYI, I treated this as an 8-bit number where exactly 4 bits are set. A 52-card deck is simply a 52-bit number with 26 bits on and 26 off. All the calculation has to do is go through the bits in sequence and look for an instance where more bits are set than off.
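
    In case it helps anyone reproduce the numbers, here is a rough Python equivalent of that enumeration (a sketch; I make no promise it matches my original code line for line):

        from itertools import combinations

        N, R = 8, 4
        wins = 0
        for red_positions in combinations(range(N), R):    # all 70 ways to place 4 reds
            deck = [i in red_positions for i in range(N)]  # True = red, False = black
            count, raised = 0, False
            for i in range(N - 1):
                if count > 0:                              # more blacks than reds seen
                    wins += deck[i]
                    raised = True
                    break
                count += -1 if deck[i] else 1
            if not raised:
                wins += deck[-1]                           # forced onto the last card
        print(wins, "of 70")                               # 35 of 70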

  110. 110 110 Al V.

    P.S., in my 3rd paragraph, the card numbers should have been 7 and 8, not 6 and 7. I erred in transposing my notation from bitwise to card counting.

  111. 111 111 Jonathan Kariv

    Asked this one around. My advisor had a nice solution. Notice that at each stage you do as well to draw the last card as the next card. Therefore your probability of winning is always the same as the probability that the last card is red, i.e. 1/2.

  112. 112 112 Jack

    My only advice is, if you do choose not to raise your hand before the first card is drawn, then do keep track of what has happened as you go through the deck. Certainly if you go through 10 cards and they’re all black, then the odds that the next card is red are greater than 50%.

  113. 113 113 Ken B

    @110: Yes, that was one of mine and also Ramprasad earlier.

  114. 114 114 Ken B

    RP Long wrote: “After observing 5 black cards in a row, … how do I make the argument that the probability of the 6th card’s being red is something other than 26/47? ” By making a mistake.

    (Hey, you asked :) )

    26/47 IS the probability given that the first 5 are black. If there are r reds left and b blacks left the odds are r/(r+b). That’s true for each remaining card, including the bottom one.

  115. 115 115 Ken B

    @Kirk 103: That’s the Good Puzzle Theorem. But in this case there IS a good “aha” answer: you are just guessing the bottom card. So it’s covered by the GPT.

  116. 116 116 Ken B

    @Capt Parker 102:
    You can always solve these things by going back to the sample space and counting. That’s how probability works. That might be tricky, it might be easy, but in principle it is right. If it seems that it isn’t right then you have misunderstood or your specification is incomplete. Let’s say you are right that 95% of sequences are favorable at some point. That doesn’t help, because you cannot state your strategy in terms of that “some point”. You have to give a rule. So if your rule is when 2 more blacks are drawn than reds, you never get to a case where there are 3 more blacks drawn.

  117. 117 117 Olivier Chassé St-Laurent

    @115:

    And any rule that properly defines a stopping time will not improve or worsen the outcome. I say properly because the rule ‘never raise my hand’ does not define a stopping time, and is the only possible rule that would worsen the outcome.

    Counter-intuitively, even a rule such as ‘raise as soon as there are more black cards dealt (by 1, or 2, or more), or on the last draw’ (the latter part is important) will not worsen the outcome.

  118. 118 118 David Sloan

    #100: I didn’t know for sure that mixing int and Fraction would do the right thing, but apparently it does. My code is a little simpler now, thanks. :)

    #111: It doesn’t help at all to keep track of anything. Your odds from standing are always identical to your odds from continuing, until there’s only one card left, at which point standing is forced.

  119. 119 119 iceman

    94 – “I estimate that the average number of cards dealt when I get my target count of 2 is 26 cards”

    I think this may be a source of the ‘intuitive dissonance’ – the issue is when you *first* hit your rule, i.e. it is path dependent, and in this case decidedly front-loaded: you tend to hit the rule early, when the odds it gives you are worse (e.g. only 52% if you hit it after 2 cards). Not a proof, but if you are conceding the game 14% of the time (not 5% – that was for +1), you need a success rate of 59% on the other 86%, and I have a hunch that is about the right amount of front-loading.

    103 – “I do have difficulty describing why, “I will pick the last card UNLESS the count gets to +5″ would not work out in the long run.”

    I think the answer is that if each strategy separately is 50/50 (a priori), then combining them changes nothing. You’re really just using a +5 count rule up to the last 5 cards and then switching to pure last-card.

  120. 120 120 Dave

    I would flip a coin before each card is dealt and raise my hand if it turns up heads.

  121. 121 121 Capt. J Parker

    Jonathan Kariv 110,
    The probability of the last card being red is not always 1/2. The probability that the last card is red GIVEN that you have drawn 10 blacks and only 9 reds is slightly more than 1/2. Per my simulation in 77, I’m confident that you will be in a position where the probability that the last (or next) card is red is slightly greater than .5 at some stage in about 95% of all shuffles.

    Ken B 115
    95% of the time there will be 1 more black drawn than red. If the rule is to raise your hand when that happens, then I’m betting at better than 50% odds 95% of the time. I took your argument to be that no matter what the rule, because of symmetry, I will only be betting with improved odds 50% of the time. Your earlier point about being screwed by the last 5% is well taken; I can’t disprove that that would happen. But if we agree about my assertion that 95% of bets will be slightly favorable, then I find a symmetry argument a little harder to follow, is all.

  122. 122 122 nobody.really

    I would flip a coin before each card is dealt and raise my hand if it turns up red. Then I’d put a band-aid on that hand.

  123. 123 123 nobody.really

    If someone wanted to do some research, I’d be curious about the ratio (length of total responses to Landsburg post)/(length of Landsburg post). I have to suspect we’re getting close to a maximum here.

    To misquote Churchill, never in the field of game theory have so many words been spilled to address so few….

  124. 124 124 RPLong

    Capt. J Parker #120

    That is exactly what I was thinking. I was going to calculate the probability of experiencing at least one more black than red at some point during a 52-point non-repeating sequence, but then I had to get actual work done. lol.

    I am not 100% sure, but reasonably confident that a majority of games will have at least one such instance. That gives me slightly better than 50% odds at best. I *think*.

  125. 125 125 Neil

    What makes this problem so interesting is that intuitively it seems so likely you can improve your chances just by waiting. The probability of a red card is doing a random walk, so just wait until it walks in your favor. With 52 draws, it seems so likely. But the chance of the walk moving against you always exactly offsets the gain in probability that you are waiting for.

  126. 126 126 nivedita

    How about this variant of the problem: You start with $10, and instead of getting just one chance, you can bet on the color of the next card, any amount between $0 (no bet) and all the money you have at that point. What is the optimal betting strategy, and what is the expected final wealth?

  127. 127 127 Olivier Chassé St-Laurent

    @125: In this case, one can use the martingale betting system to obtain a sure gain. Since you can lose at most 26 times, one needs to adjust the initial bet so that one is certain of having enough money to double the bet the 27th time.

    With an initial bet of (and sure gain of) $1, one would need $134,217,727 to cover it. Simply divide the initial wealth by 134,217,727 to get the initial bet.
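
    (Checking the arithmetic: losing all 26 times before the sure win means staking 1 + 2 + 4 + … + 2^26 = 2^27 - 1 = 134,217,727 in total, so a bankroll of $134,217,727 covers an initial bet of $1.)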

  128. 128 128 nivedita

    126: I don’t think that’s right though. A good strategy should vary the bets according to the cards that have already been drawn. Eg on the first draw, you don’t bet because you have no edge. On the second draw, you have a small edge because you’ve seen the first card, so you should bet some amount on the more likely color. And so on.

  129. 129 129 Mike H

    @Andy #81 I would wait for all the black cards to run out then raise my hand. If the reds run out first I wouldn’t raise my hand at all.

    You win if and only if the last card is red. So your chance of winning is 50%. I already proved that in my comment #72

    @Jeff #97 but the arguments of Mike H and Ken B are persuasive whew I was beginning to think nobody noticed!

    @nivedita #125 What is the optimal betting strategy, and what is the expected final wealth? I Like!

  130. 130 130 Mike H

    @nivedita #125:

    Let Y(r,n) be the expected factor by which you multiply your money, if there are r red cards out of n and you bet optimally.

    The right strategy is:

    * bet everything, if r/n * Y(r-1,n-1) is more than (n-r)/n * Y(r,n-1),
    * bet nothing otherwise.

    I used an Excel spreadsheet to compute Y(r,n). It appears that the optimal strategy can be simplified even further:

    * bet nothing, if there are any black cards left
    * bet everything if all the remaining cards are red.

    With a $10 initial stake on a 52-card pack, your expected final wealth is $50.41.
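
    (For anyone who prefers exact arithmetic to a spreadsheet, here is a short Python cross-check of the same recursion; a sketch, not the spreadsheet itself:)

        from fractions import Fraction
        from functools import lru_cache

        @lru_cache(maxsize=None)
        def Y(r, n):
            # expected multiplier with r reds among n cards, betting on red or not at all
            if r == 0:
                return Fraction(1)          # nothing left to bet on
            if r == n:
                return Fraction(2) ** n     # all red: double on every card
            bet_all = Fraction(2 * r, n) * Y(r - 1, n - 1)   # busted if a black shows up
            bet_none = Fraction(r, n) * Y(r - 1, n - 1) + Fraction(n - r, n) * Y(r, n - 1)
            return max(bet_all, bet_none)

        print(float(10 * Y(26, 52)))        # 50.41..., per the figure above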

  131. 131 131 Nathan Ashby

    I don’t have the ability to test all possibilities with a 52 card deck, but here is a strategy that appears to work for 6 cards. My strategy is to raise my hand when there are at least twice as many red cards left in the deck as black. If this situation does not occur, raise my hand on the last card.

    6 card simulation: BBBRRR, BBRBRR, BBRRBR, BBRRRB, BRBBRR, BRBRBR, BRBRRB, BRRBBR, BRRBRB, BRRRBB, RRRBBB, RRBRBB, RRBBRB, RRBBBR, RBRRBB, RBRBRB, RBRBBR, RBBRRB, RBBRBR, RBBBRR are the possible sequences.

    My results for these 20 instances are: LWWWWWWWLLLLLWLLWWWW. 12 wins, 8 losses.

  132. 132 132 Olivier Chassé St-Laurent

    @127: You’re right. I only considered one ‘winning’. A combination of this approach (using the remaining number of black cards for the number of possible losses) and that of Mike H’s optimal strategy should yield the optimal results. I’m not sure that this yields the maximum expected winnings, but there is zero risk.

    Obviously, it is optimal to bet everything once all the black cards have been dealt (you need to cover only 0 + 1 bets), as per 129. But, before that, the martingale system offers a sure (though smaller) gain.

  133. 133 133 Olivier Chassé St-Laurent

    @130: I think you’ve made a slight mistake: BRBBRR (5th) and RBBBRR (20th) are classified as ‘wins’, but are losses (both are played on the fourth draw). Thus the results are 10 wins and 10 losses.
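
    (A brute-force check agrees. A quick throwaway script over all 20 orderings:)

        from itertools import permutations

        def play(deck):
            # Nathan's rule: raise when reds left >= 2 * blacks left, else last card
            r, b = 3, 3
            for i, card in enumerate(deck):
                if r >= 2 * b or i == len(deck) - 1:
                    return card == "R"
                r, b = (r - 1, b) if card == "R" else (r, b - 1)

        results = [play(d) for d in sorted(set(permutations("RRRBBB")))]
        print(sum(results), "wins,", len(results) - sum(results), "losses")  # 10 and 10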

  134. 134 134 nivedita

    @Mike H: great!

    I think another interesting modification is closer to the original question. The original game essentially requires a fixed bet of $1 and only allows you to choose when to bet on red. As many commenters have shown, this has zero expected value, and indeed, the strategy is irrelevant.

    What if this is slightly modified, so you also get to choose the size of your bet? This clearly has positive expected value, and we can ask both for the strategy that maximizes the arithmetic expectation (relevant if you only play the game once), and the geometric expectation (relevant if you can play the game multiple times).

  135. 135 135 nivedita

    @Mike H: though my original modification was symmetric wrt red/black since you also get to choose the color you bet on. I think that might make the strategy a bit more complex for the geometric mean case: with the regular expectation, it’s still linear in the amount you bet, so the optimal strategy would be all or nothing with some threshold on how good the odds are.

  136. 136 136 Mike H

    @nivedita #135, ok, I solved the asymmetric problem where you have to bet on red, or not bet.

    For the symmetric problem, all strategies are equivalent in terms of expected gain; it doesn’t matter what you bet or when.

    Strategies differ only in the variance of their gain.

    Proof:

    Let Z(b,r) be the expected multiplier for your money, starting with b black and r red cards. The best strategy, when there are b black and r red:

    if b=r=0, the game is over, and Z(0,0) = 1.
    if b=0 or r=0, bet everything on the colour that remains, Z(0,n) = Z(n,0) = 2^n

    otherwise:
    * bet everything on ‘black’ if b*Z(b-1,r) is more than r*Z(b,r-1).
    * bet everything on ‘red’ if b*Z(b-1,r) is less than r*Z(b,r-1).
    * it doesn’t matter if b*Z(b-1,r) equals r*Z(b,r-1).
    then, Z(b,r) is the larger of 2*b*Z(b-1,r)/(b+r) and 2*r*Z(b,r-1)/(b+r).

    Theorem: Z(b,r) = 2^(b+r) * b! * r! / (b+r)!

    Proof: This is true when b=0 or r=0. Suppose the formula is correct for Z(b-1,r) and Z(b,r-1).

    Note first that b*Z(b-1,r) equals r*Z(b,r-1). Hence, it doesn’t matter what strategy I choose.

    My expected payout is 2*b*Z(b-1,r)/(b+r), which is 2*b/(b+r) * 2^(b+r-1) * (b-1)! * r! / (b+r-1)!, which is 2^(b+r) * b! * r! / (b+r)!. This completes the inductive step.

    Hence, as long as we always bet all our money on certainties, the expected outcome is independent of the strategy chosen to bet, and is given by the formula above.

    In particular, if I start with $10 and a full deck, my expected final money is $90.813.

    My intuition says that the strategy to minimise risk is to never bet until all the cards have a single color. However, I haven’t proved this.
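
    (The closed form is easy to confirm numerically; a small sketch, again with exact fractions:)

        from fractions import Fraction
        from functools import lru_cache
        from math import factorial

        @lru_cache(maxsize=None)
        def Z(b, r):
            # expected multiplier with b blacks and r reds left,
            # always going all-in on certainties (here: all-in on black when mixed,
            # which by the indifference noted above loses nothing in expectation)
            if b == 0 or r == 0:
                return Fraction(2) ** (b + r)
            return Fraction(2 * b, b + r) * Z(b - 1, r)

        closed_form = Fraction(2 ** 52 * factorial(26) ** 2, factorial(52))
        assert Z(26, 26) == closed_form
        print(float(10 * closed_form))      # 90.813...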

  137. 137 137 Nathan Ashby

    @133 Damn, you’re right.

  138. 138 138 Ken B

    @121 test your idea carefully against the small example in 131 after reading 133 and 137.

  139. 139 139 Henry Buckley

    I ran a few million-iteration simulations, to find out exactly how the probability advantage of playing only when the excess of observed black cards vs red cards reaches a certain number is balanced by the probability disadvantage of having to wait until the last card.

    The results are certainly consistent with the theoretical win probability being 50% regardless of strategy. They also show that as you increase the required excess of black over red before playing, the probability of winning *when you play before the last card* increases as you’d expect; but the number of times you actually play before the last card drops rapidly, and the advantage is always balanced by the small drop in the probability of winning if you have to wait until the last card.

    Strategy 1: only play before last card if n(black) - n(red) = 1

    Win %: 49.9969%
    % played before last card: 96.27%
    Win % when playing before last card: 51.93%
    Win % when waiting until last card: 0.0%

    Strategy 2: only play before last card if n(black) - n(red) = 5

    Win %: 50.0081%
    % played before last card: 38.80%
    Win % when playing before last card: 59.56%
    Win % when waiting until last card: 43.95%

    Strategy 3: only play before last card if n(black) - n(red) = 10

    Win %: 49.9723%
    % played before last card: 2.088%
    Win % when playing before last card: 69.23%
    Win % when waiting until last card: 49.56%

    Strategy 4: only play before last card if n(black) - n(red) = 15

    Win %: 50.1012%
    % played before last card: 0.0108%
    Win % when playing before last card: 75.00%
    Win % when waiting until last card: 50.10%
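
    For reproducibility, the core of each run was something along the lines of the sketch below (stripped down; the figures above came from longer runs):

        import random

        def trial(excess):
            deck = [1] * 26 + [0] * 26          # 1 = red, 0 = black
            random.shuffle(deck)
            count = 0                           # blacks seen minus reds seen
            for i in range(51):
                if count == excess:
                    return deck[i] == 1, True   # played before the last card
                count += 1 - deck[i]
            return deck[51] == 1, False         # had to wait for the last card

        for excess in (1, 5, 10, 15):
            runs = [trial(excess) for _ in range(200_000)]
            print(excess,
                  sum(w for w, _ in runs) / len(runs),     # overall win rate
                  sum(e for _, e in runs) / len(runs))     # share played early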

  140. 140 140 Kirk

    It takes so long to realize that any strategy you use to decide to pick the next card results in the same odds (at that time) as waiting to pick the last card (at that time), so you must have no advantage over the strategy of picking the last card.

  141. 141 141 Blake S

    AHEM. I have a strategy that will allow me to get 26 prizes. I’m not sure if a philosopher’s answer to a mathematician’s puzzle will satisfy any of you, but consider: What is the penalty for false negatives?

  142. 142 142 Blake S

    And the penalty for false positives?

  143. 143 143 Steve Landsburg

    Blake S: A false positive (you didn’t raise your hand, but the next card is red) carries no penalty. A false negative (you raised your hand, but the next card is black) ends the game.

    (In case this wasn’t clear—you get to raise your hand only *ONCE*.)
