The solution to Tuesday’s puzzle is under the fold:

So I have a well-shuffled deck of 52 cards. I turn them over one by one, starting at the top. You can raise your hand whenever you like. After you raise your hand, you win a prize if the next card I turn over is red.

Okay, you’ve just raised your hand. At this point I say to you: “Hey. Would you mind if I turn over the bottom card instead of the top one?”. It clearly makes no difference whether you say yes or no. If, for example, I have 17 cards left, 12 red and 5 black, then you’ve got a 12/17 chance of winning, regardless of whether I turn over the top card, the bottom card, the middle card, or any other randomly chosen card.

In other words, we might as well change the rules of the game so that I deal from the top until you raise your hand, and then I deal one card from the bottom. What’s your chance of winning that game? Regardless of your strategy, you win when the bottom card is red and lose when it’s black — so regardless of your strategy, you win with probability 50%.

In summary:

- If I deal the last card from the bottom, your strategy doesn’t matter.
- If I deal the last card from the top, your chances of winning — under any strategy — are exactly the same as if I deal the last card from the bottom.
- Therefore, when I deal the last card from the top (i.e. in the original game) your strategy doesn’t matter.

You can plan to raise your hand the first time you’ve seen more blacks than reds, or the first time you’ve seen more reds than blacks, or the first time you’ve seen 20 blacks, or any other damn thing. Your chance to win will still be 50%.
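For skeptics, the claim is easy to check by simulation. Here is a short Python sketch (the strategies and their names are mine, chosen purely for illustration) that plays the game many times under several rules; each should hover around 50%:

```python
import random

def play(deck, raise_now):
    """Deal from the top; the player raises a hand the first time
    raise_now(reds_seen, blacks_seen) is true, and wins if the next
    card is red.  If the rule never fires, the last card is the pick."""
    reds = blacks = 0
    for card in deck[:-1]:
        if raise_now(reds, blacks):
            return card == 'R'
        if card == 'R':
            reds += 1
        else:
            blacks += 1
    return deck[-1] == 'R'   # forced onto the last card

def win_rate(raise_now, trials=100_000, seed=0):
    rng = random.Random(seed)
    deck = ['R'] * 26 + ['B'] * 26
    wins = 0
    for _ in range(trials):
        rng.shuffle(deck)
        wins += play(deck, raise_now)
    return wins / trials

strategies = {
    'immediately':            lambda r, b: True,
    'more blacks than reds':  lambda r, b: b > r,
    'after 20 blacks':        lambda r, b: b >= 20,
    'never (take last card)': lambda r, b: False,
}
for name, rule in strategies.items():
    print(f'{name}: {win_rate(rule):.3f}')   # each hovers around 0.500
```

Any other rule you care to code up gives the same answer, which is the whole point of the puzzle.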

You can find this solution in the comments on the original post. Ramprasad got it in Comment #2(!), and others, including Ben Kennedy (#92 there), spelled it out in nice detail. My own solution was pretty much the same as Mike H.’s (comment #72 on the original post). But I like this one much better.

Hat tip to Richard Stanley of MIT.

This seems so surprisingly wrong to me that I have to wonder whether I’m just really confused right now. Anyway, here’s what I think:

Of course the card at the bottom of the deck is either black or red, but the probability you should assign to it being either one depends on the information you have about the deck it’s drawn from (i.e. the sampling distribution).

I would put it to you that the reason you assign a probability of 50% to the card being either colour is that you know it to be part of a deck in which 50% of the cards are red and 50% of the cards are black.

Assume I remove 3 black cards from that same deck. The remaining deck now has 26 red cards and 23 black cards. Do you still assign a probability of 50% to the card at the bottom of the deck being either colour? If so, why?

Now assume that I remove *all but one* card from the deck. The remaining deck has one card. We get to count the cards of each colour that we have removed from the original deck. If we have removed 26 black cards and 25 red cards from the deck, we know that the remaining deck has 1 red card and 0 black cards. Do you still assign a probability of 50% to this card being black? If so, why?

I think your first commenter to give the correct answer was Sriram Gopalan in comment #16.

Oh I see where I went wrong. I assumed you get to bet on the colour of the card, when in fact you only win if it’s red. My bad.

That makes sense; the hard part is seeing what’s wrong with this strategy: “Since the proportion of remaining red cards fluctuates up and down, just wait until there’s a point where there are more reds left than blacks, and if you raise your hand then, your odds have to be better than 50%.”

The fallacy is that about 4% of the time, you can deal through the whole deck without there ever being more red cards left than black cards (I determined the 4% empirically using a computer program since I don’t know a way to calculate it). This strategy is guaranteed to lose in those cases, and that exactly cancels out the advantage that you gain in all the cases where there is a point at which there are more red cards left.
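Bennett’s roughly-4% figure is easy to reproduce in a few lines. Here is a Monte Carlo sketch of my own (not Bennett’s program); the true value, as it turns out, is 1/27 ≈ 3.7%:

```python
import random

def reds_never_outnumber_blacks_remaining(deck):
    """True iff at no point during the deal are there more red cards
    left in the deck than black cards -- equivalently, the blacks dealt
    so far never outnumber the reds dealt so far."""
    reds_dealt = blacks_dealt = 0
    for card in deck:
        if card == 'R':
            reds_dealt += 1
        else:
            blacks_dealt += 1
        if blacks_dealt > reds_dealt:
            return False
    return True

def estimate(trials=200_000, seed=1):
    rng = random.Random(seed)
    deck = ['R'] * 26 + ['B'] * 26
    hits = sum(
        reds_never_outnumber_blacks_remaining(rng.sample(deck, 52))
        for _ in range(trials)
    )
    return hits / trials

print(estimate())   # close to 1/27, about 0.037
```

On such decks the “wait until more reds remain” rule never fires, and the last card is forced on you.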

#1 – your 4% is very close to Capt. J Parker’s 5%. It took me a while to see this.

I think I disagree.

In the 4% of runs Bennett mentions, “pick the last card” would also have failed. I’m confusing myself now, but have quickly tested it empirically and, unless my code is faulty (not that unlikely tbf), a “hand up if black > red + 1” strategy seems to be better than 50/50 – only just, but I think it proves the principle.

I will post my VB script if people won’t be upset with the spam? (67 lines)

I think this may be a situation requiring something like a Bayesian approach.

Assumptions: a standard 52-card pack is shuffled perfectly, giving a random distribution of the cards, and no cheating.

When the cards are first shuffled the card at any point in the pack is equally likely to be red or black – the probability of its being black (or red) is 0.5.

So, if the probability of the bottom card of the pack being red is 0.5 when you start, and you don’t do anything to the bottom card, how can it become more likely to be red rather than black (or vice versa)?

Consider this possibility. As cards are dealt off the top of the pile, at some point all 26 cards of the opposite colour to the bottom card will have been dealt, while the bottom card remains unseen. At this point the probability that the bottom card is red will be 0 if all the red cards have been dealt, or 1 if all the black cards have been dealt.

So the probability – the posterior probability, in Bayesian terms – of the bottom card being a particular colour clearly does change as more cards are dealt.

Unfortunately the decision to opt in to the bet presumably has to be taken at the time the cards are shuffled, before any are dealt; and it is the probability at this time which matters.

I’m going to push back on this one.

You’ve shown that the odds are unaffected by whether the dealer deals from the top or the bottom of the deck, but this does not establish that the odds are 50/50. (Your solution is very close to the Monty Hall fallacy, but not identical.)

Suppose there are 3 cards left. If more blacks than reds have appeared to date, the odds of winning are unaffected by whether the dealer takes the next card from the top or the bottom: they are 2/3 in either case, not 50%.

The reason the “wait until more black cards have been turned” strategy doesn’t work is that, although you can win more often than not when you raise your hand, there are just enough paths where you never raise your hand that your odds remain 50/50 a priori (they may get better, or worse, as the game progresses).

Bennett Haselton: Your “approximately 4%” is exactly 1/27.

As R.P. Long pointed out, we are not drawing balls from an urn. That would be the same as Steve shuffling after each card was dealt. What happens in this case?

Following the logic from Thomas Boyle in 3, it makes no difference – you still end up at 50% because 4-5% of the time (following a simple majority strategy) you end up losing. Is that the case?

Suppose I wait for the first 26 cards to be dealt and they are all black. If I raise my hand at that point I have a 100% chance of winning.

Thomas Boyle was #3 when I wrote, but now he is #7

Fopome #5 – you have to stick to the same strategy you decided on before the start. If your strategy is “pick the bottom card”, I think we can agree it is 50%. There is no need to single out those cases that would be significant if another strategy had been adopted.

#10. You have to adopt the strategy at the beginning. “Wait until the first 26 cards are black” is not a winning strategy, any more than “wait until by the 51st card there have been 26 blacks”. In the latter case you would succeed in the 50% of cases where your condition was met, but would fail in the 50% of cases where it was not.

Rob Rawlings:

“Suppose I wait for the first 26 cards to be dealt and they are all black. If I raise my hand at that point I have a 100% chance of winning.”

Yup. And if you choose the winning lottery number, you have a 100% chance of winning the MegaMillions. Do you want to conclude that you’ll win the jackpot 100% of the time?

My point was that although you start with a 50% chance of winning before the game starts, those odds will change as the cards start being dealt.

Rob,

The problem is that the odds of the first 26 cards being red are the same as the odds of the first 26 cards being black. But in one case after the 26 cards are shown you have a 100% chance of winning, and in the other case after the 26 cards are shown you have a 0% chance of winning.

The average of 0 and 100 is 50.

I also at first figured you might as well wait to see what happened. For some reason I too only focused on the good scenario of coming across more black than red cards. I have no idea why I did this, but you can’t forget about the red cards potentially hurting you just as much as black cards could potentially help you.

I think I’m there.

Let’s compare strategy 1 “Hand up only when more blacks are turned than reds”, with strategy 2 “Choose the last card before seeing any flipped over”.

If we follow strategy 1 and end up putting our hand up, by definition we have a greater than 50% chance of winning. However at this point I’ve also learned that strategy 2 has a greater than 50% chance of winning with this deck. The information I’ve learned doesn’t give me any more information about the next card than the last card in the deck… which is unlike Monty Hall, where I learn more about the doors not picked than the door I have picked.

As such, at that point it’s still just as good to choose the last card as the first card… I just know more about my chances of winning either way.

If – therefore – at any given point “pick the last card” and “pick the first card” are equal, you might as well do so up front when it’s 50-50, and therefore the odds of all strategies must also be 50-50.

The knowledge is of which of those 50s you’re in for a given run.

On reflection, I think I’ve just re-phrased the OP in a long-winded way.

Steve, you say “Regardless of your strategy, you win when the bottom card is red and lose when it’s black — so regardless of your strategy, you win with probability 50%.” But why doesn’t seeing the color of the first N cards change your assessment of how likely it is that the bottom card is red?

I think Bob Kennedy’s example is a good way of thinking about it. Let’s say instead of choosing between red and black, I win only on the Ace of Spades. Obviously, before any cards are flipped my chance of winning is 1/52. After one card is flipped, my chances are 1/51, and after two cards are flipped my chances are 1/50, etc.

So, given that my odds continually improve, I might think that my optimal strategy is to wait until 51 cards have been flipped, and then I am guaranteed to win. The problem is, of course, that the odds are 51/52 that I would have lost before I got to the last card.

After n cards are flipped, my odds of winning on the next card are 1/(52-n). However, my odds of getting to that point are (52-n)/52.

For example, let’s say that I decide in advance I will raise my hand after 20 cards. My chances of having lost before I get to the 21st card are 20/52. Thus, my chances of winning on the 21st card are 32/52 × 1/32 = 1/52.

Whether we’re looking for 1 card or 26, the logic doesn’t change, just the specific odds.
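Al V’s arithmetic generalizes: for any pre-committed waiting time, the survival factor and the hit factor cancel exactly. A two-line check with exact rational arithmetic (the function name is mine, for illustration):

```python
from fractions import Fraction

def ace_win_prob(n):
    """Win probability in the Ace-of-Spades version if you commit, in
    advance, to raising your hand after exactly n cards (0 <= n <= 51):
    the ace must survive the first n flips, then be the very next card."""
    survive = Fraction(52 - n, 52)   # ace not among the first n cards
    hit = Fraction(1, 52 - n)        # ace is the (n+1)-th card
    return survive * hit

# The (52 - n) factors cancel: every waiting time gives exactly 1/52.
assert all(ace_win_prob(n) == Fraction(1, 52) for n in range(52))
```

The same cancellation, with different numbers, is what pins every strategy in the red/black game at 50%.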

We need a post on the Sleeping Beauty problem

I don’t buy it. Seb Nickel has a good argument against the solution. I’m betting that an airtight way to analyze this is a modification of the table described here:

http://puzzles.nigelcoldwell.co.uk/fourteen.htm

@Al V thank you. That’s the bit I was missing. I had the sense that we were trying to compute two related probabilities – the probability of “being in a better-than-50% state” and the probability of winning when in that state. What I take you to say is that however large your winning probability may be it’s exactly balanced out by the increasingly small probability of getting to that state. And then summing the series you get .5

As stated in the problem description you only have to raise your hand when you want to. You could adopt a winning strategy by only betting when your odds are greater than 50%.

Is there an implicit rule (or one that I missed) that you lose if you don’t raise your hands for the whole game ?

@Keshav 17:

Keshav I confess I am shocked to see you of all the posters here ask that!

Of course it changes the probability. That is one of the conditional probabilities that you are summing over when you evaluate the strategy at the get-go. The point is: what, over all possible sequences, is the sum your strategy produces? Pick a rule. It does well on some sequences, as you note, but does *equally poorly* on other sequences.

Your strategy is fixed ‘before’ Steve deals. Agreed? It’s defined on the sample space. And the game is equivalent to guessing the bottom card. So you pick a strategy and then Steve shuffles. You really argue your choice of a mapping on the set of sequences affected the shuffle?

(I posted a more detailed explanation with an isomorphism on the other thread.)

It seems several comments are raising some version of the same misguided objection. In any particular game OF COURSE the odds that any remaining card is red or black depend on what has been turned over so far. Pointing this out is obvious and Landsburg’s own example (12 red, 5 black remaining -> 12/17 chance that ANY remaining card, including the last, is red) clearly demonstrates this.

The problem, however, asked for your strategy for playing the game, so you have to come up with some general rule that tells you when to raise your hand and the claim is that no strategy will beat 50/50 in the long run (obviously ANY strategy might win in a particular instance).

Unless you can show how a strategy beats 50/50 in the long run, there is no point in saying that after X number of cards we know the odds are no longer 50/50.

Let me follow up on my note in 22.

Whatever strategy you pick, it can be written as a rule; it can be coded as a program.

So let’s pretend Steve is running a little late. He tells you he’ll shuffle while you put your program in the computer. But because he’s running late, he doesn’t actually finish shuffling until you have finished loading your program.

Anyone want to argue that your program affected the chances the bottom card is red?

Rob,

Do you disagree with the following statement:

“After Y cards have been turned, the odds that there are X more black cards than red cards is equal to the odds that there are X more red cards than black cards.”

If you agree with this statement, then it should be intuitive to you why waiting for Steven to turn any cards over is not beneficial to you. That is, you might as well raise your hand before the first flip.

#21: “Is there an implicit rule (or one that I missed) that you lose if you don’t raise your hands for the whole game ?”

The problem said you win a prize if you raise your hand and a red card is turned over. If you never raise your hand it would seem to follow that you don’t win a prize -> you lose.

Yes, not raising your hand = conceding the game. Any strategy that involves scenarios where that is expected to happen must account for those ‘losses’ upfront.

Capt, you’ve made this a more fun discussion, but #139 on the previous post seems pretty airtight. Note, as I also commented there, that your statement “the average number of draws it takes to hit my +2 rule is 26” is incorrect — it’s path-dependent, i.e. the average number before you *first* hit +2 is less (where your odds are less advantageous).

Jack: yes, I agree with that statement. However I would only raise my hand if more black cards than red cards had been turned over. I could in fact increase my odds of winning to 100% by only raising my hand when the first 26 cards were all black. Admittedly this would be playing something of a long game, given how rarely this would occur.

I wonder what Monty Hall thinks of this….

#21 – Even if you are forced to “take” the last card as your choice, you still lose on those deals where there are never more blacks than reds. In these cases the last card must be black.

Any clarification on the drawing balls from a jar version – or the shuffle after each dealt card?

If I may presume to say so, I think there is a lesson here. If a probability problem is confusing, and many are, try to picture the sample space. All the errors here flow from not envisioning and considering the entire sample space.

The stated solution sounds right to me, but I think the opposing arguments could be trounced more thoroughly. Does the solution imply that for an infinite fair deck of cards the prior odds of the turnover pile ever containing more blacks than reds is 50/50?

Thomas Boyle:

“Is there an implicit rule (or one that I missed) that you lose if you don’t raise your hands for the whole game?”

Well, if you can win simply by never raising your hand and letting Steve deal the whole deck, it wouldn’t be a very interesting game.

#28: “I could in fact increase my odds of winning to 100% by only raising my hand when the first 26 cards were all black.”

But as Landsburg pointed out this is like saying you could increase your chances of winning the lottery to 100% by only playing when the winning numbers were the ones you picked.

Once again, the goal of this game is to come up with a strategy, or a general rule, that tells you when to raise your hand. If your rule is “wait for 25 black cards to be turned over to start the game”, you obviously have a 99.99999999…% chance of losing.

If your rule is wait until there are X more red cards than black cards remaining, then you certainly have a much greater chance of winning, but in any event it’s still not greater than 50%.

BTW, the Monty Hall references are all misplaced. Monty does not open a curtain at random, he always opens a goat, or whatever.

Brian: I think Steve needs to define “winning strategy”. If it means “The next card is red on a hand raise more than 50% of the time” then a winning strategy is possible.

If it means “Raise your hand and correctly call red in more than 50% of the games played” then there is indeed no winning strategy.

I think it is implicit that the second definition is the one Steve has in mind.

@32: If the deck is infinite and fair then so is that deck with any finite number of cards removed. But rather than an infinite deck, a coin toss is easier to visualize. So 50-50 always.

It is nitpicking, Brian, but Steve didn’t specify how many times you could play either, and whether or not the game ends when you don’t raise your hand.

Rob,

Then why would your hope of encountering more black cards at any given point override your fear of encountering more red cards?

Apologies to Thomas Boyle. I meant to address my comment #33 to Rob Rawlings.

Neil: The puzzle only yields the answer that Steve proposes if you assume that “winning strategy” means having a strategy that gives you a greater than 50% chance of correctly calling a red card on a single play of the game.

That wasn’t how I initially read it though.

(Don’t see how whether the game is interesting or not is relevant.)

The comment numbering got messed up. All numbers after 20 have been changed by the presence of a comment which I presume was held for moderation. So many comments now contain incorrect references.

What if we modify the game:

You pay $1 to play, get $2 if you win, $0 if you raise your hand incorrectly, and $1 if you never raise your hand. Then you get a winning strategy by waiting, right?

Otherwise Steve’s assumptions are all correct. I don’t know how it can be anything else.

I’d like to make an addendum to my game in 44: you can only play once. When is the best time to enter the game in this case?

And by *enter the game*, I mean raise your hand.

Daniel, one strategy in your version would be to simply wait until 51 cards are turned. If 26 black cards have been turned, raise your hand; if 25 black cards have been turned, do nothing. On average, you will net $0.50: $1 if the last card is red, $0 if the last card is black.

To do better without waiting until the last card, there would have to be a greater than 75% probability that the next card at any point is red, which is unlikely, but possible.

So the winning strategy is:

- raise your hand if more than 3/4 of the remaining cards are red.

- if it never happens, then raise your hand if the last card is red.

- otherwise, don’t raise your hand.

Note: this is a solution to Daniel’s alternate game, NOT to Steve’s original, where it makes no difference when you raise your hand.

Ken B’s advice to consider the sample space really clarified it for me. Since four cards (2B 2R) is ultimately the same game as a full deck, here’s the sample space and how often you’ll win with the “wait until black is ahead” strategy:

RRBB – Lose

RBRB – Lose

BRRB – Win

BRBR – Win

BBRR – Lose

RBBR – Win
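The four-card sample space is small enough to enumerate outright. Here is a sketch (mine, for illustration) of the “wait until black is ahead” strategy: raise the first time the dealt blacks outnumber the dealt reds, and fall back on the last card if that never happens — in which case the last card is necessarily black:

```python
from itertools import permutations

def strategy_wins(deck):
    """Raise a hand the first time the dealt blacks outnumber the dealt
    reds; win iff the next card is red.  If the rule never fires, the
    only option left is the last card."""
    reds = blacks = 0
    for card in deck:
        if blacks > reds:
            return card == 'R'
        if card == 'R':
            reds += 1
        else:
            blacks += 1
    return deck[-1] == 'R'

decks = sorted(set(permutations('RRBB')))
for d in decks:
    print(''.join(d), 'Win' if strategy_wins(d) else 'Lose')

# Exactly half the six arrangements win.
assert sum(strategy_wins(d) for d in decks) == 3 and len(decks) == 6
```

Three wins out of six equally likely arrangements: 50%, as advertised.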

Neil: “[Monty Hall] always opens a goat”

That was why the audience had to wear scrubs: so much blood.

Steve, here’s a link on a similar proposal, which might make a good blog post, and also ties into one of your earlier blogs about having too many books.

http://www.theguardian.com/books/booksblog/2008/jul/23/tofindyourperfectnovelsee

David (48): The interesting thing is that in the last 4 cases the discard pile actually will contain more black than red at some point, so it looks like the strategy works. It’s just that in the two cases where you never get a good pile you’re totally screwed, but in the 4 cases where you do get a good pile your chances are good but not great, so it cancels out.

#48 and #32 – Just worked through the same sample space with 4 cards for shuffling after each card is dealt (using the raise-hand-if-more-blacks strategy). It is of course 50%: 1 + 1 + (3 × 1/3) lose; 1 + (3 × 2/3) win.

It was not obvious to me that these are the same game, although I had concluded they probably were. Looking back at Steve’s explanation – he says “regardless of whether I turn over the top card, the bottom card, the middle card, or any other randomly chosen card.” This covers the shuffling option. Picking a random card is the same as picking the bottom card.

@Harold, I did the same thing. I tried with a 4 card deck, a 6 card deck, 8, then 10. In every case, the odds of winning worked out to 50%. So that led me to wonder, if there is a winning strategy with 52 cards, at what possible size of deck does a discontinuity occur?

Let’s hypothesize that there is a strategy that wins 55% of the time with a 52 card deck. We know that all strategies win 50% of the time with a 10 card or less deck. So presumably, there is a point when cards are added to the deck where the odds are above 50%, but that is totally illogical. Why would, say, the odds always be 50% for a 20 card deck, but 51% for a 22 card deck? That would make no sense, and that is what actually convinced me that the odds always have to be 50%, no matter the size of the deck.
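Those small-deck cases can be checked exhaustively rather than by sampling. A sketch (mine) of the “more blacks dealt than reds” rule, with the forced last card as fallback, over decks of 2 through 10 cards:

```python
from itertools import combinations

def wins(deck):
    """Raise the first time dealt blacks outnumber dealt reds; win iff
    the next card is red; fall back to the last card otherwise."""
    reds = blacks = 0
    for card in deck:
        if blacks > reds:
            return card == 'R'
        if card == 'R':
            reds += 1
        else:
            blacks += 1
    return deck[-1] == 'R'

def win_count(p):
    """Wins over all distinct decks of p red and p black cards."""
    total = wins_seen = 0
    for red_slots in combinations(range(2 * p), p):
        deck = ['R' if i in red_slots else 'B' for i in range(2 * p)]
        total += 1
        wins_seen += wins(deck)
    return wins_seen, total

for p in range(1, 6):
    w, t = win_count(p)
    print(f'{2 * p} cards: {w}/{t} wins')
    assert 2 * w == t   # exactly 50% at every deck size
```

No discontinuity ever appears, which is Al V’s induction in executable form.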

@Al V:

Yes. A good exercise. You are actually doing an induction proof there. One commenter on the earlier thread presented a more formal induction argument, and I think you would get something out of looking at it.

@AlV 47: No, your solution is not correct. You see, if 75% of the remaining cards are red, the probability that the last card is red is already greater than 50% (it is 75%), so your expected win is no longer 0.5 but 0.75. So, stopping there would not be the optimal strategy.

The optimal strategy for the game you consider is: always wait for the last card.
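One way to sanity-check this for Daniel’s payoff scheme: with a fraction f of the remaining cards red, raising now nets 2f − 1 in expectation, while holding out for the last card (whose colour you will know exactly by counting) nets f, and f ≥ 2f − 1 always. Here is a brute-force comparison on a 6-card deck; the “eager” threshold rule and all names are mine, purely for illustration:

```python
from fractions import Fraction
from itertools import combinations

def payoff(deck, raise_now):
    """Daniel's game, net of the $1 stake: +1 for a correct raise, -1
    for a wrong one, 0 if you never raise.  raise_now(reds_left,
    blacks_left) is consulted before each flip but the last; the last
    card's colour is known by counting, so you raise on it only if red."""
    reds_left = deck.count('R')
    blacks_left = len(deck) - reds_left
    for card in deck[:-1]:
        if raise_now(reds_left, blacks_left):
            return 1 if card == 'R' else -1
        if card == 'R':
            reds_left -= 1
        else:
            blacks_left -= 1
    return 1 if deck[-1] == 'R' else 0

def expected(raise_now, p=3):
    """Exact expected payoff over all decks of p red and p black cards."""
    results = [
        payoff(['R' if i in red else 'B' for i in range(2 * p)], raise_now)
        for red in combinations(range(2 * p), p)
    ]
    return Fraction(sum(results), len(results))

wait = expected(lambda r, b: False)     # always hold out for the last card
eager = expected(lambda r, b: r > b)    # raise whenever reds lead the remainder
print(wait, eager)
assert wait == Fraction(1, 2) and wait >= eager
```

Waiting is never worse, which is the sense in which “always wait for the last card” is optimal here.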

@Bennett#3, Landsburg#8:

I can confirm the 1/27 chance of getting to the end of the deck without having seen the remaining red count exceed the remaining black count. However, I had to do it via dynamic programming; one step up from Bennett’s simulations, since I get an exact answer, but still not very illuminating as to *why* that’s the right answer.

Steve, how’d you get 1/27?

@David Sloan 56

Let p be the number of pairs. I believe it is 1/(p+1). Proof by induction on p. Easy for p = 1.

Ken B: Yes, it is 1/(p+1). I can prove this, but my proof is a little clumsy. Rather than post it right away, I’ll wait and hope someone comes up with a better one.
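While we wait for an elegant proof, the 1/(p+1) conjecture is at least easy to verify exhaustively for small p (a sketch of mine; “p pairs” means p red and p black cards):

```python
from fractions import Fraction
from itertools import combinations

def reds_dealt_never_trail(deck):
    """True iff the reds dealt so far always match or exceed the blacks
    dealt -- i.e. the remaining reds never outnumber the remaining blacks."""
    lead = 0
    for card in deck:
        lead += 1 if card == 'R' else -1
        if lead < 0:
            return False
    return True

for p in range(1, 7):
    decks = [
        ['R' if i in red else 'B' for i in range(2 * p)]
        for red in combinations(range(2 * p), p)
    ]
    frac = Fraction(sum(reds_dealt_never_trail(d) for d in decks), len(decks))
    print(f'p = {p}: {frac}')
    assert frac == Fraction(1, p + 1)
```

With p = 26 that fraction is 1/27, matching the simulations above.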

Steve, this is known as Bertrand’s ballot problem (a small variant of the standard version, which doesn’t allow ties).

nivedita: Aha! I hadn’t heard of this problem before, but now thanks to you and the Internet I know all about it. Thanks!

I suspect David Sloan and I have the same idea. There’s a recurrence formula in r, b. His program brute-force counts them; my proof brute-force expands and recombines binomial coefficients. Not very elegant, but I think it works.

To expand on Neil’s point (since I was hung up on the problem until just a bit ago):

This is NOT the Monty Hall problem. Monty Hall KNOWS which door has the car behind it, and so when he opens a non-car (i.e. goat) door he is giving you information about the other two doors.

To make these two problems equivalent, Monty Hall would have no idea which door held the car, and 33% of the time you would lose automatically as he opens the door with the car. Thus, opening the door gives you no advantage for your final guess – assuming you didn’t automatically lose, your probability of picking the right door if you switch only moves up to 50% instead of 67%. That 17% of extra probability that you gain by switching in the original Monty Hall problem is lost because of the probability you don’t even make it that far (guess what 1/3 of 50% is ;).

Another way of looking at it is in the original Monty Hall problem, the probability that your first guess of the three doors is correct is 33%. When Monty Hall reveals one of the other two doors as a goat, the probability of your first guess is STILL 33%! It never changed! This is why switching is an advantage in the Monty Hall problem, because Monty Hall picks the door with the goat behind it 100% of the time, that door’s probability mass has to move to the other door. The probability of the original guess never changes, ever.

Likewise, the probability of every card in the deck is set when it is shuffled. To illustrate, before the first card is drawn, guess the color of the very last card. No matter what comes up before the last card, that guess always has a 50% chance of being correct, because for every sequence that increases the odds of it being black, there is an inverse sequence that increases the odds of it being red.

Until all the reds are pulled, you cannot say whether the last card is red or black, and for every situation where all the reds are pulled at a given point, there is also a situation where all the blacks are pulled at that point. You can’t predict what the last card is until it doesn’t matter, and your odds of reaching that point are the exact same as the odds of seeing its opposite.

That still seems muddy and confusing to me, but hopefully it will help things click for someone who didn’t get with the other explanations and proofs.

bigjeff5: “[..] before the first card is drawn, guess the color of the very last card. No matter what comes up before the last card, that guess always has a 50% chance of being correct…”

This is confusing. You always have a 50% chance at the start of the game, but there’s nothing stopping you from adjusting that probability as the game progresses. I think they’ve concluded above that 26/27 times you’ll be able to wait for a favorable pile and guess with slightly >50% certainty. But the other 1/27 times you lose for sure so it balances out in the end.

Aw crap, that can’t be right.

Off topic but of interest to this blog I expect. Turing pardoned http://www.cnn.com/2013/12/24/world/europe/alan-turing-royal-pardon/

It seems nobody is taking advantage of the known quantities. My strategy is to raise my hand after the 26th black or 25th red, whichever comes first.

This gives me either 100% certainty, or 1/x where x is the number of cards left after 25 reds. Worst case, this is a 50/50 chance of resulting in a 4% or 100% success rate, but odds of getting 25 reds in a row are slim.

This strategy is right out the window if you don’t know the number of cards, but since we do, we should use that.

Four proofs of the 1/27.

Some assembly required.

http://webspace.ship.edu/msrenault/ballotproblem/Four%20Proofs%20of%20the%20Ballot%20Theorem.pdf

Intuitive proof of 1/27.

It takes only one red card to interrupt the sequence of 26 blacks, so the other red cards are irrelevant. Throw away all but one red card and shuffle. What is the probability of the red card being the last card in the deal? Exactly 1/27.

Merry Christmas all.

Hmmm. Upon sober second thought, this intuitive proof, like many intuitive proofs, is very likely wrong.

So, I have no idea how to calculate where the 25th red card in a deck would be, but I expect it averages out to about 48. So, using my strategy from post 66, I’m figuring a 62.5% success rate: the average of 100% and 1 out of 4, the latter being the odds after card 48. Anything I’m missing?

Ken p:

“Anything I’m missing?”

Yes. The entire content of this post.

Steve says that it doesn’t matter when I raise my hand… my odds of winning will still be 50%. So, what happens if I wait to raise my hand until there’s only one card left? My odds of winning at that point are not 50%. They’re either 0% or 100%, because I will either have seen 26 reds and 25 blacks, or 25 reds and 26 blacks. Clearly, therefore, I do not always have a 50% chance to win. Steve’s solution is wrong.

That’s something he should have seen, since in the case he himself constructs during the discussion of his solution, I would have odds of slightly better than 70% that the next card he turned over would be red — whichever card it happens to be. He even mentions this himself. Assuming he doesn’t have foreknowledge of the cards and where they are, I could win with 12 of the 17 cards he might potentially turn over, and from that perspective, it doesn’t matter whether he pulls from the top, the bottom, or somewhere in the middle.

For any particular card, the statement “Either this card is red, or it is black” is true. But we’re interested in playing the odds. Knowledge of the cards that have left the deck, and by implication what cards remain, thus matters.

Demosthenes: If your strategy is to raise your hand when there’s one card left, you will win exactly 50% of the time. If your strategy is anything else whatsoever, you will also win 50% of the time.

Only if viewed before you start flipping over cards, and/or only if I have an infinite series of bites at the apple. We’re not talking about a series of coin flips here, though…we’re talking about one run of you removing cards from the deck. Since I can only win or lose with the cards remaining, my chances rise or fall based on the color of each card you remove from the deck. By the time you reach that last card, assuming I have been keeping track, I will know for certain whether I am about to win or lose.

My strategy of raising my hand at Point X might work differently given a different sequence of cards. But if I have let thirty-five cards go by and am now facing a deck I know to contain twelve reds and five blacks, I will (rightly) view my odds of winning at that point as much better than I would if I were facing a deck I knew to contain twelve blacks and five reds.

Demosthenes: If someone is thinking about buying a lottery ticket and asks you “What are the odds of winning?”, it is not helpful to reply “100% if you buy the winning ticket”.

The question posed here was: What strategy should you adopt? It is not helpful to say “Waiting for the last card is an excellent strategy if the last card happens to be red”. In order to address the question that was posed — that is, if you want to be helpful to the guy who’s considering a particular strategy — you’ve got to compute the probability that strategy will win given that the deck could be in any order at all — just as, to advise the potential lottery ticket buyer, you’ve got to compute the probability of winning given that we don’t yet know what the winning numbers will be.

The probabilities you’re computing — probabilities that can’t be computed until, say, 35 cards have been played — have no possible relevance to the question of what strategy I should adopt at the outset.