If you’re wondering what I’m up to, click on the picture.
Clearly in a town so rife with dinosaurs his car would be equipped with missiles…
I don’t get this part: “Either I’m at First Street, in which case I want to go straight with probability 1/2”. If you’re at First Street, you want to go straight with probability 1, not 1/2. The second half of the sentence gets it right: “or I’m at Second Street, in which case I want to go straight with probability zero.” What am I missing? I realize he doesn’t know which street he’s on, but that sentence isn’t about what he knows, it’s about where he is. I’m stuck here since the rest of the argument depends on this.
Unless I’m missing something, I believe there’s a typo in the paper on page 3.
“To maximize this, he chooses q = 1/4. Therefore, he would make it home with probability q(1-q) = 3/16 which is less than 1/2. Albert would have been better off with the original strategy C.”
Shouldn’t that second sentence read that Albert makes it home with probability 3/16, which is less than 1/4 (the original probability of making it home)?
Lawrence: In this early part of the paper, which follows Piccione/Rubinstein’s analysis, you’re committing yourself to a probability that you believe you can’t change in the future. So if you go straight with probability 1, you’ll continue to go straight with probability 1, and get eaten on the north side.
On the next page, I start to question the assumption that if you go straight now you’ll have to go straight in the future, in which case your objection is right on.
Lawrence: I’ve reworded slightly to (I hope) avoid confusing others. Thanks for catching this.
Joe Greene: This is fixed now. Thanks.
Can you talk us through Albert’s journey home in the Mark III strategy? Or perhaps one example of such a journey? I am a bit confused by the p’s and q’s. Where do we start? Albert is driving along and he cannot see any junction yet, perhaps.
If there is at least a 3/4 chance of being eaten by dinosaurs, I’m staying at the office.
When Albert comes to an intersection, he chooses a probability of going straight. To do this wisely, he has to think about both a) the probability he’s used in the past and b) the rule he plans to use in the future.
Suppose Albert starts out planning to use probability 1/2. Suppose his rule for the future is “I will square the probability I’ve got in mind and use that”.
Then when he gets to the first intersection, he’s got the number 1/2 in mind. He’s not sure whether he’s at the first intersection and started out thinking of 1/2 or whether he’s at the second intersection and is remembering that he used probability 1/2 at the first. He knows he’s sure to reach the first, but (given the 1/2 he’s got in mind) has only a 1/2 chance of having reached the second. So he figures he’s 2/3 likely to be at First and 1/3 likely to be at second.
Now he chooses a probability q of going forward, accounting for the fact that at the next intersection, he’ll choose a probability q^2 of going forward. If he’s at First, he’ll get home with probability q(1-q^2). If he’s already at Second, he’ll get home with probability 1-q. Overall, his probability is (2/3)q(1-q^2) + (1/3)(1-q). He maximizes this expression and, because he is good at calculus, sets q = 0.408.
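For anyone who’d rather let a computer do the calculus, here’s a short Python sketch of that calculation (the variable names are mine, not from the paper). It computes the 2/3–1/3 posterior from the remembered p = 1/2 and then grid-searches for the q that maximizes (2/3)q(1-q^2) + (1/3)(1-q):

```python
import math

# Posterior over intersections given the remembered probability p = 1/2:
# Albert reaches First for sure, but reaches Second only with probability p.
p = 0.5
prob_first = 1 / (1 + p)        # 2/3
prob_second = p / (1 + p)       # 1/3

# Expected probability of getting home if he goes straight with
# probability q now and (per the "squaring" rule) q**2 next time.
def home_prob(q):
    return prob_first * q * (1 - q**2) + prob_second * (1 - q)

# Maximize by grid search over [0, 1].
best_q = max((i / 100000 for i in range(100001)), key=home_prob)

# Setting the derivative (1/3) - 2*q**2 to zero gives q = 1/sqrt(6).
assert abs(best_q - 1 / math.sqrt(6)) < 1e-3
print(round(best_q, 3))   # 0.408 -- not the square of 1/2, so "always square" fails
```

The closed-form answer is q = 1/√6 ≈ 0.408, which matches the number in the comment above.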
Oops! Albert has deviated from his own “squaring” rule, because .408 is *not* the square of the 1/2 he started out with. Therefore “always square” is NOT a Mark III solution.
A Mark III solution is a rule with the property that if you expect to use it in the future, you’ll want to use it now.
There’s a theorem in the paper that says the *only* Mark III solution is “If you’re thinking of a number other than 1, change it to 1. If you’re thinking of 1, change it to 0”.
Say Albert expects to use this rule in the future. He starts out, let’s say, with p=1/2. (The question of where he gets the original p is separate from the Mark III solution.) He therefore figures he’s at First Street with probability 2/3 and Second with probability 1/3, as before. Then he can reason like this: If I’m at First and go forward with some probability q<1, then at Second I’ll be remembering a number other than 1, so I’ll change it to 1, go forward, and get eaten. If I’m at Second and go forward with probability q<1, I’ll get home with probability 1-q. Overall, I get home with probability (1/3)(1-q). But if I go forward with probability q=1, then if I’m at First, I’ll turn that 1 into a zero at Second and get home for sure, whereas if I’m at Second, I’m doomed. Overall, that’s a 2/3 chance of getting home, which beats any number of the form (1/3)(1-q). So I’ll go straight with probability 1.
In other words, Albert, believing he’s going to use the Mark III solution in the future, wants to use it in the present. We just checked this is true if he starts with p=1/2, and could easily check that it’s true for *any* p. That’s what makes this a Mark III solution.
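The “could easily check that it’s true for *any* p” step can be mechanized. Here’s a sketch (my own function names, Part 1 payoffs assumed to be home = 1 and eaten either way = 0, restricted to p < 1 to match the reasoning above):

```python
# For a remembered p < 1, the rule says: go straight now (q = 1),
# then turn (probability 0 of going straight) at the next intersection.
def follow_value(p):
    # At First w.p. 1/(1+p): straight, then turn -> home for sure.
    # At Second w.p. p/(1+p): straight -> eaten on the North Side.
    return 1 / (1 + p)

def deviate_value(p, q):
    # Go straight now with some q < 1, future selves still follow the rule.
    # At First: turning means East Side dinos; going straight means the next
    # self, remembering q != 1, resets to 1 and drives into the North Side.
    # Either way, eaten. At Second: home only by turning, probability 1 - q.
    return (p / (1 + p)) * (1 - q)

for p in [i / 100 for i in range(100)]:          # p in [0, 0.99]
    assert all(follow_value(p) >= deviate_value(p, q)
               for q in [j / 100 for j in range(100)])
print("following the rule is optimal for every p checked")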
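The “could easily check that it’s true for *any* p” step can be mechanized. Here’s a sketch (my own function names, Part 1 payoffs assumed to be home = 1 and eaten either way = 0, restricted to p < 1 to match the reasoning above):

```python
# For a remembered p < 1, the rule says: go straight now (q = 1),
# then turn (probability 0 of going straight) at the next intersection.
def follow_value(p):
    # At First w.p. 1/(1+p): straight, then turn -> home for sure.
    # At Second w.p. p/(1+p): straight -> eaten on the North Side.
    return 1 / (1 + p)

def deviate_value(p, q):
    # Go straight now with some q < 1, future selves still follow the rule.
    # At First: turning means East Side dinos; going straight means the next
    # self, remembering q != 1, resets to 1 and drives into the North Side.
    # Either way, eaten. At Second: home only by turning, probability 1 - q.
    return (p / (1 + p)) * (1 - q)

for p in [i / 100 for i in range(100)]:          # p in [0, 0.99]
    assert all(follow_value(p) >= deviate_value(p, q)
               for q in [j / 100 for j in range(100)])
print("following the rule is optimal for every p checked")
```

At p = 1/2 this reproduces the 2/3-beats-(1/3)(1-q) comparison in the previous paragraph.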
Sorry for all the words! I’m sure if I had more time I could make this more succinct.
Could you tell me which of these statements concerning the Mark III strategy in part 1 is incorrect?
a) Albert gets home 100% of the time.
b) Albert knows he gets home 100% of the time.
c) When Mark III calls for Albert to turn right, he knows he’s at the second intersection.
d) Albert never knows which intersection he’s at.
Another thing I don’t understand. In the first game you come up with a way for Albert to get home with probability 1. But in part 2 by changing the payoff structure, e.g. by replacing one dinosaur with a bobcat, you conclude that sometimes no strategy gets Albert home with probability 1. How is this possible?
Martin-2: The answers to your questions a), b), c) and d) all depend on the initial value of p. The answer to your “How is this possible?” question is that I don’t understand the question. That’s how things happen to work. Why shouldn’t they?
Steve (11): You say on page 8 “if p != 1 then … Albert gets home with certainty”. If Albert is smart he will never set p = 1. If Albert knows he’s smart he will know he didn’t set p = 1. Therefore he anticipates that q = f(~1) = 1 and r = f(1) = 0.
What seems paradoxical to me about part 2 is that for any situation in which Albert wants to go straight and then turn, he can always just pretend that both wrong moves lead to part 1-type outcomes, use a Mark III strategy with p != 1, and always get home just like in part 1. Is this not the case?
Martin-2: “What seems paradoxical to me about part 2 is that for any situation in which Albert wants to go straight and then turn, he can always just pretend that both wrong moves lead to part 1-type outcomes, use a Mark III strategy with p != 1, and always get home just like in part 1. Is this not the case?”
Suppose the payoff is 1 for ending up at the North Side, 4 for ending up at home, and 0 for ending up with the East Side Dinos.
Suppose Albert tries to use the Mark III strategy from Part I, starting with p=0.
Now he reaches the first intersection. Because p=0, he believes he’s always turned in the past, so he figures he’s at First Street and definitely wants to go straight, so he picks q=1, confident that when he gets to Second Street he’ll pick r=0 and turn for sure. That gets him home with certainty, which sounds great.
BUT what happens when he gets to Second Street? Now all he remembers is that q=1, so he’s definitely gone straight in the past. This means both intersections are equally likely. If he’s at First (which, for all he knows, he might be), and sticks to the plan r=0, he’s doomed. If he’s at Second, and sticks to the plan r=0, he gets home. Expected gain: (1/2) 0 + (1/2) 4 = 2.
But suppose instead that he deviates from the plan and sets r=1 (so that s=0, where s is the probability of going straight the *next* time he reaches an intersection). That means that if he’s at First Street he’s sure to get home and if he’s at Second he’s sure to reach the North Side. Expected gain: (1/2) 4 + (1/2) 1 = 5/2 .
So Albert knows that if he’s at First Street and sticks to the plan, he’ll deviate when he gets to Second, and that makes the plan look not so good anymore. Of course he only deviates at Second because he thinks he might be at First, which you and I will know is not the case, but Albert is absent-minded after all.
Bottom line: In your story, Albert sets p=0, figuring he’ll set q=1 and therefore r=0 and get home. But in fact, once he’s set q=1, he will want to deviate from the plan and set r=1, not 0. Foreseeing this, he won’t set q=1, and foreseeing that, he won’t set p=0.
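Here’s the Second Street comparison from that story as a few lines of Python (payoff names are mine), just to make the 2 versus 5/2 arithmetic explicit:

```python
# Payoffs from the example: North Side = 1, home = 4, East Side = 0.
NORTH, HOME, EAST = 1, 4, 0

# At Second Street with the memory q = 1, Albert has surely gone straight
# in the past, so both intersections look equally likely.
p_first = p_second = 0.5

# Stick to the plan r = 0 (turn now):
stick = p_first * EAST + p_second * HOME      # doomed at First, home at Second
# Deviate to r = 1 (go straight now, so the next self turns with s = 0):
deviate = p_first * HOME + p_second * NORTH   # home via First, North Side via Second

print(stick, deviate)   # 2.0 2.5
assert deviate > stick  # so he abandons the plan at Second Street
```

Since 5/2 beats 2, Albert deviates, which is exactly why the “p=0, q=1, r=0” plan unravels.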
I don’t get why we should care about Albert’s logic when he reasons a “bit further,” because his reasoning is obviously wrong.
The actual problem is thus: “you are at an intersection, what do you do?” You have only one piece of information (that you are at an intersection) and so only one decision: “what do I do at an intersection.”
But, when Albert reasons a “bit further,” he asks himself “what would I do IF I knew I were at First Street and what would I do IF I knew I were at Second Street.” But so what? He doesn’t know which street he is at, so who cares what he would do if he knew where he was?
Albert only knows he is at an intersection. The only relevant question is “what do you do at an intersection?”
To elaborate a little more:
AHP seem to be just enforcing my argument immediately above by imposing the constraints in (4.1) through (4.3); i.e., they are enforcing the assumption that Albert cannot tell which intersection he is at. r, p, and q can only be different and/or useful to Albert if Albert has some information about the history of the system or where he is. But by assumption he has none. (It might be the case that r, p, and q are different but unknowable to Albert, or it might be the case that they are all the same; either way, it can’t help Albert.)
The Section 5 stuff appears to be clever ways to confuse the reader into allowing the possibility of information about the system being communicated to Albert. But, by assumption, that cannot be the case.
But, when Albert reasons a “bit further,” he asks himself “what would I do IF I knew I were at First Street and what would I do IF I knew I were at Second Street.”
No. He never (in the course of solving his maximization problem) asks these questions, precisely because he knows he will never know these things.
Zazooba: “Albert only knows he is at an intersection”
If I have this right, Steve (and previously Aumann-Hart-Perry) allows Albert to have a little more information. Albert can store a go-straight probability in his head, although he will not remember why he chose it.
If Albert is smart at all, he would just sleep in his office and order in pizza.
This is exactly correct. All previous authors (including Piccione/Rubinstein and Aumann/Hart/Perry) have allowed this, and so have I.
“If I have this right, Steve (and previously Aumann-Hart-Perry) allows Albert to have a little more information. Albert can store a go-straight probability in his head, although he will not remember why he chose it.”
To which Steve replied:
“This is exactly correct. All previous authors (including Piccione/Rubinstein and Aumann/Hart/Perry) have allowed this, and so have I.”
This is critical. I did not understand at all that Albert had some means of transmitting information across time. Indeed, the beginning of the paper pretty clearly rules this out and so needs to be edited. The paper says: “Albert can NEVER remember whether he’s already passed the first intersection. … Because Albert CAN’T TELL THE INTERSECTIONS APART.”
Language should be added to explain what is meant by this. Perhaps he can’t tell the intersections apart because they are physically identical and he has a special memory impairment that makes it impossible for him to remember where he has been. Then you have to lay out what restrictions there are on Albert’s ability to overcome his memory deficit. Given Albert’s memory problems, the natural response is that he should just put a Post-It note on his dashboard and that he should put a check mark on it when he comes to an intersection. Then he will know where he is by looking at the Post-It and will get home safely 100% of the time.
You need to do something to tell the reader why the Post-It is not permissible while game-theoretic concepts like the memory of his previous strategy are permissible. Isn’t knowledge of a previous strategy just acting as a mental Post-It for Albert in your analysis?
Isn’t knowledge of a previous strategy just acting as a mental Post-It for Albert in your analysis?
Not quite. It certainly shares some of the characteristics of a Post-It note, but it’s different for at least two reasons:
a) The *purpose* of a Post-It note is to let Albert coordinate his decisions. The purpose of his memory of a number is to let Albert calculate his optimal current behavior. This has the incidental effect of letting him coordinate his decisions.
b) Albert’s memory can’t be quite the same as a Post-It note because a Post-It note would allow him to get home with certainty every time. But (for general payoffs) Albert’s memory fails to accomplish this. With the original Piccione/Rubinstein payoffs, getting home is worth 4, getting eaten on the North Side is worth 1, and getting eaten on the East Side is worth 0. With a Post-It note Albert gets a guaranteed payoff of 4. With a Mark III strategy, he gets, at best, an expected payoff of 3.
Steve’s earlier posts on this puzzle inspired this game