Brain Teaser

Here’s a brain teaser I wish I’d invented in time to include in The Big Questions:

John and Mary live in an isolated village where they have no access to reference materials, no contact with the outside world, and nobody to talk to except each other. One day an anthropologist arrives in this village, sits down for coffee with John and Mary, and quizzes them about their knowledge of the world. John says he’s sure that men have walked on the moon; Mary says she’s sure they haven’t. Never having discussed this issue before, each of them is astonished and flabbergasted by the other’s apparent ignorance. Rather than risk losing all respect for each other, John and Mary agree never to speak of the subject again. But the anthropologist mentions that she’ll be stopping by once a day from now on, and will be glad to know if either of them ever has a change of mind on this topic. If so, the anthropologist will inform the other. Otherwise, the anthropologist will never bring it up either.

The next day (a Monday) nobody’s mind has changed, and therefore the subject is not discussed. The same thing happens on Tuesday, Wednesday and Thursday. Can this go on forever?

The surprising answer, under quite general assumptions about the way people learn, is that eventually, if John and Mary care about the truth (as opposed to, say, winning a debating point), then somebody’s mind must change.

But how can this be? If nobody’s mind changed on Tuesday, Wednesday or Thursday, and if no new information or argument ever gets brought into the picture, how can Friday be different?

Here’s how: On Monday, all John knows is that when it comes to moon landings, Mary disbelieves. By Tuesday, he knows that her disbelief is strong enough to stand up in the face of his belief—which is something he didn’t know on Monday. By Wednesday, he knows that her disbelief is strong enough to stand up in the face of his continued belief in the face of her continued disbelief—which is something he didn’t know on Tuesday. And so forth.

Each time a day goes by with no minds changed, John and Mary learn something new about the strength of each other’s beliefs—and therefore find it harder to maintain their own beliefs, because, after all, there’s always a chance the other guy is right. And the surer the other guy is, the better that chance—at least on the assumption that the other guy is at least trying to arrive at the truth.
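
For readers who like to see the gears turn, here is a toy sketch of this process in Python (my own illustration, in the spirit of the Geanakoplos-Polemarchakis “we can’t disagree forever” dynamics, not Aumann’s own argument). Two agents share a prior over four possible worlds, each has private information, and each day both announce their probability that the event is true; everyone then discards any world in which the speaker would have announced a different number. The particular numbers are invented purely to make the example take more than one round:

    from fractions import Fraction

    # Four possible worlds with a common prior; the event of interest
    # ("men have walked on the moon") is true in worlds 1 and 4.
    worlds = {1, 2, 3, 4}
    prior = {w: Fraction(1, 4) for w in worlds}
    event = {1, 4}

    # Private information: each agent can only distinguish the cells
    # of his or her own partition.
    john = [{1, 2}, {3, 4}]
    mary = [{1, 2, 3}, {4}]
    true_world = 1

    def posterior(info):
        return (sum(prior[w] for w in info if w in event)
                / sum(prior[w] for w in info))

    def my_info(partition, public):
        # my partition cell around the true world, refined by whatever
        # the announcements so far have made public
        return next(c & public for c in partition if true_world in c)

    def hear(partition, public, announced):
        # everyone discards worlds in which the speaker would have
        # announced a different number
        keep = set()
        for c in partition:
            info = c & public
            if info and posterior(info) == announced:
                keep |= info
        return keep

    public = set(worlds)
    for day in ["Monday", "Tuesday", "Wednesday", "Thursday"]:
        pj = posterior(my_info(john, public))
        public = hear(john, public, pj)
        pm = posterior(my_info(mary, public))
        public = hear(mary, public, pm)
        print(day, "John:", pj, "Mary:", pm)
        if pj == pm:
            print("No more disagreement.")
            break

On Monday they announce 1/2 and 1/3 and disagree. On Tuesday John repeats the same 1/2, and that very repetition tells Mary enough to move her to 1/2 as well; nothing outside their own announcements ever enters the picture.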

The fact that John or Mary must eventually break down in the face of this barrage takes some work to prove; the first round of that work was done by the Nobel Prize-winning economist Robert Aumann, who took the first step toward showing that it’s essentially impossible for honest truthseekers to “agree to disagree”. Other economists, especially the always innovative Robin Hanson, then took up the baton and pushed these results much further. You can read all about it in Chapter 8 of The Big Questions.

Of course, in the real world, John and Mary never change their minds; they just go off to sulk, become increasingly embittered and eventually stop talking to each other altogether. The disturbing implication of Aumann’s theorem is that therefore John and Mary cannot both be honest truthseekers. Nor can almost anyone else who’s ever been party to an ongoing disagreement. Which, most disturbingly of all, would seem to include me.

It occurs to me that this brain teaser has more than a little in common with a brain teaser that recently got a lot of play over on the blog of the Fields Medal-winning mathematician Terence Tao:

There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colors. Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces). If a tribesperson does discover his or her own eye color, then their religion compels them to commit ritual suicide at noon the following day in the village square for all to witness. All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth).

Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of them have brown eyes, although the islanders are not initially aware of these statistics (each of them can of course only see 999 of the 1000 tribespeople).

One day, a blue-eyed foreigner visits the island and wins the complete trust of the tribe.

One evening, he addresses the entire tribe to thank them for their hospitality.

However, not knowing the customs, the foreigner makes the mistake of mentioning eye color in his address, remarking “how unusual it is to see another blue-eyed person like myself in this region of the world”.

What effect, if anything, does this faux pas have on the tribe?

Once again the key to the brain teaser is that silence conveys information, silence in the face of silence conveys even more information, and silence in the face of silence in the face of silence conveys even more. One difference is that Tao’s brain teaser is self-contained; you ought to be able to figure it out without reference to journal articles. The agree-to-disagree problem lies deeper, but is also, I suspect, more profound.


37 Responses to “Brain Teaser”


  1. Kareem Carr

    Hi Steven,

    Thank you for letting me know about your article. Perhaps your readers will find my explanation of Terry’s brain teaser of interest:

    http://twofoldgaze.wordpress.com/2009/11/07/in-the-long-run-we-are-all-dead/

    Intriguing point about John and Mary; intuitively it makes sense to me that their procedure is a way of calculating the strength of beliefs and, in particular, a way of estimating whose belief is the lesser.

  2. Peter

    I’m not sure how this would work. If both Mary and John are “certain”, then from a Bayesian perspective, their priors would prevent learning at all. Excluding this case, imagine that both Mary and John can do a meta-analysis: each knows that the other has no outside information, and hence no reason to update their prior belief, and hence neither has any reason to change their mind.

    I guess I’ll need to read the book to see how it all works. But I’m absolutely certain that you’re wrong, and no amount of evidence will convince me otherwise ;)

  3. SteveJ

    Surely what happens to John and Mary in practice is that they do both immediately change their minds – neither is “sure” any more. You discussed what “sure” might mean a few days ago, indicating that nobody is absolutely “sure” about anything.

    If either of them could actually *reverse* their position without new direct evidence, simply based on the knowledge that the other has a firm belief, then would we need evidence about anything in the first place? Couldn’t we all be “honest truthseekers” simply by sitting in a room together in silence until we’re agreed? All very Platonic, but I’m not sure I subscribe to that definition of “truth”. I am cynical enough to accept that it is one of the major means by which we learn.

    But then, I haven’t read the results you cite. I’m curious to know which of the two of them changes their mind – their positions seem to be perfectly symmetrical, so I can only assume that each changes their mind first with probability 0.5. Again, not a very satisfactory outcome for honest seekers of “truth”.

  4. Steve Landsburg

    Peter: Why should Mary trust her own prior any more than she trusts John’s?

    I realize this steps outside the usual Bayesian model, but Robin Hanson has argued (quite convincingly) that we should allow for Mary and John to worry about such things.

    (In Aumann’s story, everyone starts from a common prior and comes to their current beliefs through having seen different evidence. In Hanson’s, there’s no need for them to start from a common prior.)

  5. SteveJ

    Particularly problematic to John and Mary, I think, is that they haven’t seen each other’s evidence, and never will. So this result of Aumann obviously doesn’t apply: http://www.jstor.org/pss/2958591, but then it is only the first result in the general direction of a solution to the problem.

    I don’t understand Robin Hanson, but that’s OK. I just have to wait a few more days until either he or I commits suicide in the village square ;-)

  6. Peter

    As a fanatically religious Bayesian, I guess I’m not familiar enough with being outside the Bayesian model. The point of a prior is that it is owned by each individual. Learning that someone disagrees is new information, and should be used to update the prior. Are you aware of any numerical timetables for when someone would change their mind given reasonably strong priors for both John and Mary? If the timetable for changing one’s mind is on the order of millions of years, then you might be able to consider yourself a reasonable truthseeker again.

  7. Steve Landsburg

    Peter: The computer scientist Scott Aaronson has addressed the question of timetables and shown that with the right communications protocol they can reach agreement in a reasonable time.

    In Hanson’s model agents have probabilistic beliefs about the origins of their priors, and must believe that they would have had different priors in other circumstances. Ultimately this forces them to have the same priors, and Aumann’s theorem (which assumes a common prior) applies.

  8. Steve Landsburg

    SteveJ: In Aumann’s paper, John and Mary do not have to have seen each other’s evidence. They only need to know that they disagree.

  9. Rowan

    This may be due to a lack of oxygen (*hack* *wheeze*), but I don’t see the point of the village story. In Kareem’s post, where there is a small number of villagers, the information that one of them has blue eyes is indeed new. But with a thousand villagers…they all already knew that there were (likely) two colours of eyes, and that there was a 10% chance that their own eyes are blue. Having someone come along and say, “One of you has blue eyes!”…I don’t see how that’s new information to them, or how it would change their knowledge of their own eye colour. Now, if the traveller had said, “It’s so nice to meet a green-eyed person in this part of the world!”, that might have shaken things up a little.

  10. Steve Landsburg

    Rowan: Suppose there are four blue-eyed people on the island: Moe, Larry, Shemp and Curly.

    Before the visitor arrives, Moe knows several things, among which are these:

    • There are at least three blue eyed people on the island (namely Larry, Shemp and Curly)
    • Larry knows there are at least two blue eyed people on the island (namely Shemp and Curly)
    • Larry knows that Shemp knows that there is at least one blue eyed person on the island (namely Curly)

    But what Moe does not know is that

    • Larry knows that Shemp knows that Curly knows that there is at least one blue-eyed person on the island.

    (After all, from Moe’s point of view, Shemp knows that Curly knows that there is at least one blue-eyed person on the island because Shemp knows Curly can see Larry. But how would Larry know this?)

    The visitor, by stating right in front of everyone that there is a blue-eyed person on the island, gives Moe this additional crucial piece of information, which changes everything.

    I could have done this with 100 blue-eyed people but it would have taken a lot more space.
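
    If it helps to see that bookkeeping done mechanically, here is a short Python sketch of the same chain (my own illustration, with the blues hard-coded as the four stooges):

        blues = ["Moe", "Larry", "Shemp", "Curly"]  # the four blue-eyed islanders

        bound = len(blues) - 1  # Moe sees three blue-eyed people
        print(f"Moe knows there are at least {bound} blue-eyed people")
        chain = "Moe knows"
        for nxt in blues[1:]:
            chain += f" {nxt} knows"
            bound -= 1  # each hop must allow one more hypothetically brown self
            print(f"{chain} there are at least {bound} blue-eyed people")
        # The final line bottoms out at 0: before the visitor speaks,
        # nothing guarantees "Curly knows there is at least one."

    Each added “knows” costs one guaranteed blue, which is exactly why the visitor’s public statement matters once the chain is as long as the number of blue-eyed islanders.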

  11. MattF

    What about propositions that are undecidable? Suppose, for example, that John and Mary disagree about whether the Universe is infinitely large. Well, either the Universe is infinitely large, or it isn’t– so the proposition is either true or false. However, if you believe relativity, there’s no hope for evidence that can resolve it one way or the other. What do John and/or Mary end up believing?

  12. Mark

    Start with the case of one blue eyed person and everyone else brown eyed. (The number of brown eyed people does not really matter.) After the visitor makes his speech the blue eyed person would have to know that he has blue eyes and must commit suicide.

    Now take the case of two blues and everyone else brown. Blue number one can say to himself, well if I am brown then the other blue eyed person now sees things as in the first case and must kill himself. Therefore I must have blue eyes.

    Now take the case of three blues and everyone else brown. Again one of the blues can say to himself, well if I am brown then the other blues now fit the previous scenario, and on and on…

  13. Steve Landsburg

    Matt F: John and Mary converge on a single belief along the lines of “There is a 60% probability that the universe is infinitely large.”

  14. Stephen

    John Allen Paulos presents the suicidal-tribe puzzle in two of his books, “Once Upon a Number” and “A Mathematician Plays the Stock Market.” But he formulates it differently.

    In his version, which he called “A Parable of Furious Feminists,” there are 50 married couples in a benighted village, and each of the women knows when another woman’s husband has been unfaithful but not when her own has. According to the village’s feminist statutes, when a woman can prove that her husband has been unfaithful, she must kill him that same day.

    As it happens, all 50 of the husbands have been unfaithful. But since none of the women can prove it (yes, they are perfectly logical and they know that all the other women are also perfectly logical, etc.), everything moves along merrily. Until one day a matriarch arrives and informs the village that one of the husbands has been unfaithful.

    You can see how this leads to a bloody massacre a few months later…Paulos wrote this parable after the stock market decline of 1997, in order to explain how common knowledge can cause a group of people to suddenly do the same thing at the same time.

  15. Ron

    I think the solution to the eye problem can be more clearly
    stated.

    To skip to the end, the result is ritual suicide of all blue eyes
    on the 100th day after the announcement.

    Here’s why. Let’s do this as a first-person account.

    Suppose you’re the only person on the island with blue eyes.
    You see that everyone else has non-blue eyes, but you have no
    way to know your color. That’s true until the big mouth
    gives away the fact that someone has blue eyes. You know
    it’s not anyone else, so you suicide at the next opportunity.

    Suppose you’re one of the only two people on the island with blue
    eyes. You see only one person with blue eyes, but you don’t know
    your color. Same big-mouth scenario, but this time there’s no
    suicide on the first day. Per the previous paragraph, you know
    that if the blue-eyed one you see were the only blue-eyes, then
    there would have been a suicide. But you see only the one
    blue-eyes, therefore, you must be the second blue-eyes. The
    other blue-eyes uses the same logic. Result: two ritual
    suicides on day 2.

    Three blue-eyes means no suicide on day 2, but ritual suicide on
    day three by anyone who has observed only two blue-eyed people.

    Four blue-eyes means no suicide on days 1-3, but ritual suicide on
    day four by anyone who has observed only three blue-eyed people.

    General rule: if you see N blue-eyed people, but no N-person
    ritual suicide occurs on day N, then that means you’re blue-eyed,
    too. Mass ritual suicide of all of you on day N + 1.
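
    To make the counting concrete, here’s a quick Python simulation of
    the rule above (a sketch of mine, assuming perfect logicians and
    one synchronized noon per day):

        def first_suicide_day(num_blue, num_brown):
            # Each islander sees every eye color but their own.  An islander
            # who sees N blues reasons: "if no N-person suicide happened on
            # day N, the true count is N+1 and I am the extra blue."
            colors = ["blue"] * num_blue + ["brown"] * num_brown
            day = 0
            while True:
                day += 1
                doomed = [i for i, c in enumerate(colors)
                          if day == sum(1 for j, c2 in enumerate(colors)
                                        if j != i and c2 == "blue") + 1]
                if doomed:
                    # The first suicides change everyone's information,
                    # so the simple counting rule stops here.
                    return day, len(doomed)

        print(first_suicide_day(1, 999))    # (1, 1): the lone blue, day 1
        print(first_suicide_day(2, 998))    # (2, 2): both blues, day 2
        print(first_suicide_day(100, 900))  # (100, 100): all blues, day 100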

  16. Kareem Carr

    I am thinking of putting up a chain of reasoning that goes from top down but I am still thinking about whether it has any holes or not:

    If you can see N blue-eyed people, then it is possible that you don’t have blue eyes, in which case each of those N blue-eyed people can see only N-1 blue-eyed people. Therefore, you are sure everybody can see at least N-1 blues. Thus, everybody knows that there are at least N-1 blues.

    So, if there are 50 blues, you are sure that everyone can see at least 49. “There are at least 49 blues” is known to everybody.

    However, if you think that someone sees only 49 blues, then that person is inclined to think that there exists someone who sees only 48, by the above argument.

    The general principle is, if you can personally see N blue-eyed people, then you can be sure that:

    Everybody knows there are at least N-1 blue-eyed people.

    Everybody knows that everybody knows there are at least N-2 blue-eyed people.

    A pattern starts up.

    Everybody knows that everybody knows that everybody knows there are at least N-3 blue-eyed people.

    This can go on forever.

    Everybody knows that everybody knows that … there are at least 0 blue-eyed people.

    The stranger adds the information that:

    There is at least 1 blue-eyed person.

    Everybody knows there is at least 1 blue-eyed person.

    Everybody knows that everybody knows there is at least 1 blue-eyed person.

    Everybody knows that everybody knows … there is at least 1 blue-eyed person.

    Thus, one of the many pieces of information given by the stranger must eventually contradict the statement concerning 0 blue-eyed people.

  17. Steve Landsburg

    Ron: I think the problem most people have with this is the following: On the one hand, they understand the argument you’ve just given. On the other hand, it seems to them that no information is conveyed by the visitor, so they don’t see how the visitor can change anything. Therefore they feel like they’ve got a completely convincing argument that everybody commits suicide on the 100th day AND a completely convincing argument that nothing happens. That’s what they find so disturbing.

    So repeating one of the two completely convincing arguments is not, I think, helpful to those people (though of course it can be quite helpful to other people who have not yet grasped that argument). What those people need is an explanation of what’s WRONG with the OTHER apparently convincing argument, which is what I tried to provide in my response to Rowan.

  18. Snorri Godhi

    Matt F: John and Mary converge on a single belief along the lines of “There is a 60% probability that the universe is infinitely large.”

    Ah, then it need not be a total surrender of one of the two. That is what confused me at first reading: “if either of them ever has a change of mind on this topic” suggests to me a complete switch from one certainty to the other.

    A couple of side notes:
    * Either or both of John and Mary might decide to go for rational ignorance. If only one of them does not care about Moon landings, then paradoxically it is the one who cares who comes closer to the view of the other.
    * In this particular example, Mary cannot reasonably be sure that no human walked on the Moon, because she can’t prove a negative. For the same reason, the burden of proof is on John (although he would not be able to produce conclusive evidence, he should produce some sort of evidence); which means that, if I were Mary, while not sure of my position, I would not change my mind unless John brought up the subject again.

  19. Rowan

    Yup, I’m still not getting it. I see how it works with smaller numbers, but to me, scaling it up to larger numbers makes a qualitative, not merely quantitative, change. At some point, as far as I can tell in my current weakened brain-state, a sufficient number of villagers, with a sufficient percentage of blue-eyed villagers, means that eventually the system reaches an equilibrium of doubt, where each day’s lack of suicide conveys nothing more than the fact that everyone else is equally doubtful that the traveller meant them.

  20. Cos

    “I could have done this with 100 blue-eyed people but it would have taken a lot more space.”

    This is because you are assuming that they do not act like people, and furthermore, that they each assume that their fellow tribe members do not act like people. As you probably know from game theory, the behavior you posit *cannot* be scaled up to anywhere near 100, because even people who might logically follow the entire chain you’re drawing would assume that most of their fellow tribe members would not follow that chain to that extent, and would act accordingly. Therefore, nobody would act surprised, and there would be no effect.

  21. Ron

    Rowan: If you can see how it works with smaller numbers, then you
    have the answer. In order for it to work with larger numbers, the
    statement of conditions, “All the tribespeople are highly logical
    and devout, and they all know that each other is also highly
    logical and devout.” is necessary. It means that every villager
    is a perfect logician, so the problem scales up as far as necessary.
    You just have to accept as a given that if it works for 4 blue-eyes,
    it will work for N. A population of perfect logicians is not a likely
    real-world condition, which is why it seems so farfetched.

    The foreigner doesn’t convey any information that wasn’t
    apparent in the described conditions. Everyone can see
    that there are blue-eyed people there. It’s the first-ever
    public announcement of this fact that synchronizes the clock
    ticking for the logic chains described above. The further
    needed information comes from the lack of suicides by
    perfect logicians as the days tick by.

  22. Steve Landsburg

    Ron:

    The foreigner doesn’t convey any information that wasn’t apparent in the described conditions.

    I disagree with this. If there are four blue-eyed islanders, then the visitor conveys to Moe the new information that Larry knows that Shemp knows that Curly knows that there is at least one blue-eyed islander. Moe did not know this prior to the visitor’s announcement, and (being a perfect logician) he did know it immediately afterward.

  23. Ron

    … to Moe the new information that Larry knows that Shemp knows that Curly knows that there is at least one blue-eyed islander

    Okay, I think I get it, now. But the importance is not only that Moe
    now knows this; it’s that symmetry applies so that also

    … to Larry the new information that Moe knows that Shemp knows that Curly knows that there is at least one blue-eyed islander

    etc.

  24. Steve Harris

    (Coming a bit late to this.)

    Steve L., these statements of what Moe knows (given that Moe, Larry, Shemp, and Curly are the only four blue-eyes) are not accurate records of Moe’s knowledge:

    Moe knows
    (1): “Larry knows there are at least two blue eyed people on the island (namely Shemp and Curly).”

    Moe knows
    (2): “Larry knows that Shemp knows that there is at least one blue eyed person on the island (namely Curly)”

    Rather, what Moe knows is this:

    Moe knows
    (1′): Either Larry knows
    (a) there are at least two blue-eyes (Shemp and Curly),
    or Larry knows
    (b) there are at least three blue-eyes (Shemp, Curly, and me).

    (a): Moe considers Moe is brown
    (b): Moe considers Moe is blue

    Note that (1′) is not the same as (1). That is to say, if every islander writes down each night the maximum information he has at that time about B, the number of blue-eyes, then (1) is the assertion that Larry writes “B>N” where N is at least 1, while (1′) is the assertion that Larry writes “B>N” where N is either 1 or 2.

    Similarly, when cataloguing Larry’s state of knowledge, Moe doesn’t know (2), but

    Moe knows:
    (2′) Larry either knows
    (a) Shemp knows either
    (i) there is at least one blue-eye (Curly) or
    (ii) there are at least two blue-eyes (Curly and Larry),
    or Larry knows
    (b) Shemp knows either
    (i) there are at least two blue-eyes (Curly and Moe) or
    (ii) there are at least three blue-eyes (Curly and Larry and Moe).

    Key to (2′):

    (ai): Moe considers Moe is brown, Moe considers Larry considers Larry is brown
    (aii): Moe considers Moe is Brown, Moe considers Larry considers Larry is blue
    (bi): Moe considers Moe is blue, Moe considers Larry considers Larry is brown
    (bii): Moe considers Moe is blue, Moe considers Larry considers Larry is blue

    (2) is the assertion that Larry knows Shemp writes “B>N” for N at least 0, while (2′) is the assertion that either Larry knows Shemp writes “B>N” for N either 0 or 1, or Larry knows Shemp writes “B>N” for N either 1 or 2.

    (1′) is stronger than (1) (if Larry’s N is either 1 or 2, then, in particular, it’s at least 1). Also, (2′) is stronger than (2) (if either Larry knows Shemp’s N is 0 or 1, or Larry knows Shemp’s N is 1 or 2, then in particular Larry knows Shemp’s N is at least 0).

    In similar fashion, when Moe calculates what Larry calculates about what Shemp calculates Curly knows, we get this:

    Moe knows:
    3′) either Larry knows
    (a) either Shemp knows
    (i) either Curly knows
    (1) nothing or
    (2) there is at least one blue-eye (Shemp), or
    (ii) either Curly knows
    (1) there is at least one blue-eye (Larry) or
    (2) there are at least two blue-eyes (Larry and Shemp)
    or Larry knows
    (b) either Shemp knows
    (i) either Curly knows
    (1) there is at least one blue-eye (Moe) or
    (2) there are at least two blue-eyes (Shemp and Moe), or
    (ii) either Curly knows
    (1) there are at least two blue-eyes (Larry and Moe) or
    (2) there are at least three blue-eyes (Shemp and Larry and Moe)

    Key to (3′):

    (ai1): Moe considers Moe is brown, Moe considers Larry considers Larry is brown, Moe considers Larry considers Shemp considers Shemp is brown
    (ai2): Moe considers Moe is brown, Moe considers Larry considers Larry is brown, Moe considers Larry considers Shemp considers Shemp is blue
    (aii1): Moe considers Moe is brown, Moe considers Larry considers Larry is blue, Moe considers Larry considers Shemp considers Shemp is brown
    (aii2): Moe considers Moe is brown, Moe considers Larry considers Larry is blue, Moe considers Larry considers Shemp considers Shemp is blue
    (bi1): Moe considers Moe is blue, Moe considers Larry considers Larry is brown, Moe considers Larry considers Shemp considers Shemp is brown
    (bi2): Moe considers Moe is blue, Moe considers Larry considers Larry is brown, Moe considers Larry considers Shemp considers Shemp is blue
    (bii1): Moe considers Moe is blue, Moe considers Larry considers Larry is blue, Moe considers Larry considers Shemp considers Shemp is brown
    (bii2): Moe considers Moe is blue, Moe considers Larry considers Larry is blue, Moe considers Larry considers Shemp considers Shemp is blue

    Stranger arrives and avers there is at least one blue; this means scenario (3’ai1) is impossible. So Moe says to himself, “Let’s assume I’m brown; then consider things from Larry’s point of view: Larry can say, ‘Assume I’m brown; then from Shemp’s point of view, if Shemp assumes he’s brown, Curly knows he’s the lone blue and so must kill himself–so if Shemp watches Curly, he’ll know what’s up. In particular, if Curly doesn’t kill himself on StrangerDay + 1, Shemp will know the assumption of Shemp being brown is untenable, so Shemp is blue, so Shemp must kill himself on StrangerDay + 2, as will Curly. So if I observe Curly and Shemp, and they neither one kill themselves in the next couple of days, it can only mean that my assumption of brown for myself is wrong, and I’ll kill myself on StrangerDay + 3, as will Shemp and Curly.’ So now I must watch those three mugs; if none of them kill themselves in the next three days, it must be because my assumption of being brown is wrong, so I’ll kill myself on StrangerDay + 4, as will Larry and Shemp and Curly.”

    So if there are 100 blues, each watches the others watching the others… The longer the period passes without suicides, the more each one realizes that everyone else is realizing…

    Which is so counter-intuitive, I’m not at all sure I’ve done this right.

  25. Steve Landsburg

    Steve Harris:

    I said that Moe knows this:

    (1): “Larry knows there are at least two blue eyed people on the island (namely Shemp and Curly).”

    You said that rather, what Moe knows is this:

    (2) Moe knows
    (1′): Either Larry knows
    (a) there are at least two blue-eyes (Shemp and Curly),
    or Larry knows
    (b) there are at least three blue-eyes (Shemp, Curly, and me).

    But (a) is the same as (1) and (b) implies (1), so [(a) or (b)] implies (1). Thus (2) implies (1). So if Moe knows (2) (as you claim) then he also knows (1) (as I claim).

  26. Rowan

    Kareem, if I’m understanding you correctly, then you’re saying that before the stranger arrived everyone ultimately came to this conclusion:
    “Everybody knows that everybody knows that … there are at least 0 blue-eyed people.”

    But if they had, they would have also concluded that they themselves must have brown eyes, and would have all killed themselves before the stranger got there. They couldn’t come to that conclusion, fortunately, because it was patently false, what with 100 blue-eyed people running around.

    As far as I can see, it goes like this (with the original numbers):
    Each blue-eyed villager knows that there are either 99 or 100 blue-eyed villagers. Each brown-eyed villager knows that there are either 100 or 101 blue-eyed villagers. Everyone knows that everyone knows that there are either 99, or 100, or 101 blue-eyed villagers. End of story. The stranger adds no new information.

    Also, as an aside, it’s a good thing these perfect logicians never figured out genetics, or every child of two blues or a blue and a brown would have been orphaned as soon as the parents could reliably detect its eye colour. ;-)

  27. Dave

    Prof Landsburg, I am thoroughly confused. If Moe can see 3 blue-eyed people, he knows that everyone on the island sees either 2 blue-eyed people (as applies to Larry, Curly and Shemp), or 3 (as applies to every brown-eyed person on the island if Moe has brown eyes), or 4 (as applies to every brown-eyed person if Moe has blue eyes). ie every single person on the island knows that there are at least 2 people with blue eyes on the island. Just because somebody states that there is one blue-eyed person on the island doesn’t give any new information at all, does it?

    I understand Ron’s solution above in the case of 1 blue-eyed person only (ie he would get it straight away) or 2 blue-eyed people only (ie they would both get it on day 2 when the other hadn’t killed himself). But doesn’t it fall apart if there are 3 or more? ie the visitor’s statement implies he sees one with blue eyes – but everyone sees 2 or more with blue eyes, so the information that there is one with blue eyes gives nothing away…..

    or does it? slowly realisation forming in my mind

    ok and I’m slowly getting it now as I type this out.

    Can you please confirm that the blue eyes all kill themselves on day 100, and the next day all the brown eyes (having realised that they are not blue) all kill themselves too? Without counting the number of days I can’t see how it can work

    is the wording complete in the question? There doesn’t seem to be any insistence that there can’t be green eyes for example which would actually save all the brown eyes from suicide (ie just because all the blue eyes have offed themselves still doesn’t give me certainty that I’m not green or purple or something)…..

  28. Ron

    Dave: The brown eyes don’t kill themselves. Not enough information.
    They now know that they don’t have blue eyes. Each person does
    not know whether their own eyes might or might not be green.

    And yes, for the blue eyes, counting the days is critical.

  29. Dave

    ok I think I get it! I guess this is the point that makes teaching the most fun….thanks Ron for clarification on both points!

    Once that information has been received, each person would start on the assumption that they are brown and pick any blue eyed person to observe their behavior and determine if they are blue eyed themselves. It doesn’t matter which they choose.

  30. Kareem Carr

    Rowan:

    There are always at least 0 blue-eyed people. Did you perhaps overlook the ‘at least’ part of the statement?

    There are actually two statements, of particular relevance to my argument, that will be present in the population.

    Let me invent some notation here. Let

    “(Everybody knows that)^2 there are at least 0 blue-eyed people.”

    mean:

    “Everybody knows that Everybody knows that there are at least 0 blue-eyed people.”

    Then, if I can see N blues, I can eventually conclude that:

    (Everybody knows that)^N there are at least 0 blue-eyed people.

    If there are N blue-eyed people in the population then browns see N blue-eyed people and blues see N-1 blue-eyed people. Thus, browns believe that:

    (Everybody knows that)^N there are at least 0 blue-eyed people.

    While, all blues (who can only see N-1 blue-eyed people) believe that:

    (Everybody knows that)^(N-1) there are at least 0 blue-eyed people.

    The stranger changes the situation so that:

    (Everybody knows that)^N there is at least 1 blue-eyed person.

    (Everybody knows that)^(N-1) there is at least 1 blue-eyed person.

    So he gives both groups new information.

    This then gives them a mechanism for counting the number of blue-eyed people by counting the number of days without suicides.
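
    In code, the bookkeeping above amounts to this small function (my own sketch of the notation, with an invented helper name):

        def lower_bound(blues_seen, depth):
            # Before the stranger speaks: the best bound on the number of
            # blues that survives `depth` nestings of "everybody knows
            # that", if I can personally see `blues_seen` blues.  Each
            # nesting must allow one more hypothetically brown pair of eyes.
            return max(blues_seen - depth, 0)

        # A blue-eyed islander among 100 blues sees 99:
        for depth in (0, 1, 50, 98, 99):
            print(depth, lower_bound(99, depth))

    By depth 99 the guaranteed bound is already 0, while the stranger makes “there is at least 1 blue-eyed person” hold at every depth. That is the new information.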

  31. Snorri Godhi

    It has suddenly come to my attention that there is a good reason why the “logical” solution to the eye-color puzzle leaves people uneasy:
    why should people make the effort to work out the logical implications of a piece of information, when the only possible practical implication would be that they have to commit suicide?
    Or to put it another way: would the villagers look into a mirror, if they found one?

  32. Cos

    I think the problem here comes from stating that the actors in this puzzle are “people” but then assuming their behavior to be completely unpeople-like. Stating that they’re devout logicians isn’t enough to override that, because we all know that people who are devout logicians would still expect their peers to act like people, and expect that if their peers are devout logicians, those peers would expect *them* to also act like people. So while the core idea of the puzzle works mathematically, it only works with non-human entities of some sort that you need to define more carefully; the math you’re using doesn’t actually apply to the story in the brain teaser.

  33. Steve Harris

    Steve L,

    To say, “If Moe knows (1), then Moe knows (2),” is not the same as saying “Moe knows (1) is the same as Moe knowing (2)”. Thus, when I correct your

    (1) Moe knows “Larry knows there are at least two blue eyed people on the island (namely Shemp and Curly)”

    to

    (1′) Moe knows “Either Larry knows
    (a) there are at least two blue-eyes (Shemp and Curly),
    or Larry knows
    (b) there are at least three blue-eyes (Shemp, Curly, and me)”,

    I am actually making a substantive change to the content of Moe’s knowledge: He knows more than just that when Larry writes down his lower bound for blues, Larry writes down an N > 1; Moe knows that if Moe’s eyes are brown, Larry writes down N = 2, while if Moe’s eyes are blue, Larry writes down N = 3. Yes, this implies Larry’s N is at least 2; but Moe knows more than just that, he knows the circumstances under which Larry’s N is 2 and the circumstances under which Larry’s N is 3.

    However, I don’t find that it makes any substantive difference to the conclusion. As I said, I still am unsettled in this matter. The stranger doesn’t say anything other than what anyone else might have said, had the taboo on discussion not been in place, so how is this additional information? Maybe I need to digest Kareem’s points with only 3 stooges.

    Steve H.

  34. Vishal

    Impressed by this little brain teaser and the post, I just ordered The Big Questions from Amazon!

  35. Steve Landsburg

    Vishal: Thanks, and thanks for letting me know. I hope the rest of the book impresses you too!

  36. Cos

    I realized that I completely neglected another fundamental flaw with this brain teaser, and again, it’s not a problem with the core puzzle the brain teaser is *trying* to convey, it’s a flaw in the attempt to link that puzzle to the story – and the fact that the story involves humans.

    Imagine that you are one of these people, and are aware of only 3 other blue-eyed people. This means you might be in a population with only 3 blue-eyed people out of 100, or you might be in a population with 4 blue-eyed people, one of whom is you. How can you tell the difference between these two possibilities?

    For silence to convey that information, you have to assume that people learn in discrete time quanta that are *synchronous* with each other, and furthermore, that they all *know* that their time quanta of fact-learning are synchronous. This is what would allow you to figure out, after three discrete “learning” steps, that the fact that each of the 3 blue eyed people you observe hasn’t acted yet means that they each also observe 3 blue eyed people, because if they each observed only two, they would all have acted one time unit earlier.

    But if you don’t assume synchrony and discreteness, you have no way of knowing how quickly each person observes other people’s observations relative to how quickly you observe them, or relative to how quickly anyone else does, so you have no basis for deciding when the end condition of this mutual learning has been reached. Furthermore, you know that nobody else has any basis for knowing when this end condition should be reached, either. What if those other 3 blue-eyed people haven’t killed themselves yet simply because they don’t yet know that each of the other 3 hasn’t yet decided that each of the other 3 hasn’t yet decided… and you might be brown-eyed after all. And merely having that doubt means that you know that all 3 of them should have the same doubt, so that even if you are blue-eyed it wouldn’t give them reliable knowledge.

    Otherwise, what’s to stop every single brown-eyed person from concluding that they’re blue-eyed?

    Remember, we don’t even need to assert that people *don’t* learn discrete facts one by one on a synchronized universal clock. All we need to assert is that, in the (unlikely) case that they do, they don’t *all* *know* that everyone learns the same kinds of discrete facts one by one on a synchronized universal clock.

  37. Huck

    To return to John and Mary for a bit. The thought experiment doesn’t appear to allow for John and Mary to believe that the other is inversely intelligent. That is, John is inclined to believe the opposite of what Mary says: the more confident she is, the less likely she is to be right. She may be a perfectly devout truthseeker, but John believes she is exactly the opposite of intelligent, where intelligent means capable of analyzing evidence to come to the correct conclusion. With each new piece of ‘silent info’ Mary gives John, he becomes more convinced of his position. Mary happens to have the exact same opinion of John. They will never agree, despite being honest and devout truthseekers. In fact their certainty will increase forever, approaching but never quite reaching absolute certainty. But for all practical purposes they will after some time both be 99.999 percent sure that they are correct, and all this is regardless of exactly how certain either of them is in the original principle. That principle being that the other party is inversely intelligent. Whether John is 70% or 51% certain of that at the outset doesn’t matter. He will eventually become virtually positive of the opposite of all Mary’s opinions.

    It occurs to me that at some point John might want to reconsider the initial principle about Mary’s inverse intelligence and start wearing that one down by degrees in the same way. But that possibility isn’t allowed for in the original argument so I don’t see why it should be here.

    Or, what if the point at issue is not moonwalking but instead whether Mary is inversely intelligent? How can John decide the principle without using it to make his decision? I don’t know that he can, and since the answer to the question about Mary’s intelligence is an assumption upon which the argument relies, it really has to be the first one answered before we can figure out moonwalking.

