In 1954, Savage wrote a lovely and highly influential little book called The Foundations of Statistics, which starts with six simple axioms about human preferences — one of which says that if you prefer a dog to a cat, then you’ll prefer an 11% chance of a dog to an 11% chance of a cat (and likewise for any other percentage). From these axioms, he drew deep and surprising conclusions about human behavior. This work underlies much of modern game theory, decision theory and economics in general.
According to legend (and I have reason to suspect this legend is actually true), Professor Savage was giving a talk one day when he was interrupted by the French econometrician (and then-future Nobel Prize winner) Maurice Allais, who asked Savage if he’d be willing to answer two questions about his own preferences. Savage said sure. These were the questions:
Question 1: Which would you rather have:
- A: A million dollars for certain
- B: A lottery ticket that gives you an 89% chance to win a million dollars, a 10% chance to win five million dollars, and a 1% chance to win nothing
Question 2: Which would you rather have:
- A: A lottery ticket that gives you an 11% chance at a million dollars (and an 89% chance of nothing)
- B: A lottery ticket that gives you a 10% chance at five million dollars (and a 90% chance of nothing)
(Actually, the dollar amounts in Allais’s questions were exactly half of the dollar amounts I’m reporting here. Consider this an adjustment for inflation.)
Savage answered A for Question 1 and B for Question 2.
This was exactly what Allais had been hoping for — and expecting. His past experience had taught him that over half of everyone he asked gave this answer pair. (I’ve been putting this question to my classes for many years now, and my own experience is that only about a third give this answer pair. But that’s plenty enough to be disturbing.)
Allais proceeded to demonstrate that while the Savage axioms allow rational people to choose 1A and allow other rational people to choose 2B, they don’t allow any one rational person to choose both 1A and 2B. Therefore, he triumphantly announced, Savage must either abandon his axioms or confess to his own irrationality.
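Allais's demonstration is simple arithmetic. Here is a minimal sketch in Python (my illustration, not from the original exchange; the prize labels and utility numbers are made up): under Savage's axioms, a rational person values a lottery by the probability-weighted sum of the utilities of its prizes. Whatever utilities you assign to $0, $1 million, and $5 million, the difference in value between 1A and 1B is algebraically identical to the difference between 2A and 2B, so the two differences always have the same sign.

```python
def expected_utility(lottery, u):
    """Value of a lottery: sum of probability * utility over its prizes.

    lottery: list of (probability, prize) pairs
    u: dict mapping each prize to a (subjective, arbitrary) utility
    """
    return sum(p * u[prize] for p, prize in lottery)

gamble_1a = [(1.00, "1M")]
gamble_1b = [(0.89, "1M"), (0.10, "5M"), (0.01, "0")]
gamble_2a = [(0.11, "1M"), (0.89, "0")]
gamble_2b = [(0.10, "5M"), (0.90, "0")]

# Try several arbitrary utility assignments, from risk-neutral to
# strongly risk-averse. In every case the two differences coincide:
# EU(1A) - EU(1B) = 0.11*u(1M) - 0.10*u(5M) - 0.01*u(0) = EU(2A) - EU(2B)
for u in [{"0": 0, "1M": 1, "5M": 5},
          {"0": 0, "1M": 10, "5M": 11},
          {"0": -3, "1M": 1, "5M": 100}]:
    d1 = expected_utility(gamble_1a, u) - expected_utility(gamble_1b, u)
    d2 = expected_utility(gamble_2a, u) - expected_utility(gamble_2b, u)
    print(round(d1, 10), round(d2, 10))
```

Since preferring 1A means the first difference is positive, the second difference is positive too, so the same person must prefer 2A. No choice of utilities can make 1A and 2B consistent.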
Savage gave the matter a little thought and said: “Oh. You’re right. I made a mistake. I’m quite sure I prefer 1A over 1B, and I’m quite sure that a person who prefers 1A to 1B will also be happier with 2A than 2B. Therefore, I choose 2A.”
Afterward, in defense of this change of heart, Savage wrote:
It seems to me that in reversing my preference between gambles 2A and 2B I have corrected an error. There is, of course, an important sense in which preferences, being entirely subjective, cannot be in error — but in a different, more subtle sense they can be. Let me illustrate by a simple example containing no reference to uncertainty. A man buying a car for $2,134.56 is tempted to order it with a radio installed, which will bring the total price to $2,228.41, feeling that the difference is trifling. But when he reflects that, if he already had the car, he certainly would not spend $93.85 for a radio for it, he realizes that he has made an error.
This leaves us, I think, with a small menu of possible conclusions from the Allais Paradox (that is, the fact that over half of Allais’s respondents, and about a third of my students, give the answers 1A and 2B).
- Maybe people don’t take surveys seriously. Actual experiments with real money might give more trustworthy results. Unfortunately, it’s difficult to find funding for experiments that involve disbursing millions of dollars (and it’s not at all clear that you’d get the same responses if you cut all the amounts by a factor of, say, a million).
- Maybe people have no stable preferences. In Savage’s day, this conclusion would have meant throwing in the intellectual towel. If preferences fluctuate randomly, then it seems there’s no hope of modeling or predicting behavior. Today, the emerging field of behavioral economics (with much input from psychology) holds out hope that preferences might fluctuate systematically in ways that can indeed be modeled. Going down this road means throwing out — or at least reworking — a lot of successful economic theories. Maybe that will eventually prove worthwhile, but it comes at a high cost. It also makes it almost impossible to choose economic policies that will make people happier, since what makes them happy at 2:00 might not be the same thing that makes them happy at 2:30.
- Maybe people value ignorance. I explained here how this might just barely account for the 1A/2B answers. On the other hand, I also explained how some simple experiments with urns and colored balls might show that the paradox survives even when the prospect of ignorance is removed from the equation.
- Maybe people sometimes make mistakes — even smart people like Jimmie Savage. This really isn’t so surprising or so troubling. Someone (I forget who) once pointed out that great mathematicians make arithmetic mistakes all the time, but we don’t conclude that something must be wrong with the foundations of arithmetic. If this is all that’s going on, it’s both bad news and good news for economists. It’s bad news because those mistakes are a part of human nature that we’re not good at predicting (though once again the behavioral economists might someday ride to the rescue). But it’s good news because it means that we can make ourselves useful by pointing out some of these mistakes and helping people make better decisions. If you’re sure that 1A will make you happier than 1B, then I’m sure that 2A will make you happier than 2B, and I can explain why.
I lean toward number four.
That’s all I have to say on this subject (though I expect there will be extensive discussion in the comments, as there has been on our other Allais-paradox posts). Stay tuned for more paradoxes of rationality over the next few months.