Leonard Jimmie Savage was a pioneer in modern decision theory and a disciple of Frank Plumpton Ramsey, whose story occupies the final chapter of The Big Questions.
In 1954, Savage wrote a lovely and highly influential little book called The Foundations of Statistics, which starts with six simple axioms about human preferences — one of which says that if you prefer a dog to a cat, then you’ll prefer an 11% chance of a dog to an 11% chance of a cat (and likewise for any other percentage). From these axioms, he drew deep and surprising conclusions about human behavior. This work underlies much of modern game theory, decision theory and economics in general.
According to legend (and I have reason to suspect this legend is actually true), Professor Savage was giving a talk one day when he was interrupted by the French econometrician (and then-future Nobel Prize winner) Maurice Allais, who asked Savage if he’d be willing to answer two questions about his own preferences. Savage said sure. These were the questions:
Question 1: Which would you rather have:
- A million dollars for certain
- A lottery ticket that gives you an 89% chance to win a million dollars, a 10% chance to win five million dollars, and a 1% chance to win nothing.
Question 2: Which would you rather have:
- A lottery ticket that gives you an 11% chance at a million dollars (and an 89% chance of nothing)
- A lottery ticket that gives you a 10% chance at five million dollars (and a 90% chance of nothing)
(Actually, the dollar amounts in Allais’s questions were exactly half of the dollar amounts I’m reporting here. Consider this an adjustment for inflation.)
Savage answered A for Question 1 and B for Question 2.
This was exactly what Allais had been hoping for — and expecting. His past experience had taught him that over half of everyone he asked gave this answer pair. (I’ve been putting this question to my classes for many years now, and my own experience is that only about a third give this answer pair. But that’s plenty enough to be disturbing.)
Allais proceeded to demonstrate that while the Savage axioms allow rational people to choose 1A and allow other rational people to choose 2B, they don’t allow any one rational person to choose both 1A and 2B. Therefore, he triumphantly announced, Savage must either abandon his axioms or confess to his own irrationality.
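Allais's incompatibility claim is easy to verify mechanically. The sketch below (my own illustration, assuming an expected-utility maximizer; the helper names are invented) checks that for any assignment of utilities to the three prizes, EU(1A) − EU(1B) equals EU(2A) − EU(2B) exactly, so nobody obeying the axioms can strictly prefer both 1A and 2B:

```python
import random

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {prize: probability}."""
    return sum(p * u[prize] for prize, p in lottery.items())

# The four gambles, with prizes in dollars.
q1a = {1_000_000: 1.00}
q1b = {1_000_000: 0.89, 5_000_000: 0.10, 0: 0.01}
q2a = {1_000_000: 0.11, 0: 0.89}
q2b = {5_000_000: 0.10, 0: 0.90}

# For ANY utility assignment, EU(1A) - EU(1B) == EU(2A) - EU(2B),
# so an expected-utility maximizer who prefers 1A must prefer 2A.
for _ in range(1000):
    u = {prize: random.random() for prize in (0, 1_000_000, 5_000_000)}
    d1 = expected_utility(q1a, u) - expected_utility(q1b, u)
    d2 = expected_utility(q2a, u) - expected_utility(q2b, u)
    assert abs(d1 - d2) < 1e-12
```

Algebraically, both differences reduce to 0.11·u($1M) − 0.10·u($5M) − 0.01·u($0), which is the heart of Allais's argument.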
(You can read an account of Allais’s argument here, with followups here, here and here.)
Savage gave the matter a little thought and said: “Oh. You’re right. I made a mistake. I’m quite sure I prefer 1A over 1B, and I’m quite sure that a person who prefers 1A to 1B will also be happier with 2A than 2B. Therefore, I choose 2A.”
Afterward, in defense of this change of heart, Savage wrote:
It seems to me that in reversing my preference between gambles 2A and 2B I have corrected an error. There is, of course, an important sense in which preferences, being entirely subjective, cannot be in error — but in a different, more subtle sense they can be. Let me illustrate by a simple example containing no reference to uncertainty. A man buying a car for $2,134.56 is tempted to order it with a radio installed, which will bring the total price to $2,228.41, feeling that the difference is trifling. But when he reflects that, if he already had the car, he certainly would not spend $93.85 for a radio for it, he realizes that he has made an error.
This leaves us, I think, with a small menu of possible conclusions from the Allais Paradox (that is, the fact that over half of Allais’s respondents, and about a third of my students, give the answers 1A and 2B).
- Maybe people don’t take surveys seriously. Actual experiments with real money might give more trustworthy results. Unfortunately, it’s difficult to find funding for experiments that involve disbursing millions of dollars (and it’s not at all clear that you’d get the same responses if you cut all the amounts by a factor of, say, a million).
- Maybe people have no stable preferences. In Savage’s day, this conclusion would have meant throwing in the intellectual towel. If preferences fluctuate randomly, then it seems there’s no hope of modeling or predicting behavior. Today, the emerging field of behavioral economics (with much input from psychology) holds out hope that preferences might fluctuate systematically in ways that can indeed be modeled. Going down this road means throwing out — or at least reworking — a lot of successful economic theories. Maybe that will eventually prove worthwhile, but it comes at a high cost. It also makes it almost impossible to choose economic policies that will make people happier, since what makes them happy at 2:00 might not be the same thing that makes them happy at 2:30.
- Maybe people value ignorance. I explained here how this might just barely account for the 1A/2B answers. On the other hand, I also explained how some simple experiments with urns and colored balls might show that the paradox survives even when the prospect of ignorance is removed from the equation.
- Maybe people sometimes make mistakes — even smart people like Jimmie Savage. This really isn’t so surprising or so troubling. Someone (I forget who) once pointed out that great mathematicians make arithmetic mistakes all the time, but we don’t conclude that something must be wrong with the foundations of arithmetic. If this is all that’s going on, it’s both bad news and good news for economists. It’s bad news because those mistakes are a part of human nature that we’re not good at predicting (though once again the behavioral economists might someday ride to the rescue). But it’s good news because it means that we can make ourselves useful by pointing out some of these mistakes and helping people make better decisions. If you’re sure that 1A will make you happier than 1B, then I’m sure that 2A will make you happier than 2B, and I can explain why.
I lean toward number four.
That’s all I have to say on this subject (though I expect there will be extensive discussion in the comments, as there has been on our other Allais-paradox posts). Stay tuned for more paradoxes of rationality over the next few months.
5. People are rational but some versions of economic utility theory make the bizarre assumption that your preference for a dog is independent of whether or not you already have a dog.
Here is my question. I am not smart enough to comment on the underlying logic proving the incompatibility of the A and B answer pair. I will leave that to better minds than mine to unpack. What I wonder about is the potentially missed importance.
I picked A and B, because I was not prepared, psychologically, to know I missed a chance to profoundly alter my life for the better due to, basically as I see it, a bit of greed. I couldn’t live with myself.
However, in the second answer, I figured it was much more a fluke, and I certainly wouldn’t spend the rest of my life thinking about a miss on an 11% versus 10% chance, so I rolled the dice, and have been declared, and most likely proven, to be illogical.
Fine, I can live like that. My big concern is that people who actually make policy and decisions based on irrational people like me are able to step away from the models and see what is actually happening. I hope they are….
I think 5 has a lot going for it, but with an extra caveat. People are very reluctant to acknowledge their mistakes, and will stick with them even after they are pointed out. Hats off to Savage for publicly acknowledging his “mistake”.
If you choose A/B, I think you have the option of consoling yourself with the thought “I probably wouldn’t have won anyway” for Q2. This may be logically false, but we have seen in these discussions that it is fiendishly difficult to really get your head around these issues, and it is very likely that the “mistake” will persist.
The “framing effect”, I mentioned before, is where people will choose differently if the choices are presented differently, e.g. if an operation is said to have a 70% survival rate (good) or a 30% death rate (bad). This is a similar “mistake” to the one above.
The “Monty Hall” question has a similar effect. For those not familiar here it is:
The quiz master has 3 cards, 1 with a prize and 2 with nothing. You pick one (without seeing it). He then reveals one of the cards you did not choose as a zero, and offers you the chance to change your mind and pick the other card. Many people choose to stick with their original card.
People are reluctant to change their choice if they have not really thought it through. I think the Monty Hall question is easier to see after it has been explained, but perhaps not.
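The Monty Hall claim is easy to check by simulation. A minimal sketch (my own, assuming the standard rules: the host always reveals an empty, unchosen card):

```python
import random

def monty_hall_trial(switch):
    """One round: 3 cards, one prize; host reveals an empty unchosen card."""
    prize = random.randrange(3)
    pick = random.randrange(3)
    # Host opens a card that is neither the pick nor the prize.
    opened = next(c for c in range(3) if c != pick and c != prize)
    if switch:
        pick = next(c for c in range(3) if c != pick and c != opened)
    return pick == prize

trials = 100_000
stick_wins = sum(monty_hall_trial(switch=False) for _ in range(trials))
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials))
print(f"stick:  {stick_wins / trials:.3f}")   # about 1/3
print(f"switch: {switch_wins / trials:.3f}")  # about 2/3
```

Switching wins exactly when the first pick was wrong, which happens 2/3 of the time, yet many people stick anyway.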
I say number 4 with a sprinkling of number 1. Number 2 doesn’t seem like an explanation for Allais-type behavior since the time interval between asking the questions is very small.
@Roger Schlafly
“some versions of economic utility theory make the bizarre assumption that your preference for a dog is independent of whether or not you already have a dog”
Economic utility theory makes no such assumption. The assumption it does make is the following: Given two mutually exclusive states of the world (in Steve’s example, these two states of the world are ‘drawing a red ball’ and ‘not drawing a red ball’), your preference for a dog in state 2 is independent of whether or not you get a dog in state 1. Now, you still might think this is a crazy assumption, but it is fundamentally different from what you wrote.
Let me try to explain why maybe this isn’t such a crazy assumption after all. Consider whether or not a red ball is drawn in question 1. If a red ball is drawn, then your preference between bets A and B doesn’t matter; the outcome is the same. If the ball drawn is not red, then your preference between A and B matters. But at that point you know the ball is not red, so you know the choice between A and B is really the choice between a million dollars for sure and a 10/11 chance at five million. Hence, you have two scenarios: Either the ball is red and your preferences don’t matter, or the ball is not red and your preferences only matter between the million dollars for sure and the 10/11 chance at five million dollars.
Now take the “question 1” in the second sentence and replace it with “question 2”. Again, the only time your preferences matter is when the ball is not red. Hence, your preference for A or B should not change between question 1 and question 2.
Maybe you think that your preference between A and B should matter when the ball is red. However I don’t see why it should.
Roger Schlafly/John:
John’s reply to Roger is exactly what I’d have written, except that John has probably said it better than I would have.
Tony Cohen: You are essentially endorsing conclusion 3.
Great story. Thanks for sharing.
I too lean toward 4. I know this is the one that often explains my own positions.
Here is some good advice someone once put in a book:
–
“Argue passionately for your beliefs; listen intently to your adversaries, and root for yourself to lose. When you lose, you’ve learned something.”
–
If I could only do this more consistently . . .
Speaking of irrational choices:
Why does anyone waste their time and vote?
A single vote can never decide the President and has never decided a Senator.
We all have far more valuable things to do than waste it waiting on line to vote.
I started writing this comment to state that #4 was the correct answer. But after thinking about it a little more, I realized that #2 is actually the real reason.
#2 looks a lot to me like Misesian/Rothbardian subjective utility theory and time-preference rearing its ugly head again. It seems that as time goes on, we just can’t shake the ideas of those crazy Austrian School economists, can we?
Theoretical observation:
There are well-known generalizations of the vNM cardinal model which don’t require linearity, the condition that gives rise to the Allais and similar paradoxes. Peter Fishburn’s Nonlinear Preference and Utility Theory, pages 36–40, sets out these alternatives and their application to Allais-type paradoxes. These generalizations are missing from your list.
Practical observations:
1. If you maximize expected dollars, you won’t make the Allais error.
2. But, probably more importantly, the vNM cardinal model is not well suited to treat probability as a factor in a multi-attribute utility model. The easiest explanation for the Allais effect, and others like it, is that people are using simple rules or heuristics that trade-off the relative differences between chance and reward.
For example, you might have two rules: 1A) Always choose a sure thing unless the alternative has a very large expected return, and 2B) if two outcomes are almost as likely, then choose the one with the bigger return.
It is far from clear to me that these two rules are always irrational – which is what a proponent of vNM has to say.
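As a sketch of how rules like these could generate the 1A/2B pattern, here is a toy lexicographic heuristic in Python. The representation and the thresholds (the “2×” and the “0.05”) are my own invented illustrations, not anything from Fishburn:

```python
def ev(lottery):
    """Expected value of a lottery given as {prize: probability}."""
    return sum(p * prize for prize, p in lottery.items())

def chance_of_winning(lottery):
    return sum(p for prize, p in lottery.items() if prize > 0)

def heuristic_choice(a, b):
    # Rule 1A: take a sure thing unless the alternative's expected
    # value is very much larger (the 2x threshold is invented).
    if chance_of_winning(a) == 1.0:
        return 'B' if ev(b) > 2 * ev(a) else 'A'
    # Rule 2B: if the two chances are almost equal, take the bigger prize.
    if abs(chance_of_winning(a) - chance_of_winning(b)) <= 0.05:
        return 'A' if max(a) > max(b) else 'B'
    return 'A' if ev(a) >= ev(b) else 'B'

q1a = {1_000_000: 1.00}
q1b = {1_000_000: 0.89, 5_000_000: 0.10, 0: 0.01}
q2a = {1_000_000: 0.11, 0: 0.89}
q2b = {5_000_000: 0.10, 0: 0.90}

print(heuristic_choice(q1a, q1b))  # rule 1A fires: A
print(heuristic_choice(q2a, q2b))  # rule 2B fires: B
```

Question 1 triggers the sure-thing rule (an expected $1.39M is not “very much larger” than a sure $1M under the toy threshold), while question 2 triggers the nearly-equal-chances rule, so this agent answers 1A and 2B — exactly the Allais pattern.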
1 also causes 4. When SL posted the choices I knew immediately that he had a motive and that therefore some pairs of choices were incompatible. So with some thought you can get it right. But the whole point is that your instinctive reactions can conflict. This is useful for learning how your mind works, and for showing why expressed preferences should be taken with a dose of skepticism.
I, like John, don’t see how #2 is an explanation. Why would time-varying preferences cause 1A/2B to be rational? If #2 was “Maybe people don’t have well defined preferences” I could see how that might pretty much preclude any behavior from being deemed irrational, but I don’t see how just the fact of preferences being time varying solves this problem.
Great post, very interesting story. I just have one comment and one criticism.
Comment: #2 and #4 seem very similar to me, particularly from the behavioral point of view. What is the practical difference between systematically fluctuating preferences and systematic “mistakes”?
Criticism: You say in #2 that going down the road of behavioral economics comes at a high cost. Really? If the theories are someday proven and accepted to be wrong and need to either be dumped or significantly reworked, then I’d say the value of these theories is quite small, and therefore so is the cost of discarding them. As for the ability to use these theories to improve welfare, I’d say that task is already almost impossible. Practically any proposed policy will make someone worse off, so we are always talking about trade-offs, which are subjective.
The problem is you’re asking a question about odds and expecting people to compute the expectations in their heads before answering.
What happens when you ask the questions like this
Which game would you rather play?
A. An expected $1 million payout.
B. An expected $1.39 million payout.
And question two:
A. An expected $110,000 payout.
B. An expected $500,000 payout.
If you ask this question, I believe you’ll always get the same answer to both questions, and it will always be B.
By leaving out the odds of winning, however, you’re leaving out important information.
When you give the odds, you’re giving complete information, but you’re expecting people to do the math. Why not ask the question with all the math done? Then people don’t have to waste time with a calculator!
David, I don’t think people are confused as to which option has the greatest expected value. I would think that most people intuitively see that B has a greater expected value. The reason people choose A (in scenario 1) is not so much the size of the payout but the certainty of it.
Indeed when I asked this question at work, one guy worked out the exact expected values and then chose A for scenario 1 and B for scenario 2!
David Pinto:
The problem is you’re asking a question about odds and expecting people to compute the expectations
No I’m not.
While not quite apples and oranges, the two questions are arguably quite different. In Q1 the choice is between certainty and an extremely low risk. Those who chose A are likely very risk averse. Then in question two you are asking a very risk averse person to choose between two very high risk gambles. Given that they are in what they consider an untenable position (an excessive level of risk), it would seem entirely rational to choose the larger payoff. For a very risk averse person the difference between a 90% and an 89% chance of loss is likely irrelevant to their decision, hence the choice of B.
Expected value is a pretty counter-intuitive (and borderline invalid) way of looking at things, anyway.
Consider the following options:
A = a 1/50 chance of winning $1.00 if you pay 1 cent.
B = Being given 1 cent.
A so-called “rational person” is supposedly indifferent between these two options. But in the real world, 1 cent is such a trivial amount of money that people are completely indifferent to Option B even if there is no Option A.
On the other hand, virtually everyone will choose to go along with Option A even if there is no Option B because the cost is *EFFECTIVELY* zero, and “some chance” to win $1 is better than “no chance” to win $1.
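The indifference claim here can be checked with exact arithmetic (a sketch of my own; values in cents to avoid rounding):

```python
from fractions import Fraction

# Option A: pay 1 cent for a 1/50 chance at 100 cents.
ev_a = Fraction(1, 50) * 100 - 1
# Option B: simply be handed 1 cent.
ev_b = Fraction(1)
assert ev_a == ev_b  # both are worth exactly 1 cent in expectation
```

So the textbook expected-value view really does call the two options equivalent, which is the commenter's point of departure.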
There are all kinds of things going on here, in terms of personal, subjective utility. Right now, a person might not want to worry about the game and choose neither option. In 2 hours’ time, that same person may decide to play the game for reasons unrelated to the utility of the actual payoff. Tomorrow, that same person may come to a completely opposite conclusion based on a new take on their own utility. Finally, if that person were held at gunpoint and ordered to choose in no more than 3 seconds under threat of being shot dead, they might act opposite to what they would do if they think it’s just an interesting hypothetical blog post.
Some of these things can be modelled, and some can’t. One thing is for sure, you can’t fit the whole range of human preferences into an expected value model. They do indeed change all the time, depending on a variety of factors.
John has a minor quibble with conclusion 5, so I reword it.
5. People are rational but some versions of economic utility theory make a bizarre technical assumption that Steve prefers not to state, and that assumption is contrary to rational choices that people make.
I had not heard this story about my father before, and believe your readers might be interested in the following quote about his view of rationality from my book on The Flaw of Averages:
=========== From The Flaw of Averages p 111
He put it this way on page 16 of The Foundations of Statistics:
“The point of view under discussion may be symbolized by the proverb ‘Look before you leap,’ and the one to which it is opposed by the proverb, ‘You can cross that bridge when you come to it.’”
The theory is based on the principle that people will correctly assess the uncertainties facing them (look) and make rational decisions (leaps) so as to maximize their expected gain. But my father understood that life was so complicated that this could not easily be adhered to in practice. A few sentences later he writes:
“It is even utterly beyond our power to plan a picnic or to play a game of chess according to this principle.”
But he believed it was at least a good starting place for a theory of decision making. In point of fact, he has been proven wrong in the particular case of chess; today, computer programs are the reigning champions. So score one for rationality. But when it comes to decision making under uncertainty by humans using their bare brains, experiments show that even sophisticated people often behave irrationally. For example, when subjects are presented with a hypothetical medical decision between one procedure that has a 90 percent chance of saving a patient’s life and another procedure that has a 10 percent chance of letting the patient die, they generally prefer the first, even though they are mathematically equivalent.
I myself don’t believe that all decisions can or even should be made rationally. Creating art, for example, requires at least tacit decision making, yet art that springs from rationality instead of emotion is contrived. My position is that decisions are made using the seat of the intellect at one extreme and the seat of the pants at the other and that the best decisions are those upon which both extremities agree.
Ron:
In Q1 the choice is between certainty and an extremely low risk. Those who chose A are likely very risk averse. Then in question two you are asking a very risk averse person to choose between two very high risk gambles.
In other words, you’re completely missing the point.
Roger Schlafly:
Savage’s axioms are more abstract than this, but all that’s needed for this discussion is the von Neumann/Morgenstern axioms:
1) Given two lotteries L and M, either you prefer L to M or you prefer M to L or you are indifferent between L and M
2) If you prefer N to M and M to L, then you prefer N to L
3) If A is a lottery that returns L with probability p and X with probability 1-p, and if B is a lottery that returns M with probability p and X with probability 1-p, and if you prefer L to M, then you’ll prefer A to B.
There’s also a continuity axiom which isn’t needed for the present discussion.
Which of those axioms do you find bizarre and technical?
I meant 4 in my earlier post.
I am one who selects 1A and 2B, some 25 years after first learning of the Allais paradox. Heck, I made those choices when this was presented on 12Oct, knowing that SL was testing us for the paradox. Maybe I value ignorance.
Yet, consider the following changes. I am a hypothetical medical intern. The outcomes are: $0 is patient death, $1 mil is survival with “problems” (e.g., removal of a limb), and $5 mil is complete recovery. I imagine the intern selecting 1A and 2B. But, of course, there is a simple explanation: the intern does not value patient death the same in both lotteries. If the patient dies in 1, it is known the intern could have saved him. Not so in 2. Of course, that suggests that the utility to the intern of patient death does depend on whether the lottery is 1 or 2. That makes sense, as a patient death in 1 might cause termination or a lawsuit, while a patient death in 2 might not. (This assumes, of course, that selecting 2B doesn’t cause problems for the intern when the patient doesn’t die, as the complete recovery reveals the intern’s embracing the increased chance of death. This may not hold for termination, but works well for lawsuits.) I don’t think the intern meets 1, 2, 3, or 4 above.
In real life people never come to you with offers that disclose accurate probabilities. Things that are “a sure thing” or “almost guaranteed” are very different from propositions that carry no risk due to a legally enforceable claim against an entity backed by a well capitalised insurer.
When someone tells you “this is 90% guaranteed” you derate that into a private estimate that it is 50:50 or 60:40. Even a 99% guarantee gets derated to 90%. If you think the other person boosted his declared confidence from 90% to 99% because he knew that you would be skeptical and knock his 90% down to 60%, you are going to knock his 99% down to 60%.
Option 1B involves a stated 1% chance of losing out altogether. In human terms that sets off alarm bells and raises red flags. One cannot help responding to 1B as some kind of trap in which the stated 1% will inevitably turn out to be a much larger probability.
Steve-
Tony Cohen in #2 had it exactly right. It’s about the assured payoff of $1 million.
It has nothing to do with ignorance. Lumping it into explanation 4 is like some evolution-denier coming up with an “answer” to every objection. Sure, they can *say* it’s due to reason N, but that doesn’t make it so. Science is about trying to find the actual truth, not smugly stuffing a fact into some category that one has decided in advance that it must fit into.
If Economics has not addressed the actual reason beyond those four categories for the 1A, 2B choice, then this is, at minimum, a paper waiting to be written and published.
Okay, I’ll make one more attempt to substitute for the two questions.
Question 1: You will die within a week without an operation.
Which would you choose:
A. An operation that will certainly cure you.
B. An operation that will cure you, plus has a 10% chance to add four years to your life, but has a 1% chance of killing you.
Question 2: You will die within a week without an operation. Which would you choose:
A. An operation that will cure you, but with an 89% chance of dying on the operating table.
B. An operation that will not only cure you but will add four years to your life, but with a 90% chance of dying on the operating table.
I suppose I should make my argument on the above explicit. There is a qualitative difference in the choice between certainty and uncertainty as compared to choosing between a low-probability outcome vs. another slightly lower probability outcome.
David: I think the medical intern is acting this way due to ignorance (#3). Suppose that your scenario is revised to remove the ignorance factor: because the patient is allergic to standard types of anesthesia, a nonstandard type of anesthesia must be used. Assuming the anesthesia works on the patient (which happens 11% of the time), the doctor can choose between procedure A and procedure B, where procedure A cures the patient “with problems” 100% of the time, and procedure B leads to full recovery with 10/11 probability, and death with 1/11 probability.
Now consider 2 cases: In case 1, if this peculiar type of anesthesia does not work (89% probability), it instead actually allows the patient to recover “with problems” (due to its side effects). In case 2, if the anesthesia does not work, it kills the patient.
Now it is clear that in case 2, if the patient dies because the doctor chooses procedure B, “it is known the intern could have saved him,” just as in case 1. Thus a lawsuit might be in order, just as in case 1.
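The mapping between the two anesthesia cases and the Allais probabilities can be checked mechanically. A small sketch of my own (the outcome labels and the `overall` helper are invented for illustration):

```python
from fractions import Fraction as F

# Outcome labels stand in for the dollar prizes: 'death' ~ $0,
# 'problems' (survival with problems) ~ $1M, 'full' recovery ~ $5M.
works = F(11, 100)   # the nonstandard anesthesia works 11% of the time

# Given working anesthesia: procedure A always cures "with problems";
# procedure B gives full recovery 10/11 of the time, death 1/11.
proc_a = {'problems': F(1)}
proc_b = {'full': F(10, 11), 'death': F(1, 11)}

def overall(procedure, on_failure):
    """Total outcome distribution: the procedure with probability 11%,
    the anesthesia-failure outcome with probability 89%."""
    dist = {on_failure: 1 - works}
    for outcome, p in procedure.items():
        dist[outcome] = dist.get(outcome, F(0)) + works * p
    return dist

# Case 1: failed anesthesia still yields recovery "with problems".
case1_a = overall(proc_a, 'problems')  # 'problems' for certain           -> Allais 1A
case1_b = overall(proc_b, 'problems')  # 89% problems, 10% full, 1% death -> 1B
# Case 2: failed anesthesia kills the patient.
case2_a = overall(proc_a, 'death')     # 11% problems, 89% death          -> 2A
case2_b = overall(proc_b, 'death')     # 10% full, 90% death              -> 2B
assert case1_a['problems'] == 1 and case2_b['death'] == F(9, 10)
```

So the two cases reproduce, outcome for outcome, the probabilities of Allais's questions 1 and 2.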
Ron: The fact that you can formulate two new questions to which one can consistently answer A/B does not change the fact that you can’t consistently answer A/B to the originals.
I think you’re wrong. The four options you give leave out what to me is the most obvious explanation, which I’ve already left in the form of comments on a couple of your earlier posts on this subject.
5. The theory is wrong. A perfectly rational person can prefer 1A and 2B without being mistaken in the sense you describe, without fluctuating preference, and without valuing ignorance.
I am with Cos. Axiom 3 is the one that appears to be contrary to the preferences of many people.
I was going to join this discussion earlier, but being a b/a makes me feel waaay out of the loop.
It’s interesting that most people seem to find your four (a fifth has been postulated) reasons mutually exclusive, including yourself when you say ‘I lean toward #4’.
I think all four are great reasons for irrationality. In fact I would remove the word maybe.
People don’t take surveys seriously.
People have no stable preferences.
I noticed that you included a time variable, but there are as many variables as there are people. The same traffic cop that will pull you over for going a fuzz over the speed limit in a red sports car will let a service truck zoom past his radar.
People value ignorance.
People make mistakes.
Mistakes are interesting as well. In the nuclear navy, we had a ‘zero defects’ policy. I believe Adm. McKee started it. The claim was that people’s behavior was determined by the importance they place on the process. After all, no one pulls into the wrong driveway when they are coming home from work. The thing is… someone is bound to pull into the wrong driveway while coming home from work.
I’ve also been wondering about the cash incentive. Money can’t buy you love. It is most certainly not among our basic needs. It seems like a useful tool for reducing trade barriers, but that’s about it.
#6: maybe people don’t really care about cash incentives.
Ron, Cos, and others of like mind:
If you care to continue this discussion, I think it would be very helpful to pin down the locus of what is either our disagreement or your confusion. To that end, it would be very helpful if you can answer a couple of questions:
1) My rational friend Frieda prefers cats to dogs. Do you agree that her rationality requires her to prefer A) an 11% chance of a cat to B) an 11% chance of a dog?
2) My rational friend Frieda prefers cats to dogs. She is given a choice between A) an 89% chance at a million dollars and an 11% chance of a cat; B) an 89% chance of a million dollars and an 11% chance of a dog. Do you agree that her rationality requires her to prefer A to B?
3) I have here a Blue Lottery Ticket, which has a 10/11 chance of returning five million dollars and 1/11 chance of returning nothing. My rational friend Bob prefers a sure million dollars to a Blue Lottery Ticket. Do you agree that his rationality requires him to prefer A) an 11% chance of a sure million to B) an 11% chance of Blue Lottery Ticket?
4) My rational friend Bob prefers a sure million to a Blue Lottery Ticket. I am planning to draw a ball from an urn. If the ball comes up red, Bob gets a Mystery Prize. If it comes up black, he gets his choice between a sure million and a Blue Lottery Ticket. Do you agree that his rationality requires him to choose the sure million? And do you agree that this answer is independent of the value of the Mystery Prize?
When I say “Do you agree that her rationality requires….?”, what I mean is “Do you agree that a reasonable definition of rationality would entail this?”
If I knew your answers to these questions, I think I could either A) respond much more helpfully to your confusion (assuming you are, as I believe, confused) or B) start to understand why you are not confused after all.
Ron: I think I spoke too soon in my response to your two new questions. I misunderstood them and thought they were not parallel to the Allais questions. I now see that they are in fact parallel, so of course the same arguments apply.
Which do you prefer: a 10/11 chance of adding 4 years to your life, together with a 1/11 chance of dying? Or maintaining the status quo? If the former, you should answer B to both questions. If the latter, you should answer A to both. Because 89% of the time your choice won’t matter anyway, and the other 11% of the time, you are going to get either A) a 10/11 chance of adding 4 years to your life together with a 1/11 chance of dying or B) the status quo. You should pick the one you prefer. Both times.
Steve, when you argue that Frieda cannot be rational, you are saying that there can be no good reason to support her choices. That seems unlikely to me. Suppose as you say, that Frieda prefers cats to dogs. But suppose that public demand for dogs is greater, so that she might get a dog, sell it, buy a cat, and have money left over. So for the purpose of this analysis, is Frieda allowed to trade her assets on the market?
Roger, you are, I think, equivocating in your use of the word “prefer”. She prefers to own a cat, but would prefer to be given a dog because she can trade it for a cat and some money. But does that really affect the answer? If she would prefer to be given a dog because she could trade it for a cat and some money, then surely she would prefer the lottery which gave her an 11% chance of a dog to one which gave her an 11% chance of a cat.
Does anyone deny that all 4 of the explanations for the Allais paradox represent human behavior in different circumstances? The fields of marketing, psychology, neuroscience, evolutionary biology, and of course behavioral economics all deal with variations of these themes.
The assumption of human rationality can be a helpful simplifying assumption, but its utility in any particular model or prediction requires serious scrutiny.
I have finally found a form of words that I find convincing on a “basic understanding” level. If I put it this way, it feels right to me. Apologies to all who thought this obvious.
There are 2 sacks labelled A and B. I can choose which one I draw from.
Q1: A) 11 balls, all $1M. B) 10 balls of $5M and 1 zero.
Q2: A) 89 balls $0, 11 balls $1M. B) 90 balls $0, 10 balls $5M.
These are the usual questions.
Say I pick 1A/2B.
You say that I only have about a 10% chance of getting to make the draw. Do I stick to the same answer? Now, some people have argued that this may change their choice, but for me the answer has to stay the same. So for Q1 it is still A.
Whether I get to make the draw will be decided by drawing from a “do I get to draw” sack: 89 balls marked “No Draw” and 11 balls marked “Draw” are placed in this second sack. If I pick a “Draw” ball, I then get to take a ball from the Q1 sack.
Hang on, I say, there are 11 balls in the Q1 sack, and 11 “Draw” balls in the “do I get to draw” sack. Why don’t I save time and put the 11 balls from the Q1 sack into the “do I get to draw” sack in place of the “Draw” balls? I can clearly see the outcome is the same as having 2 sacks. So my preferred option is 89 “no draw” balls and 11 $1M. I prefer this to 89 “no draw”, 10 $5M and 1 zero.
Excellent idea, you say. Now, let’s have a look at Q2. The two sacks contain either 89 “nothing” balls and 11 $1M, or 90 “nothing” and 10 $5M. You prefer B. But in fact this is exactly the same draw as Q1, with “zero” and “no draw” amounting to the same value. How come you prefer B in this question? Ah! I say, I made a mistake. When you put it this way, I can really see why the two questions are the same. Actually, I prefer A/A.
Re-labelling the 89 balls “no draw” instead of “zero” has allowed me to really grasp this in a way that I find totally acceptable. I am therefore prepared to swap from B to A.
However, before figuring this out, even after working it through several times, and intellectually accepting that they were the same, Q2B still felt more attractive than Q1B.
Leonard Savage was able to “see” and understand the mistake after a quick look. It has taken me a few days. As well as intellectually grasping it, it now actually makes sense!
The moral is, even if you explain until you are blue in the face, these “mistakes” are going to persist. In many ways, a persistent mistake is the same as option 2, inconsistent preferences.
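The sack relabelling above is easy to check mechanically. Here is a minimal Python sketch (the ball counts are the ones from the comment; treating a “$0” ball and a “no draw” ball as the same outcome is exactly the relabelling step):

```python
from collections import Counter

# Q1 sacks merged with the 89 "No Draw" balls, as the comment proposes
q1a_combined = Counter({"no draw": 89, "$1M": 11})
q1b_combined = Counter({"no draw": 89, "$5M": 10, "$0": 1})

# Q2 sacks as stated, with $0 written as "no draw"
q2a = Counter({"no draw": 89, "$1M": 11})
q2b = Counter({"no draw": 90, "$5M": 10})

print(q1a_combined == q2a)  # True: the A draws are identical

# Relabel the lone "$0" ball in the combined Q1-B sack as "no draw"
q1b_combined["no draw"] += q1b_combined.pop("$0")
print(q1b_combined == q2b)  # True: the B draws are identical too
```

Once the lone zero ball is relabelled, the sacks match exactly, which is the whole point of the argument.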
Question #3 is where we differ.
“Do you agree that his rationality requires him to prefer A) an 11% chance of a sure million to B) an 11% chance of Blue Lottery Ticket?”
By definition, an 11% chance means that it’s not a sure million. It’s simply a million. Otherwise, the first half of the question would need to be restated: I have here a Blue Lottery Ticket, which has a 10/11 chance of returning a sure five million dollars and a 1/11 chance of returning nothing.
In both cases, once chance comes into play, “sure” is not an appropriate descriptor. That’s not a sure 5 million; it’s a luck-dependent 5 million. The honest use of the word “sure” means that the expected value is equal to the prize amount. The injection of the element of chance eliminates certainty.
Is someone who prefers soup when it is cold and salad when it is hot considered to have an unstable preference? If not, how is this different from someone who has different preferences at 2:00 and 2:30?
Thomas Purzycki: I would guess not. I think that means not enough information. Someone who sometimes prefers soup when it’s cold and sometimes when it’s hot has unstable preferences.
Ron:
Okay. So here I have an envelope containing $1 million and a different envelope containing a Blue Lottery Ticket.
I ask my friend Bob: I’m going to give you an envelope. Which envelope would you rather have? He says “the first, definitely.”
Then I change my mind and say: Actually, there’s only an 11% chance I’m going to give you an envelope. *If I do give you one*, which would you rather have? He says “the second, definitely”.
Just to be sure — this is the behavior that you’re happy to call rational?
What is everyone missing?
Let:
A = 1,000,000
B = .89*(1,000,000) + .10*(5,000,000)
C = .11*(1,000,000)
D = .10*(5,000,000)
These are just Steve’s “lotteries” in expected value.
Suppose you prefer A to B (write “A pr. B”, where “pr.” means “preferred to”)
A pr B implies 1,000,000 pr .89*(1,000,000) + .10*(5,000,000)
which implies that
.11(1,000,000) pr. .10(5,000,000) [just subtracting .89*(1,000,000) from each side]
But this is just saying that C pr. D, right? Of course, everything would be the same if B pr. A.
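The subtraction step above checks out numerically. A quick sketch using exact dollar amounts (whether expected values are the right objects to compare is a separate question):

```python
# Expected values, in dollars, of the four "lotteries" listed above
A = 1_000_000            # sure million
B = 890_000 + 500_000    # .89*1,000,000 + .10*5,000,000 = 1,390,000
C = 110_000              # .11*1,000,000
D = 500_000              # .10*5,000,000

# Subtracting the common .89*1,000,000 term from each side of the
# A-vs-B comparison leaves exactly the C-vs-D comparison
common = 890_000
assert A - common == C
assert B - common == D
print("A vs B reduces to C vs D")
```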
If your girlfriend asks you whether you’d prefer to go to a baseball game or a football game on your birthday, which is in a month, does your answer depend on the probability of a breakup before your birthday?
I think it is clear that people’s risk/reward preferences change based on the expected payoff. My guess is that all the people who chose ‘irrationally’ chose 1A and 2B, indicating that as the payoff increases, people are willing to tolerate a higher amount of risk. I am not familiar enough with Professor Savage’s work to say whether this is rational, but it seems to be what is happening.
Mike: The questions ask which lottery you prefer, not which expected value you prefer.
Jonathan Campbell:
If your girlfriend asks you whether you’d prefer to go to a baseball game or a football game on your birthday, which is in a month, does your answer depend on the probability of a breakup before your birthday?
Maybe, actually. Suppose that a) I prefer football. b) If I choose football, I will begin looking forward to it. c) Once I’ve started looking forward to it, the pain of not going becomes greater. Whereas d) with baseball, I tend not to look forward to it so much.
I can imagine, in these circumstances, that I’d choose football if I’m sure we’ll be together and baseball if I think there’s a good chance we’ll break up.
But I don’t think this applies to the Allais questions, where the payoffs are (presumably) made immediately after you make your choices, so we don’t have to worry about what happens during significant periods of anticipation.
Steve-
Yes, I call the two choices about the envelopes rational. However, I don’t do it happily.
Here’s the logic: a guaranteed certain high payoff can be worth more than a contingent higher payoff. This bit of wisdom was codified by Ben Franklin: “A bird in the hand is worth two in the bush.” A speculative payoff puts the million dollars into the figurative bush. The B answer to the second set of conditions is where expected value becomes more important.
If you want to see the responders to the original problem lose most of their apparently illogical choices, introduce contingency to all choices. e.g.:
A. A 99% chance of a million dollars.
B. A lottery ticket that gives you an 88% chance to win a million dollars, a 10% chance to win five million dollars, and a 2% chance to win nothing.
Steve: Fine, let’s say the game is starting in 1 minute, and your girlfriend has a “beam me up Scotty” machine that will allow you to arrive on time, but that your relationship is extremely volatile.
But actually, I’m not sure you need any significant anticipation time for your objection to apply. Suppose your friend Bob prefers $1M to the Blue Lottery Ticket (BLT) but that, due to his psychological makeup, once he has committed to choosing $1M, he will be extremely disappointed if he does not get it. Thus he prefers an 11% chance of a BLT to an 11% chance of $1 million. Why is this any more irrational than the guy in the football/baseball case? Once we allow that the very act of making a decision affects a person’s mood in such a way that his future utility assignments change, I think even the 1A/2B decision in the original question must not be considered irrational.
Ron:
Yes, I call the two choices about the envelopes rational. However, I don’t do it happily.
In that case, I think we have identified exactly where we part ways.
Once again: I offer you two envelopes and ask which you prefer. You say the first. I then backtrack and say that now there’s only an 11% chance I’ll give you an envelope and ask which you prefer to get *if* I give you one. You say the second.
This is precisely the behavior that you’re willing to call rational and I’m not. From the fact of your reluctance, I infer that you at least understand why I (and most economists) are uncomfortable calling this rational. So I now believe we’ve effectively communicated.
Jonathan Campbell:
Once we allow that the very act of making a decision affects a person’s mood in such a way that his future utility assignments change, I think even the 1A/2B decision in the original question must not be considered irrational.
Yes, I think this is the case actually. Though I am much less willing to countenance mood changes that take place over short intervals than mood changes that take a while to build up. Still, I suspect maybe this means I should have had a fifth item on my list of possible conclusions.
Jonathan Campbell:
I wrote (in response to you):
Though I am much less willing to countenance mood changes that take place over short intervals than mood changes that take a while to build up.
Let me elaborate.
If I know today that I’m going to the ball game in a month, then I can start making appropriate lifestyle adjustments today — buying the special cap that is only appropriate for baseball games, brushing up on my scorekeeping skills, etc. Over shorter periods there are fewer such options available.
Jonathan Campbell: I was attempting to introduce the simple notion that the requirement of AA or BB choices rests on the 3 outcomes having the same utility in each lottery. My intern acted differently because the utility of the zero-equivalent outcome (patient death) was different under the two lotteries.
Imagine the two lotteries presented by SL above. My preference would be BB. However, if I selected 1B and lost, my wife would leave me for “taking such a silly gamble.” If I lost in lottery 2, my wife would never know I chose 2B, and I could remain happily married (pretending I chose 2A if I lost lottery 2). Thus, I choose AB because the zero outcomes have a different utility in each lottery.
I wonder if the results would be more consistent if you asked how much people would pay for each option rather than simply which one they’d prefer.
Steve: Broadly, I agree, although I think (particularly when we are dealing with sums of money this large) that moods can change very fast, and psychological attachments can play an important role. One way to get around this in the original problem is to tell the subject that he will be given a pill which induces retrograde amnesia between making the decision and receiving the payoff. Or perhaps tell the subject that the payoff will go to his mother, or a good friend, rather than himself, so that he will not have to worry about psychological attachments.
I do agree with your issues with my baseball/football example. I just think it is helpful to use examples that cause a person to viscerally conceive of the potential irrationality of a mixed response, and I think that examples that deal only in sums of money and probabilities may be too abstract for this.
A simpler explanation might be to consider assumed uncertainty in the probability estimates: in 1A you’re told it’s a certain million dollars, while 1B adds an “11%” chance of playing a game that has “10:1” odds of $5M or nothing.
Compared against that, question 2 gives you a 90%/89% chance of nothing versus either a 100% chance of $1M or a 100% chance of $5M.
In the event that the probabilities as stated were approximations and might vary independently, I think it’s fair to suspect that the “90%” versus “89%” odds might be statistically indistinguishable, and then just look at the payoffs, which leads to choosing 2B, even if you were risk averse and chose 1A previously.
OTOH, if the game is structured so that there’s an initial 89% chance of nothing, followed by either a 100% chance of $1M or 10:1 odds of $5M, then the 89%/90% odds don’t vary independently, so having chosen 1A would imply choosing 2A as well.
Jonathan Campbell:
I just think it is helpful to use examples that cause a person to viscerally conceive of the potential irrationality of a mixed response
I agree, and despite my nitpick, I actually think your baseball/football example does an excellent job of this.
David Wallin: So in that case it seems that the key is your preference for your wife’s ignorance, rather than your own.
Let me propose a football parallel.
Coach X is down by 3 points with :01 left in the game.
1. Would he prefer:
A) a 100% chance of a field goal (tying the game)
OR
B) a 40% chance of a touchdown (winning the game)
2. Would he prefer:
A) a 40% (100*0.4) chance of a field goal
OR
B) a 16% (40*0.4) chance of a touchdown
Coach X chooses 1A because a guaranteed tie is better than a likely loss; he chooses 2B because, since he is likely to lose either way, he wants to take a chance at a win.
Is this irrational? Is this scenario different in some meaningful way than the money or dogs/cats scenarios already discussed?
Todd, that certainly seems irrational. In fact, either it seems more obviously irrational than the original example, or I have been thinking about this problem for so many days now that the whole issue has clarified in my mind.
Look at it like this: in scenario 1, if the tie is so valuable that it’s not worth giving up a 60% chance of it to go for the even more valuable win, then in scenario 2 the 40% chance of a tie must be so valuable that it’s not worth giving up 60% of that 40% chance to go for the win.
Try putting it into various real-world situations. e.g. this is in a knockout competition and if the game is a tie then extra time is played. First we’ll assume you have a 50% chance of winning the game in extra time. Here it is right to go for the tie in each case. In case 1 that gives you a 50% chance of winning the game as against 40% if you try for the touchdown. Whereas in case 2, going for the field goal gives you a 20% chance of the win against 16% if you try for the touchdown. However, if the team is exhausted and you reckon your chance of winning in extra time is only 30%, then you’ll go for the touchdown in both scenarios (30% v 40% in scenario 1; 12% v 16% in scenario 2).
Or imagine this is a league game where you get 2 points for a win and 1 for a tie and 0 for a loss. What possible position can we be in that we would gamble for the 2 points in scenario 2, but we wouldn’t gamble for them in scenario 1? I certainly can’t think of one.
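The extra-time arithmetic above can be sketched in a few lines of Python. Here `p_extra`, the assumed chance of winning in extra time, is a free parameter not part of Todd’s scenario; the 40% figures are his:

```python
def p_win(scenario, go_for_td, p_extra):
    """Chance of ultimately winning the game.

    scenario 2 scales every outcome by the 40% chance of it mattering;
    a made field goal forces a tie, then extra time decides the game."""
    reach = 1.0 if scenario == 1 else 0.40
    if go_for_td:
        return reach * 0.40          # touchdown wins outright 40% of the time
    return reach * p_extra           # sure field goal, then extra time

for p_extra in (0.5, 0.3):
    for scenario in (1, 2):
        fg = p_win(scenario, False, p_extra)
        td = p_win(scenario, True, p_extra)
        print(p_extra, scenario, "FG" if fg > td else "TD")
```

For any fixed `p_extra`, the better choice is the same in both scenarios (field goal when `p_extra` is 0.5, touchdown when it is 0.3), matching the argument above: the scaling never flips the ordering.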
Todd: It seems to me that your example is perfectly parallel to the cat/dog example, and that the coach is irrational.
Anthony Towns: This is indeed an interesting new angle on the problem.
OK. Here’s a crack at it…
Your analogy of preferring cats to dogs does not hold in this particular case. The issue is that you have included probability in both the problem statement and the “cat” or “dog”. The result is that you have introduced a new variable into the decision on which a rational person can base their judgement: the sure thing.
Let’s say I have two extreme values. On the one hand I value the sure thing — just make sure I score that million. On the other hand, when forced to gamble, I prefer the highest expected value. These two preferences lead to 1A and 2B — rationally.
This is a true story. Really.
I was driving today for about 9 hours with my son, who is a freshman in college. Given so much time to chat, I asked him which he would prefer: one million dollars, or an 80% chance to win five million dollars. He went for the million, and said it wasn’t worth risking a sure thing to go for more.
Then I asked if he would prefer a 20% chance to win one million dollars, or a 16% chance to win five million dollars. He went for the chance at five million dollars, and said he was likely to lose in either case, so why not go for the big bucks?
I then asked him to imagine entering a room with 100 people from which 20 were selected at random and offered the first choice. “If you were selected,” I asked, “which would you choose?” He said “I’d take the sure million, just like I told you before.” I asked if I entered the room on his behalf, would he instruct me to go for the sure million in the event I was asked. He said “of course.”
I said, “Okay, you prefer a 20% chance to win one million dollars to a 16% chance to win five million dollars, right?” He said, “No, I told you I want the sure million dollars.” I said, “What are the chances I’ll be selected to make a choice on your behalf?” He said “Twenty percent.” I said, “Then what are the chances I will win one million dollars on your behalf?” He said “Twenty percent.” Then a light came on.
He still thinks he should prefer a 16% chance to win five million dollars over a 20% chance to win a million dollars, but it is clear to him that that is inconsistent with his preference for a sure million over an 80% chance for five million. He doesn’t know why these preferences should be inconsistent, but he sees that they are. And he wants to learn more about this topic. Unfortunately, he is not a student at the University of Rochester.
Not once did either of us use the word ‘rational’.
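The compounding in the story can be written out explicitly (integer percentages, to keep the arithmetic exact):

```python
p_selected = 20   # percent chance of being picked from the room of 100
p_sure = 100      # instruct "take the sure million"
p_gamble = 80     # instruct "take the 80% chance at five million"

p_win_1m = p_selected * p_sure // 100     # chance of winning $1M, in percent
p_win_5m = p_selected * p_gamble // 100   # chance of winning $5M, in percent
print(p_win_1m, p_win_5m)  # 20 16
```

The room setup reproduces exactly the 20%-versus-16% question the son had already answered the other way, which is what turned the light on.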
Thomas Bayes: Re your dialogue with your son — what a great way to explain this paradox! I might steal it.
Hmmm… having had a night to think about my post, I would have to say that it’s not correct — really both questions, eventually, offer “sure things”. The questions are phrased in such a way as to lead folks to a different paradigm for evaluating the options when presented in question #2. I think that the issue is #4 on the “Conclusions about Allais’s Paradox”.
For myself, I chose 1B and 2B. I just thought I could understand the “logic” (thinking?) used to go 1A and 2B.
I have been intermittently following this discussion for the last week and would like to make four comments.
Comment 1:
My choices are 1B & 2B.
Comment 2:
I particularly enjoyed Prof. Landsburg’s envelope example. I found it more vibrant than the cat & dog stuff, and even the original question.
Comment 3:
I’m having trouble reconciling two of Prof. Landsburg’s statements.
Statement 1 (from last weekend): “Claim Four: The desirability of a lottery depends only on the prizes and the probabilities of winning.”
Statement 2 (from this post, in response to a reader comment): “The questions ask which lottery you prefer, not which expected value you prefer.”
Doesn’t “only” (in statement 1) imply equality? E.g. a) the size of a cube depends only on its height, width, & depth; therefore b) for a cube, size = height, width, & depth (expressed as height x width x depth). If so, can’t we say: a) the desirability of a lottery depends only on the prizes and the probabilities of winning; therefore b) for a lottery, desirability = prizes and the probabilities of winning (expressed as prizes x probabilities of winning)?
Is there a difference between desirability and preference? Is there a difference between “prizes and the probabilities of winning” and “expected value”? If there isn’t, don’t those two statements conflict? I need help here.
Comment 4:
A person has a number of priorities when it comes to making decisions. I’m going to list three that may be applicable to the question under discussion. Priorities that are not applicable (which may rank higher or lower than those listed) are omitted.
1) In a range of outcomes, which the amounts at hand fall into, Person always prefers a certainty to a probability, no matter how “good” that probability is.
2) In a range of outcomes, which the amounts at hand fall into, and certainty is not an option, and Person is required to risk something, Person will only accept a probability that a) to Person’s best reckoning, has a greater than 50% chance of success, and b) has a positive expectation to Person. Person will not accept a probability that a) to Person’s best reckoning, has less than a 50% chance of success, or b) has a negative expectation to Person, no matter its chance of success.
3) In a range of outcomes, which the amounts at hand fall into, and certainty is not an option, and Person is not required to risk something, Person always prefers the probability that has the highest expected value.
This leads…
In regard to Question 1, Priority 1 is sufficient to cause Person to choose answer 1A. Priorities 2 & 3 do not apply; Priority 1 does.
In regard to Question 2, Priorities 1 & 2 do not apply: in Person’s best judgement, certainty is not an option, and a risk on Person’s part is not required. Therefore, Priority 3 applies, and is sufficient to cause Person to answer 2B.
Now, in so far as I understand Prof. Landsburg, given that Person answers 1A & 2B, Prof. Landsburg must, indeed is required to, regard Person, in this instance, as being irrational.
Which brings me to ask three questions:
1) Am I right in thinking that Prof. Landsburg must regard Person as irrational in this instance?
2) If I am right, is Prof. Landsburg right in regarding Person as irrational in this instance?
3) If Prof. Landsburg is right, what, in Person’s priorities, which lead directly to Person’s answers, is irrational?
I like Thomas Bayes example too. It made me realize a couple things.
One – The irrationality seems to derive from our brains not being able to get a concrete sense of the real difference between a 16% and a 20% chance of winning; those seem approximately the same. When I read the chances in this example and in the original, I had enough experience of losing that my brain referenced some of those examples and could well imagine losing in both cases, so, like Bayes’ son, why not go for the bigger amount?
Two – Blogs are a great way to help figure out where you might recommend your kids to attend college and what professors they should seek out.
I’ve written some Common Lisp http://paste.lisp.org/display/115804 to let me experience playing a lottery with an honest 1% chance provided by my computer’s random number generator. Initially I chose 1A, but having played 1B on the computer many times I feel comfortable choosing 1B.
I think the Allais Paradox results from people choosing 1a over 1b even though 1b is a much better deal than 1a, and I think this is the other side of the coin to overconfidence http://lesswrong.com/lw/jg/planning_fallacy/
People are used to being talked into doing things on the basis that there is only a 1% chance of it going wrong, and then it goes wrong. Indeed, as I sit here typing I look out over a construction site for the Edinburgh tram project, which is going to be trimmed because it has gone way over budget. What were the odds of that happening? Certainly much higher than the proponents (let on)/knew/(admitted to themselves).
@70 Alan Crowe-
Yes, playing it many times shows you what will happen in the long run. Just drop the prizes by a factor of 10 and let me play ten times, and I’ll choose option B every time. A sufficient number of trials gives the laws of chance time to work and bring you closer to the expected value.
The thing is, the problem is worded such that you play only once ever. If you choose B, the odds are strongly in your favor of receiving $1 million, and you may even receive $5 million. But that zero return is still an outside chance.
The expected value of option A is $1 million, and the expected value of B is $1.39 million. Since option A gives you the unconditional option to walk away with a million, the question you have to ask is whether the $390K increase in expected value is worth the chance of walking away with nothing. Aside from noting that choice B is worth more, you also need to consider whether this is a game you can afford to lose.
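Both points above, that repeated play approaches the expected value while a single play still risks the zero ball, show up in a quick simulation (a sketch; the 89/10/1 split is from the original question):

```python
import random

def play_B():
    """One draw of lottery 1B: 89% $1M, 10% $5M, 1% nothing."""
    r = random.random()
    if r < 0.89:
        return 1_000_000
    if r < 0.99:
        return 5_000_000
    return 0

random.seed(42)
plays = [play_B() for _ in range(100_000)]
print(sum(plays) / len(plays))      # close to the 1,390,000 expected value
print(plays.count(0) / len(plays))  # roughly 1% of single plays pay nothing
```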
One demonstration of the irrationality of 1A/2B is how such people attempt to rationalise their choice in scenario 2. The standard rationalisation seems to be along the lines of “since I am likely to lose in either case I will choose the option with the greatest payout”. But does that even make any sense?
Here’s another set of options (assume you prefer a steak to a cheese sandwich!):
Scenario 1: You are very hungry. Would you rather have
A) 100% of a cheese sandwich or
B) an 89% chance of a cheese sandwich, a 10% chance of a steak, and 1% chance of nothing.
Scenario 2: You are very hungry. Would you rather have
A) 11% chance of a cheese sandwich and 89% chance of nothing or
B) 10% chance of a steak and a 90% chance of nothing
Assuming that you don’t want to take the risk in scenario 1, does the rationalisation “Since I am likely to remain hungry in either case, I might as well go for the steak” really make sense in scenario 2?
Expected value of decisions (in millions)…
1a.) $1
1b.) $1.39
2a.) $.11
2b.) $.5
My question here is: didn’t the definition of “cat” change? We kept the value differential the same, granted. However, dog 2a is 11% of dog 1a. Cat 2b is not 11% of cat 1b.
And…
Is there anything rational about a comparison of the relative value between two utility statements? Yes, both questions have $.39 as the differential between A and B. However, choice B in #2 has an expected value roughly 4.5 times (about 355% more than) choice A’s. In question #1 the difference is only 39%.
Scott H:
Didn’t the definition of “cat” change?
No. “Dog” is consistently one million dollars. “Cat” is consistently a lottery ticket that delivers five million dollars with probability 10/11.
Jonah Lehrer has some interesting comments on the Allais paradox: http://www.wired.com/wiredscience/2010/10/the-allais-paradox/
Anthony Towns:
I think I understand what you are saying, but the reason I can’t fully accept this as a defense of 1A/2B is that it seems to me that we should be indifferent between a lottery with an 11% probability of receiving $1M, where that percentage is totally certain, and a lottery with a supposed 11% probability of receiving $1M, where that percentage is a statistical best estimate subject to uncertainty (or at least it is not clear how we can quantify our preference for one over the other).
For example, let’s say the probabilities of payoffs are not provided upfront (and you have no a priori basis for a guess), but rather you are told that the 2A lottery has been played 100 times by 100 different people, and 11 have won $1M. How much more/less valuable would this lottery be than a lottery where you are told the percent chance of winning is exactly 11%?
Let’s use a very simple model of uncertainty and say that for 2A, the winning probability P_A has a 50% chance of being 10%, and a 50% chance of being 12%. Similarly for 2B, P_B has a 50% chance of being 9% and a 50% chance of being 11%. In both cases our estimate of P is the average over the bimodal probability distribution. If P_A and P_B vary independently, then in reality there is a 25% chance associated with each of the following:
P_A P_B
10% 9%
10% 11%
12% 9%
12% 11%
It seems that if we look at each of these cases independently, our preference for a sure million over 10/11 odds of receiving $5 million does not provide us with sufficient info to compare the 2 lotteries. That, in my interpretation, is the case you are making. On the other hand, I don’t know how to reconcile this with my belief that we should be indifferent between a case where P_A is stochastic as described and one where P_A is fixed at 11%, which leads me to not understand exactly how the statistical uncertainty should affect our preference.
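The four-case table above can be averaged out directly (a sketch; the payoff amounts and bimodal values are the ones assumed in the comment):

```python
from itertools import product

# Bimodal uncertainty model: each winning probability takes one of two
# values with equal chance, independently of the other
P_A_values = [0.10, 0.12]   # best estimate 11%
P_B_values = [0.09, 0.11]   # best estimate 10%

cases = list(product(P_A_values, P_B_values))  # four equally likely worlds

ev_A = sum(pa * 1_000_000 for pa, _ in cases) / len(cases)
ev_B = sum(pb * 5_000_000 for _, pb in cases) / len(cases)
print(ev_A, ev_B)  # about 110,000 and 500,000
```

Averaging over the four worlds recovers exactly the best-estimate expected values, so this kind of symmetric uncertainty leaves the expected values unchanged, which is one way of stating the indifference intuition in the comment.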
I suspect our discussion is about to wind down on this topic. It’s been fun, though.
Here is how I will remember this discussion:
There are many people who prefer 1.A when given this choice:
1.A (0%, 100%, 0%) chance for ($0, $1M, $5M)
1.B (10%, 0%, 90%) chance for ($0, $1M, $5M)
If we give these same people this choice:
2.A (90%, 10%) chance for ($1M, lottery ticket 1.A)
2.B (90%, 10%) chance for ($1M, lottery ticket 1.B)
they will stick with the option that includes ticket 1.A. That seems reasonable to me.
But if they are given this choice:
3.A (90%, 10%) chance for ($0, lottery ticket 1.A)
3.B (90%, 10%) chance for ($0, lottery ticket 1.B)
they will now prefer the option that includes ticket 1.B. That seems unreasonable to me, because they would trade ticket 1.B for ticket 1.A if given the chance. So why not just select 3.A to begin with?
If it seems like I’m misrepresenting the preferences people have expressed, remember that these three choices can be rewritten as:
1.A (0%, 100%, 0%) chance for ($0, $1M, $5M)
1.B (10%, 0%, 90%) chance for ($0, $1M, $5M)
2.A (0%, 100%, 0%) chance for ($0, $1M, $5M)
2.B (1%, 90%, 9%) chance for ($0, $1M, $5M)
3.A (90%, 10%, 0%) chance for ($0, $1M, $5M)
3.B (91%, 0%, 9%) chance for ($0, $1M, $5M)
What has impressed me most about this discussion is that, even after this inconsistency is pointed out, many people still believe it is rational to select 1.A, 2.A, and 3.B. I guess this explains why the people who run casinos and lotteries can be so successful. Find a clever way to frame a choice, and you can cause people to do things they wouldn’t do otherwise. And even if someone points out your framing trick, the players will still go with their instincts. Fascinating.
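The rewriting claimed above can be verified by collapsing each two-stage lottery into a single lottery over ($0, $1M, $5M). A sketch using the approximate ticket probabilities from the comment (`collapse` is a hypothetical helper, not anyone’s published code):

```python
def collapse(stages):
    """Collapse a two-stage lottery into one lottery over the three prizes.

    stages: list of (probability, ticket) pairs, where a ticket maps
    each prize to its probability."""
    out = {"$0": 0.0, "$1M": 0.0, "$5M": 0.0}
    for p, ticket in stages:
        for prize, q in ticket.items():
            out[prize] += p * q
    return {k: round(v, 2) for k, v in out.items()}

t1A = {"$1M": 1.0}                # the sure million
t1B = {"$0": 0.10, "$5M": 0.90}   # the risky ticket, as approximated above
cash = {"$1M": 1.0}
nothing = {"$0": 1.0}

print(collapse([(0.9, cash), (0.1, t1A)]))     # 2.A -> (0%, 100%, 0%)
print(collapse([(0.9, cash), (0.1, t1B)]))     # 2.B -> (1%, 90%, 9%)
print(collapse([(0.9, nothing), (0.1, t1A)]))  # 3.A -> (90%, 10%, 0%)
print(collapse([(0.9, nothing), (0.1, t1B)]))  # 3.B -> (91%, 0%, 9%)
```

The four collapsed lotteries match the rewritten table line for line.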
One of the commenters, Todd, made a reference to the tendency of football coaches to make irrational decisions. I once read a careful analysis by some statisticians at Stanford in which they argued that coaches should go for the touchdown on 4th-and-goal much more than they do. Coaches, however, typically go for the ‘sure’ field goal. When a coach was asked what he thought about the professors’ suggestion that coaches should go for the touchdown more often, he replied with something like “That’s why they are professors at Stanford and I am a professional football coach.” I love that response.
Is there anything to the possibility that the ratio of expected values is causing a shift in risk tolerance? Trading away certainty may not be worth a 39% increase in expected value, but lowering the winning odds slightly for nearly five times the expected value is clearly a better choice for most people.
If you’re not doing the math, the second question presents a clearly better option for anyone but the most risk averse; the first isn’t as clear.
In full disclosure, I answered B/B, but only after stopping to think about the expected value of the first question (the extra $390,000 was easily worth a 1% chance of nothing).
Real-world tests of this could easily be run by state lottery departments: since there’s a potential irrationality to exploit, they have an interest, in that such scratch tickets might be even more profitable.
@Steven “In other words, you’re completely missing the point.”
Despite being a bit new here, I humbly suggest the contrary.
Economics in the real world should be looking at how real people really behave with real decisions.
Getting a “sure thing” is far more real, and testable, than most varieties of odds/payoff differences.
Plus the $1 mil. is about 20 years of work for an average guy: life-changing if one can get it.
$10 or $100 payoffs might well not be consistent.
#1 — surveys can’t really be taken as seriously as actual decisions.
1b) how the probabilities are implemented will impact the answers.
1c) payout sizes can change preferences; in particular, big payout vs. very big payout may not be appropriate for investigating rationality.
1d) actual experience of others will impact the answers.
6) At extremes of uncertain probability (greater than 90% or less than 10%), risk aversion preferences may rationally dominate expected value comparisons.
I’m still a 1A, 2B person, and still believe it is rational.
I think Savage should have changed from 1A to 1B if he really thought there was a mistake, but again this is less clear. Both $1m and $5m “change your life”, so going from 2B 10% to 2A 11% makes sense.
Were I to really believe 1B, as in see 8 people win $1M and 1 person win $5M, I might switch to 1B. But that would be after I really believed such odds. I believe the $1M for sure, as in for sure.
6) At extremes of uncertain probability greater than 90% or less than 10%, (the arrow-brackets were taken as html.)
Tom Grey:
6) At extremes of uncertain probability (greater than 90% or less than 10%), risk aversion preferences may rationally dominate expected value comparisons
In other words, you are still completely missing the point.
Thanks for your answer Steve, but I’ll continue to disagree.
Your explanation to Ron from an earlier thread bears repeating:
>>>
Ron: “What I’m saying is that the payoffs, expressed in utils for a given person, could make a 1:A 2:B perfectly rational.”
And this is in fact precisely wrong.
Let x be the number of utils associated with 5 million dollars.
Let y be the number of utils associated with 1 million dollars.
Let z be the number of utils associated with 0 million dollars.
To choose A over B in question 1, it must be the case that y > .89y + .10x + .01z
To choose B over A in question 2, it must be the case that .11y + .89z < .10x +.90z
<<<
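Steve’s algebra can be pushed one step further: subtract .89y from both sides of the first inequality to get .11y > .10x + .01z, and cancel .89z from both sides of the second to get .11y < .10x + .01z; no assignment of utils can satisfy both. A quick random search (a Python sketch, not a proof) illustrates the contradiction:

```python
import random

def prefers_A_in_Q1(x, y, z):
    # A over B in question 1:  y > .89y + .10x + .01z
    return y > 0.89 * y + 0.10 * x + 0.01 * z

def prefers_B_in_Q2(x, y, z):
    # B over A in question 2:  .11y + .89z < .10x + .90z
    return 0.11 * y + 0.89 * z < 0.10 * x + 0.90 * z

random.seed(0)
for _ in range(100_000):
    # x, y, z: utils for $5m, $1m, $0 -- any values at all, no ordering assumed
    x, y, z = (random.uniform(-100, 100) for _ in range(3))
    assert not (prefers_A_in_Q1(x, y, z) and prefers_B_in_Q2(x, y, z))
print("no utils found that rationalize the 1A/2B answer pair")
```

Whatever utils the search draws, at most one of the two preferences ever holds, which is exactly Steve’s point: the 1A/2B answer pair cannot be squared with any utility assignment.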
Because Q2B has a higher expected value as a gamble, it’s clear I’d go with it, and I’d do the same at other amounts: $100,000, $10k, $1k, $100, all for various gambles.
Q1A would not be my choice at $10, nor at $100, nor possibly even at $1k.
Insofar as economics is about Human Action in reality (not surveys), there are essentially no examples of people actually being offered Q1, so there is no data on actual A vs. B choices. (Unless you have some examples? When, in a life choice, is there ever a “sure thing”?)
If we AB-type people don’t conform to the current “rationality” axioms, then those axioms (about how rational people really act) aren’t good enough to support true theories of how real people behave.
But the existence of AA and BB people, possibly even a majority, means those axioms are good enough to be useful for many, maybe most.
Most real-life sure things are the prices paid: for a ticket, a car, a house, a spouse. You don’t know what you’re getting till you have it. On the getting side, the sure thing is having a job, where somebody else pays you for your work. My being an AB type makes it sensible to me that I’m not an entrepreneur.
Thanks for the stimulation for these thoughts.
My view on our disagreement: you think only AA & BB type choices should be considered rational, I claim that until there is a definition of rational that includes AB type choices, the use of “rational” in a theory of human action will be deficient.
Another repeat of Steve, on offering an envelope containing either the sure $1m or the Q1B lottery:
—
Then I change my mind and say: Actually, there’s only an 11% chance I’m going to give you an envelope. *If I do give you one*, which would you rather have? He says “the second, definitely”.
Just to be sure — this is the behavior that you’re happy to call rational?
—
Yes, I’d say the second one: it’s the better gamble by expected value.
Adding uncertainty to the “sure thing” means it wasn’t so sure. Since I never had the sure $1m, choosing the 1B envelope (at 11%) and then getting zip won’t cause me the same regret: you (the system) weren’t really going to give me the $1m anyway. Had I chosen 1A at 11%, you/the system would likely have cheated and I wouldn’t have gotten it then, either.
For me, I’d choose the 1B envelope at any chance of 99% (probably?) or less of your actually giving me an envelope, and still choose the 1A sure thing if you put it into my hand (a suitcase full of 100 stacks of $10k in $100 bills, like in the drug movies).
Of course, it wouldn’t surprise me to find out they’re all counterfeit, either.
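The envelope variant can be made exact under the standard decomposition of the Allais lotteries: suppose (an assumption here, not a quote from the thread) the second envelope holds not Q1B itself but its conditional part, a ticket paying $5m with probability 10/11 and nothing with probability 1/11. Then an 11% chance of receiving that envelope is precisely Q2B, and an 11% chance of the sure-$1m envelope is precisely Q2A. A sketch of the arithmetic (the helper `compound` is hypothetical):

```python
from fractions import Fraction as F

def compound(p_envelope, inside):
    """Unconditional payoff distribution: no envelope pays nothing;
    with probability p_envelope you get the lottery inside it."""
    dist = {0: 1 - p_envelope}
    for payoff, p in inside.items():
        dist[payoff] = dist.get(payoff, F(0)) + p_envelope * p
    return dist

sure_million   = {1_000_000: F(1)}
conditional_b  = {5_000_000: F(10, 11), 0: F(1, 11)}  # Q1B's conditional part

print(compound(F(11, 100), sure_million))   # 11% at $1m, 89% nothing: exactly Q2A
print(compound(F(11, 100), conditional_b))  # 10% at $5m, 90% nothing: exactly Q2B
```

Exact fractions make the point cleanly: 11% times 10/11 is exactly 10%, so someone who answers 1A and 2B is choosing differently between the same two envelope contents depending on how the surrounding 89% is described.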
Tom Grey: Aha! You are concerned about cheating on the experimenter’s part. I’m not sure whether this is something you’d intended in your earlier posts (in which case I failed to grasp it) or whether it’s a new twist you’re now introducing for the first time.
In any event, it leads to the natural question: Suppose you could somehow be certain of the experimenter’s honesty. Would you then answer consistently?