The rationality quiz that I posted on Tuesday has drawn a lot of comments from folks who think they can reconcile inconsistent answers by appealing to risk aversion. That’s surely incorrect. To see why, let’s start with another quiz.

**Question 0:** Which do you like better, dogs or cats?

Economists would not presume to declare either choice an irrational one. There’s no accounting for tastes.

Now I have two more questions for you:

**Question 1.** Which would you prefer:

- A dog
- A pet that has an 89% chance of being a dog and an 11% chance of being a cat

**Question 2.** Which would you prefer:

- An 11% chance of getting a dog
- An 11% chance of getting a cat

When John von Neumann and Oskar Morgenstern set out to axiomatize the notion of rationality, one of their axioms was (in essence) this: If you prefer dogs to cats, you’ll answer A to both questions. If you prefer cats to dogs, you’ll answer B to both questions.

It seems pretty hard to argue with this axiom. In Question 1, why would a dog lover ever take an unnecessary chance of winning a cat? Why would a cat lover settle for a dog? In Question 2, it’s almost impossible to imagine a dog lover choosing B or a cat lover choosing A.

Notice that **none of this has anything to do with attitudes toward risk**. It’s strictly about attitudes toward dogs and cats — and more specifically, about the consistency of those attitudes.

Now replace the dog with a million dollars, and the cat with a lottery ticket that gives you a 10/11 chance at five million dollars. Which do you like better, the “dog” or the “cat”? Again, either answer is fine. There’s no accounting for tastes.

Now let’s revisit Questions 1 and 2:

**Question 1:** Which would you prefer?

- A “dog” (i.e. a million dollars)
- A prize that has an 89% chance of being a “dog” (i.e. a million dollars) and an 11% chance of being a “cat” (i.e. a 10/11 chance of five million dollars).

[Notice that an 11% chance of being a 10/11 chance of five million dollars is the same thing as a 10% chance of being five million dollars.]
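That bracketed arithmetic can be checked exactly with rational numbers (a quick sketch, not part of the original argument):

```python
from fractions import Fraction

# An 11% chance of holding a ticket that itself pays off 10/11 of the time.
p_cat = Fraction(11, 100)         # chance the prize turns out to be the "cat"
p_ticket_wins = Fraction(10, 11)  # chance the "cat" pays five million

p_five_million = p_cat * p_ticket_wins
print(p_five_million)  # 1/10, i.e. a flat 10% chance of five million dollars
```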

**Question 2:** Which would you prefer?

- An 11% chance of getting a “dog” (i.e. a million dollars)
- An 11% chance of getting a “cat” (i.e. a 10/11 chance of getting five million dollars).

If you’ve bought into the von Neumann-Morgenstern axiom, then rationality requires you to answer either A to both questions (if you’re a “dog” lover) or B to both questions (if you’re a “cat” lover).

This is a simple consistency requirement, which, again, has **nothing to do** with how you feel about risk. If you’re averse to risk, you should be a consistent “dog” lover. If you love risk, you should be a consistent “cat” lover. But no single attitude toward risk can justify acting like a dog lover half the time and a cat lover the other half.

Questions 1 and 2 here are, of course, the same questions I posed on Tuesday. If your answers (like many people’s) were A and B, then you’ve certainly violated the von Neumann-Morgenstern dog/cat axiom.

Several commenters have made the mistake of arguing that Question 1 involves a “sure thing” while Question 2 does not. Here’s exactly why that argument is wrong:

In either scenario, there’s an 89% chance your decision won’t matter. 89% of the time, you’re sure to win a million in Question 1 or sure to win zero in Question 2 — and **there’s nothing you can do about that**. Only the remaining 11% of the time does your decision have any effect on the outcome — so you might as well make your decision on the assumption that this is one of those 11 out of 100 times.

In that case, what you’re choosing between is a sure million versus a 10/11 chance of five million. That’s your choice in Question 1 and that’s your choice in Question 2. If you like “dogs” (i.e. sure things), then you should take the sure thing both times. If you prefer to gamble, you should take the gamble both times. **In the only cases where your decision matters, Questions 1 and 2 both present you with exactly the same choice between a gamble and a sure thing.** In each case, you should pick the one you prefer.
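The cancellation argument above can be made concrete with a small expected-utility sketch (illustrative only; the particular utility numbers are arbitrary assumptions):

```python
from fractions import Fraction

def eu(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[x] for x, p in lottery.items())

# Arbitrary utilities for $0, $1M, $5M (any assignment works for the point below).
u = {0: Fraction(0), 1: Fraction(1), 5: Fraction(13, 10)}

# Question 1: a sure $1M vs. 89% $1M / 10% $5M / 1% nothing.
q1A = {1: Fraction(1)}
q1B = {1: Fraction(89, 100), 5: Fraction(10, 100), 0: Fraction(1, 100)}

# Question 2: 11% $1M vs. 10% $5M.
q2A = {1: Fraction(11, 100), 0: Fraction(89, 100)}
q2B = {5: Fraction(10, 100), 0: Fraction(90, 100)}

# The 89% "common consequence" cancels: both questions reduce to the same
# comparison, so an expected-utility maximizer answers them the same way.
d1 = eu(q1A, u) - eu(q1B, u)
d2 = eu(q2A, u) - eu(q2B, u)
print(d1 == d2)  # True, and this holds for ANY choice of u
```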

So if you gave inconsistent answers, you can’t justify them by risk aversion. Is there some other way to justify them? Maybe. I’ll post (at least) once more on this topic in the next several days.

> Is there some other way to justify them?

Well, here’s a link to a paper titled “Does Consistency Predict Accuracy of Beliefs?”, which concludes that “Economists with inconsistent beliefs tended to be more accurate than average, and consistent Bayesians were substantially less accurate.”

http://mpra.ub.uni-muenchen.de/24976/

:)

When people are given questions like this, they think about them inside a limited domain, rather than apply them to the real world.

Take the question “which would you prefer, a horse or a dog?”. I am fairly sure that given this question people imagine their lives with a dog and all that entails (walks in the park, someone who’s pleased to see you whenever you come home etc), and imagine their lives with a horse and all that entails (getting to ride on a horse whenever you want, having to muck out stables etc), and then choose which sort of life they prefer.

What they don’t consider is that you can sell a horse for a lot of money and buy a dog for much less. So pretty much everybody should say that they would prefer a horse. But when thinking about the problem they stay in a limited domain where trading doesn’t enter the picture.

Even people who prefer a guaranteed million and don’t want to risk anything, should prefer the lottery ticket option, if they could be sure they could sell it at something close to its market value.

Well I had the most to say about risk-aversion in the previous thread, mostly to identify it as an issue separate from rationality.

But it is a preference just like any other and is often a criterion for making decisions.

I am a cat lover with a severe, life-threatening allergy to them. I dislike dogs immensely, but in general my dislike for dogs is much less of a cost than the expected value I place on my life in the same house as a cat. If I were risk-loving enough, the value I receive by choosing the option likely to give me a cat could be enough to offset the now-reduced cost I associate with my risk.

I would need to choose A/A if I were risk-averse or neutral; if I were risk-seeking, I would need to choose B/B. Because, as you said, it is consistent application of decision criteria that demonstrates rationality.

But you went further and rather absolutely declared that decisions must represent preference to be rational. That “none of this has anything to do with attitudes towards risk”, “It’s strictly about attitudes toward dogs and cats”.

But isn’t this only true if you assume that there is zero cost (risk) associated with either option, that the decider is risk-insensitive?

In a case where there is a cost (risk) wouldn’t the attitude one holds towards risk become a relevant preference?

This topic has bugged me a bit, not only because I chose A and B in the original post (hence irrational!) but also because I advise professionally on economic issues, and hence feel a self-perceived need to justify, below, that my choices were not irrational by a commonsensical understanding of that word.

The original questions:

Q1. I chose A over B as I preferred getting one million dollars over a choice that probabilistically offered a higher return combined with a minuscule chance of getting nothing.

Q2. I chose B over A as my preference for certain gratification in Q1 gets overruled by a similar chance of nothing in both cases.

I guess I like a sure thing except where the chances of a sure thing (as per my preferences) are traded off against the chance of receiving something substantially (a factor of five in this case) larger.

I cannot quite figure out why the above, even if mathematically expressed, is inconsistent.

–

In this post I chose A in both cases because I prefer a dog over a cat. However, it is not that I merely prefer cats less than dogs – I do not want a cat at all. Hence in Q2, I chose A because I prefer having a dog or no pet at all over having a cat.

So I don’t quite see how the questions in this post are parallel to the original post. Dogs/Cats differ significantly to one million/five million dollars with the obvious differences arising in a consumer consumption choice according to individual preferences.

–

Also aren’t degrees of risk separate consumer decisions by themselves according to individual preferences?

–

Also a sure thing is a sure thing irrespective of the decision maker’s input.

–

I would like to be proved wrong though – that means I would have learnt something!

Cheers

David

My gut feeling was that this effect is caused by $5M not being ‘worth’ five times as much as $1M and the psychological cost associated with being greedy and losing (which would be a greater cost in the first scenario than the second). However if you assign numbers to those values they have to be pretty funky to get a result where choosing A in the first scenario is consistent with choosing B in the second. For example, if the value of $1M is 1 and $5M is 1.2, and the cost of being greedy and losing is 1.1 in the first scenario and 0.01 in the second then it works out that choosing A in the first and B in the second is consistent. However that would mean that the cost of beating yourself up for losing out on the sure thing is greater than the value of the sure thing and I don’t buy that. So I’ve gone from convinced it had some rational basis to being fairly happy that it doesn’t. I would still choose A in the first and B in the second though :)
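Adam’s numbers can be checked directly (a sketch of his hypothetical values, not a claim about anyone’s actual utilities):

```python
# Adam's hypothetical values: $1M worth 1, $5M worth 1.2, and a cost for
# "being greedy and losing" of 1.1 in the first scenario, 0.01 in the second.
v1m, v5m = 1.0, 1.2
regret1, regret2 = 1.1, 0.01

q1_A = v1m                                       # the sure thing
q1_B = 0.89 * v1m + 0.10 * v5m - 0.01 * regret1  # 0.89 + 0.12 - 0.011 = 0.999
q2_A = 0.11 * v1m                                # 0.11
q2_B = 0.10 * v5m - 0.01 * regret2               # 0.12 - 0.0001 = 0.1199

# A narrowly wins question 1; B narrowly wins question 2, as Adam says.
print(q1_A > q1_B, q2_B > q2_A)  # True True
```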

Steve-

Yes, but you’ve cheated.

1. You’ve replaced a difference in degree (more money) with a difference in kind.

2. You’ve entirely eliminated the possibility of losing.

3. You’ve lowered the stakes.

We could argue about change 1 (for some people $1M is infinity. So is $5M.). However, either change 2 or 3 is sufficient by itself to change the results.

Take #2. You seem to be arguing that adding the uncertainty to the substitute for “cat” is valid and doesn’t change anything. While it’s true that the resulting questions are back to the old challenge, it’s not true that this doesn’t affect things.

In your new problem set let’s change 1B to be consistent with the old:

B. 89% chance of being a dog, 10% a cat, 1% nothing.

First, we have to raise the stakes. This is a parent giving their smart kid a chance to own a pet, so it may be the sole chance. Suddenly, you’ll see some people choosing 1A, 2B. What’s the logic? “I do prefer a cat to a dog. It’s a mild preference, though, and if this is my only chance to have a pet, I don’t want to miss my sure chance of getting one. That’s 1A.” The 1B answer is obvious. Note that the uncertainty factor is critical. With it, there’s a motive to go for A. Without it, there’s no such motive.

@Adam

Those utilities don’t really seem to be all that out of whack to me. If someone gave you a suitcase with a million dollars, but on your way to the bank, you accidentally dropped it into a pit of molten lava, does it really seem like that much of a stretch to believe you’d be less happy after losing the money than you were before you got it?

Uh, make the above “The 2B answer is obvious.”

Thinking about EricK’s comment about regret on the other post. Let’s say you choose B/B, because you don’t mind a bit of risk. Let’s also say the lottery ticket you get is the loser. I would like to explore how you would feel in the Q1 and Q2 situations. Let’s say a couple of hundred people are doing the same thing, equally divided between As and Bs, and you are all gathering in a big hall for your prizes. One hall for Q1 and one for Q2.

Von Neumann and Morgenstern would have it this way. In the Q1 hall (if you answered Q1), 89 of the “A”s and the 89 “B”s who drew the $1M tickets are quietly ushered out of the door and given their $1M out of your sight. In your hall, there remain 11 “A”s and 11 “B”s. The prizes here are given out: 11 getting $1M, 10 getting $5M, and you getting nothing.

What if you answered Q2? In the Q2 hall, 89 “A”s and 89 “B”s are quietly ushered out of the hall, and told they have got nothing. In your hall, there remain 11 “A”s and 11 “B”s. The prizes here are given out: 11 getting $1M, 10 getting $5M, and you getting nothing.

In your hall, it is exactly the same for both questions, so the theory says we should feel the same, and hence answer both A or B for both questions, not A/B.

Let’s re-play the scene, but this time the 89 people are not ushered away for their $1M or $0.

There you are in the Q1 big hall, full of hope. 100 “A”s and 89 “B”s go up to get their $1M. Applause! 10 “B”s get $5M. Hooray! And poor old you get naff all. As you all gather in the hall afterwards, you feel really, really bad. Everyone else is celebrating and there’s you, on your own, the only one with nothing. “Oh, why oh why didn’t I pick A,” you moan to yourself, regretting your choice. Every time a bill comes in for the rest of your life you will remember this moment, and curse.

Meanwhile (in an alternative universe), in the Q2 hall: 11 “A”s go up for their $1M, 10 “B”s get their $5M, and the other 179 all pat one another on the back and say, “oh well, better luck next time.” “If I had picked A, I still probably wouldn’t have won,” you say to yourself.

The fact is, we don’t think of it the way von Neumann and Morgenstern would have us think. We think like the second option. So why does it matter to us whether the other 89 got $1M or $0? Why is our regret so much greater if “the other” 89% got something?

Ron:

> “I do prefer a cat to a dog. It’s a mild preference, though, and if this is my only chance to have a pet, I don’t want to miss my sure chance of getting one. That’s 1A.”

It’s also 2A.

One way to explain the seemingly irrational behavior is to assume a sort of many-worlds interpretation of probability, i.e. to suppose that for any gamble such as this, all possible outcomes will be realized (in different universes), with the more likely outcomes occurring in more universes. If you assume this, you can justify a 1A,2B response by saying “given that I will be rich in the future, in the sense that all future versions of myself will have pets, I prefer dogs. Given that I will be poor in the future, in the sense that only a minority of future versions of myself will have pets, I prefer cats.”

Ultimately, this sort of reasoning is not very convincing to me, but I think it serves as an example of the only sort of reasoning that justifies a mixed response: one must somehow care about interactions between mutually exclusive possible future occurrences when making choices.

I asked this before without an answer. I’ll try again. I really am curious . . .

I understand how the maximization of an expected ‘value’ or the minimization of an expected ‘cost’ allocation leads to either answer A or answer B in both questions. This is because the differences in probabilities are the same, so the difference between any expected values for any cost (or utility) allocation will be the same. Decisions based on this rationale are determined by computing some function of the probability distributions and consistently selecting the answer for which the result is largest (or smallest). And for this approach, you can assign any values to the three outcomes ($0, $1M, $5M). Any relationships between the three values are valid; they just need to be the same for any question that is asked. (I can place +10 ‘value’ on $0, -100 ‘value’ on $1M, and 0 ‘value’ on $5M. It is up to me, but, after I do, I will select either A or B for both questions.)

But what if my objective in selecting answers is to pick the one that has the most uncertainty associated with it? I don’t care which result I get, I take joy from having the largest element of surprise in the final result. To quantify the element of surprise, I use Shannon’s information entropy, which is a well-accepted measure of uncertainty for a probability distribution. In this case, I would select answer B for question 1, and answer A for question 2. If I share with you this objective, and the mathematical way that I compute the uncertainty associated with the options, then you can consistently predict the answers I will provide. Why is it not considered rational to consistently choose options that maximize (or minimize) uncertainty?
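For what it’s worth, the entropy claim checks out numerically (a quick sketch; entropy measured in bits):

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

h_q1A = entropy([1.0])               # a sure $1M: zero surprise
h_q1B = entropy([0.89, 0.10, 0.01])  # the three-outcome gamble
h_q2A = entropy([0.11, 0.89])
h_q2B = entropy([0.10, 0.90])

# A surprise-maximizer picks B in question 1 but A in question 2.
print(h_q1B > h_q1A, h_q2A > h_q2B)  # True True
```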

By the way . . . I suspect the answer to my question is related to an issue that has come up many times during this discussion.

A condition for making rational decision must(?) be that your ‘satisfaction’ with the result should be independent of the question. That is, if I receive $1M as a result of question 1, then I should be just as satisfied as I would be if I received $1M as a result of question 2. The uncertainty criterion I suggested is a consistent rationale, but it is based on the nature of the questions, not on the results.

I think this is a good example of the difference between economists/mathematicians and scientists.

In the case of scientists, the priority would be to observe and gather data in order to formulate a theory that describes the phenomenon. A scientist would be interested by the observation someone made in the previous comments that if you got to do the first gamble multiple times, many people would switch their answer from A to B.

In the case of economists and mathematicians, the priority seems to be to try to persuade people that their choices are wrong or inconsistent, rather than studying the phenomenon. They are not very interested in the observation that whether the gamble is a one-time deal or not makes a difference in many people’s answers.

Although I picked the “consistent” choices in your previous quiz, I think you’re missing something fundamental about human preferences, and the dog/cat choice might highlight it. For many people, “dog” and “cat” may be quite disjoint – for example, I like cats and dislike dogs. However, money is money, and people’s feelings about the money “dog” and the money “cat” are probably all the same, except for chance and quantity. More importantly, money combines, and it’s all just money. If you get a million dollars plus five million, you have six million, and it doesn’t matter that it came in two separate parts; if you have a dog and cat, they’re two separate, different things.

Here’s why this matters: One’s desire for ten more dollars given that one will already receive 20, can be very different from one’s desire for ten more dollars given that one will receive 5 first. And so on. And that’s perfectly rational. It’s not the same thing to receive 10 more dollars in either case. It may be to you, and you may be able to come up with a very logical argument for it, but to many (perhaps most) people, the two have different value.

I suspect that this problem is caused by one of the properties underlying real numbers. In using real numbers to describe reality we assume that those properties of real numbers also hold in the reality we describe. So if A + B = B + A, but in reality A + B =/= B + A it seems to me that the method to describe (not the description itself) is the problem.

ErickR: I think the economist’s approach is that we have a theory of how people will behave based on certain axioms, such as von Neumann and Morgenstern’s. Let’s test it and see if that is how people really behave. This is one such test, where the answers appear to contradict the axioms. The economist must either work out why the test does not contradict the axioms, or refine the axioms.

Taking the gamble in each question is the same. The only difference is what happens to the other people, or to put it another way, what would have happened to us if had made the other choice. If we were to lose the gamble (which is the same in both cases, don’t forget), we are regretful if most of them get $1M. If most get nothing, we are not.

This explains the choice of A/B. The economist must now work out if this is consistent with the axioms or requires new ones.

Here’s an attempt at quantifying what I guess people are feeling intuitively when they choose A then B. Several commenters have already alluded to this feeling, but I want to try to put numbers to it.

In the first question, you have a unique situation that never occurs in the second. If you choose B in the first question, there is a 1% chance that you will be sitting there, KNOWING FOR CERTAIN that if you had chosen A, you would have been a lot richer. That is a very undesirable outcome for many people. Let’s say it is really painful for some people, and worth negative two million units of contentedness (conts). It does not really matter to most people how much richer you would have been, just that you would have been rich, and because of your “incorrect” choice, you know that you are not rich. Assume that $1M is worth a million conts, and $5M is still rich, so only worth slightly more, say 1.2 million conts. Then choice A has a value of a million conts, and choice B has an expected value 0.99 million conts. So people who feel this way choose A.

But here is where the interesting psychological phenomenon occurs. In question 2, there is no possibility, for either choice, of an outcome where a person sitting there, knowing for certain that they lost out on being rich because of their choice. They will just think, well, I would have probably lost if I had chosen differently. So, choice A is worth 0.11 million conts, and choice B is worth 0.12 million conts. So people who feel that way choose B.

Apparently it is possible to hypothesize people who feel a certain way, quantify those feelings, and come up with a consistent explanation for why such people make the choices that they do.
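The regret story above does produce the A/B pattern when you run the numbers (using the commenter’s own hypothetical “cont” values):

```python
# $1M = 1.0M conts, $5M = 1.2M conts, and a -2.0M-cont penalty for the 1%
# outcome in Q1-B where you KNOW your choice cost you a fortune.
v1m, v5m, regret = 1.0, 1.2, -2.0

q1_A = v1m                                      # 1.00M conts
q1_B = 0.89 * v1m + 0.10 * v5m + 0.01 * regret  # 0.89 + 0.12 - 0.02 = 0.99
q2_A = 0.11 * v1m                               # 0.11M conts
q2_B = 0.10 * v5m                               # 0.12M conts; no certain-regret case

print(q1_A > q1_B, q2_B > q2_A)  # True True: A in Q1, B in Q2
```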

Cos:

> Here’s why this matters: One’s desire for ten more dollars given that one will already receive 20, can be very different from one’s desire for ten more dollars given that one will receive 5 first.

Here’s why this doesn’t matter: In the example at hand, it is never the case that one will “already receive 20” (or any other amount).

In the 11% of cases where your choice matters, you’re starting from zero in Question 1 and you’re starting from zero in Question 2. The choice is between a million for sure and a 10/11 chance of 5 million — starting from exactly the same zero in each case.

Harold:

> I think the economist’s approach is that we have a theory of how people will behave based on certain axioms, such as von Neumann and Morgenstern’s. Let’s test it and see if that is how people really behave. This is one such test, where the answers appear to contradict the axioms. The economist must either work out why the test does not contradict the axioms, or refine the axioms.

Exactly. And in my next post on this subject, I will talk about how one might refine the axioms.

The axiom in question is the independence axiom, which states:

If lottery A is preferred to lottery B, then lottery A and lottery C is preferred to lottery B and lottery C.

Instead of throwing this axiom out, we can just think of the Allais paradox as a super-stylized example that takes advantage of the fact that people are really bad at reducing complicated lotteries to simple ones. The question shouldn’t be how can we change the axiom to be consistent with the Allais paradox, but rather whether the Allais paradox even matters. How often does a situation that gives rise to the Allais paradox come up in market settings?
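The independence axiom can be sketched in code: mixing both lotteries with a common third lottery never flips an expected-utility ranking (the utility numbers below are arbitrary assumptions, chosen only for illustration):

```python
from fractions import Fraction

def eu(lot, u):
    """Expected utility of {outcome: probability}."""
    return sum(q * u[x] for x, q in lot.items())

def mix(p, lot1, lot2):
    """Compound lottery: lot1 with probability p, lot2 with probability 1-p."""
    out = {}
    for x, q in lot1.items():
        out[x] = out.get(x, Fraction(0)) + p * q
    for x, q in lot2.items():
        out[x] = out.get(x, Fraction(0)) + (1 - p) * q
    return out

u = {"nothing": Fraction(0), "$1M": Fraction(1), "$5M": Fraction(13, 10)}
A = {"$1M": Fraction(1)}                                   # the sure million
B = {"$5M": Fraction(10, 11), "nothing": Fraction(1, 11)}  # the "cat" ticket
C = {"$1M": Fraction(1)}                                   # common third lottery

p = Fraction(11, 100)
# With C = a sure $1M, the two mixes are exactly Question 1's options.
same_ranking = (eu(A, u) > eu(B, u)) == (eu(mix(p, A, C), u) > eu(mix(p, B, C), u))
print(same_ranking)  # True
```

Swapping in C = a sure nothing reproduces Question 2, which is exactly why the axiom forces the same answer to both questions.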

I am indifferent between having a cat or dog, but not both. Can I choose 1)a and 2)b?

I take issue with the implicit assumption that a person has to have a single answer to question 0 above. The answer is going to depend not just on the person asked, but also on that person’s situation. What would I rather have, a dog or cat? Well, do I have to take care of the animal myself? If so I’d prefer a cat. Do I live near a dog park frequented by attractive single females? In that case my allegiance switches to dog…

Going back to the million dollar questions, in question 1, you are asked to imagine yourself in a situation where you know you are very likely or certain to become rather wealthy. In question 2, you imagine yourself in a situation where there is a much lower likelihood of becoming wealthy. Despite the fact that you have no control over 89% of the outcomes, they do matter! As far as I’m concerned, a person asked questions 1 and 2 is asked to place themselves in the shoes of two very different people, leading us back to the fact that there is no accounting for tastes. Therefore economists shouldn’t declare you irrational for reaching different answers for each question.

Thomas Purzycki:

In the 11% of cases where your decision makes a difference, you are equally wealthy facing question 1 and question 2.

Cornelius:

> I am indifferent between having a cat or dog, but not both. Can I choose 1)a and 2)b?

Only if you’re equally happy choosing 1B and 2A.

JLA: Excellent comment. I’ll elaborate on this next week.

As Professor Landsburg has pointed out, the following statement is true for both of the questions:

If you pick Answer B instead of Answer A, you will have (i) one more chance in 100 of receiving no money; (ii) 11 fewer chances in 100 of receiving $1M; and (iii) 10 more chances in 100 of receiving $5M.
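That statement is easy to verify: the shift in probabilities from Answer A to Answer B is identical in both questions (a quick check with exact arithmetic):

```python
from fractions import Fraction

F = lambda n: Fraction(n, 100)

# (change in chance of nothing, of $1M, of $5M) when picking B instead of A.
q1_shift = (F(1) - F(0), F(89) - F(100), F(10) - F(0))
q2_shift = (F(90) - F(89), F(0) - F(11), F(10) - F(0))

print(q1_shift == q2_shift)  # True: (+1/100, -11/100, +10/100) in both questions
```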

Professor L is telling us that this fact should cause a person who uses rational thought to select Answer A or B for both questions. Others seem to think that something more than the linear difference in probabilities should be used as part of the rationale for decision making, and, implicitly, they want to include a nonlinear factor that depends on the values of the probabilities.

If rational thought should only consider linear functions of the probabilities, then this debate should be over. ‘Linear’ reasoning results in Answer A|A or B|B, not A|B or B|A. The debate seems to be about the use of nonlinear functions of the probabilities. So, can someone explain why nonlinear functions of the probabilities are not considered valid as the basis for rational decision making? I suspect this has been considered carefully, but I am not familiar with economics literature.

(As an aside, thanks to S.L. for taking the time to suggest and discuss these issues on your blog. They are informative and fun.)

Thomas Bayes:

> So, can someone explain why nonlinear functions of the probabilities are not considered valid as the basis for rational decision making? I suspect this has been considered carefully, but I am not familiar with economics literature.

Nobody is ruling out nonlinear functions of the probabilities. We are starting with just one assumption, which has nothing to do with linearity.

Maybe this will make things clearer (let me know if it does, because if so I’ll include it in the next blog post).

Start with this question. I have here a lottery ticket that returns five million dollars 10/11 of the time. Now which would you prefer?

A. An 89% chance of a mystery prize, and an 11% chance of a million dollars.

B. An 89% chance of a mystery prize, and an 11% chance of the lottery ticket.

The only assumption that goes into the argument is this: Your choice between A and B should not depend on the value of the mystery prize.

Any further inferences you want to draw about “not allowing nonlinearities” is *not* a separate assumption — it is a conclusion that *follows* from this assumption.

@SL

I do not dispute that, but I fail to see how it addresses my point that the 89% of outcomes you do not control may have some bearing on your decision. Let’s simplify questions 1 and 2:

Q1: Given X, would you prefer A or B.

Q2: Given Y, would you prefer A or B.

Is a rational person required to answer either A, A or B, B?

Prof.–

I recognize that it seems difficult for me to be critical without also offending you but I will do my best to maintain civility…

On what work of John von Neumann and Oskar Morgenstern, specifically, are you basing your stated conclusions?

In Theory of Games and Economic Behavior, von Neumann and Morgenstern explicitly state that rationality reflects decisions that seek to “maximize satisfaction,” and that the potential value of the satisfaction obtained by a choice can, for comparison with other choices, be factored against the probability of achieving it.

“Sometimes uncontrollable factors also intervene, e.g. the weather in agriculture. These however are purely statistical phenomena. Consequently they can be eliminated by the known procedures of the calculus of probabilities: i.e., by determining the probabilities of the various alternatives and by introduction of the notion of ‘mathematical expectation.’ Cf.”

vN & O’s axioms were specifically for the purpose of determining optimum actions in zero-sum contests: competitive contests where the decisions of one or more other players will affect your maximum and minimum possible payout.

On matters of non-zero-sum games, such as asking someone their preferences for milk, coffee, or tea and then quantifying them by offering a glass of milk or a 50-50 chance of one or more of the other options, they deferred to the usability of several prior methods and theories (namely the Austrian School’s).

“We feel, however, that one part of our assumptions at least – that of treating utilities as numerically measurable quantities – is not quite as radical as is often assumed in the literature.”

vN & O pay only a passing and disinterested mention to contests like your puzzle. In the last edition of ToGaEB they call for further development of subjective probability, and later offered approval of works on the subject written by Schmeidler and Fishburn. For these types of contests (lotteries, roulette wheels, etc.), Fishburn in particular adds to the vN & O axioms consideration of many of the things brought up here: risk aversion, set theory, etc.

So if risk-aversion and subjective valuation of cost/benefit/probability affects perception of risk isn’t it possible for someone who values $5m and $1m equally to rationally choose 1A and 2B?

Thomas Purzycki:

> Q1: Given X, would you prefer A or B.
> Q2: Given Y, would you prefer A or B.

This is not a fair restatement of the questions. A fair restatement would be “Given X in one possible state of the world, would you prefer A or B in some *completely distinct* possible state of the world?”

Steve:

I’ll try to use your new example to describe the issue I’m concerned about. There are four probabilities I’ll use to compare these two options: P0, PM, P1 and P5. (These are the probabilities for 0 dollars, Mystery prize, $1M, and $5M.)

For Option A:

P0 = 0; PM = .89; P1 = .11; P5 = 0

For Option B:

P0 = .01; PM = .89; P1 = 0; P5 = .1

One way to decide is to associate some utility with each outcome (U0, UM, U1, U5), and then compute

Utility = U0*P0 + UM*PM + U1*P1 + U5*P5.

This is how I support all of the assertions about rational thought that you’ve made in this post.

Another way to decide, though, is to associate a more general utility function of the form

Utility = f0(P0) + fM(PM) + f1(P1) + f5(P5),

where, for example, I might pick the functions of the probabilities to be

f0(p) = -p

fM(p) = 0

f1(p) = 10*p/(1-p)

f5(p) = 20*p/(1-p)

(If we are uncomfortable with infinite utility, we can add a small value to the denominators for f1 and f5.)

With this utility calculation, I’ll pick B if I don’t know the mystery prize. But, if you tell me the mystery prize is $1M, then the new probabilities become

A: P0 = 0; PM = 0; P1 = 1; P5 = 0

B: P0 = .01; PM = 0; P1 = .89; P5 = .1

and the nonlinearity for f1(P1) will cause me to select option A. This would not have happened if I used linear functions of the probabilities, but the nonlinearities in my utility function always cause me to go for a sure 1 or 5 million dollars.

What is the problem with using a utility function like the one I described? I think this type of nonlinearity is the thing that many people are implicitly struggling with. They are putting nearly infinite value in an option that has probability 1 for $1M.
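Thomas Bayes’s nonlinear functions can be run directly (with a small epsilon added to the denominators, as he suggests, to avoid dividing by zero):

```python
EPS = 1e-6  # small value added to the denominators, per the comment

f0 = lambda p: -p                      # disutility of a chance at nothing
fM = lambda p: 0.0                     # mystery prize contributes nothing
f1 = lambda p: 10 * p / (1 - p + EPS)  # blows up as P($1M) approaches 1
f5 = lambda p: 20 * p / (1 - p + EPS)  # blows up as P($5M) approaches 1

def value(p0, pM, p1, p5):
    return f0(p0) + fM(pM) + f1(p1) + f5(p5)

# Mystery prize unknown: B wins.
A = value(0.0, 0.89, 0.11, 0.0)    # roughly 1.24
B = value(0.01, 0.89, 0.0, 0.10)   # roughly 2.21

# Mystery prize revealed as $1M: the near-infinite weight on certainty flips it.
A2 = value(0.0, 0.0, 1.0, 0.0)     # roughly 10/EPS, enormous
B2 = value(0.01, 0.0, 0.89, 0.10)  # roughly 83

print(B > A, A2 > B2)  # True True: the choice reverses when the prize is revealed
```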

Thomas Bayes:

“What is the problem with using a utility function like the one I described?”

The problem is that it leads to behavior that violates the axiom I stated.

Your utility function is not ruled out by assumption; it is ruled out as a *consequence* of a more fundamental assumption.

Epiphany! I just learned something.

Yes, I do recommend that you follow up with a post on this. Among other things, I’d like to learn the motivation for the axiom.

Even when you fairly restate the questions I tried to simplify, I still feel like X and Y could matter. If I knew for a fact ahead of time that I would wind up in the distinct possible state of the world where my decision mattered, I would choose A, A or B, B, but not without such omniscience. I guess I’m breaking the independence axiom that JLA mentioned above. I look forward to your elaboration on it next week so I can decide if I am at peace with mine illogic.

—

“If lottery A is preferred to lottery B, then lottery A and lottery C is preferred to lottery B and lottery C.”

—

Okay, I see how this statement induces a decision function that is linear in probability. But if you started by ruling out nonlinear decision functions, then this statement would follow from the definition of linearity. In other words, this statement is only true if you use linear decision functions. But, for many people, it appears that nonlinear decision functions align better with their emotions, so what is fundamental about this axiom other than the fact that it follows from the use of linear decision functions? (I hope this is a topic for a subsequent post . . . )

Thomas Bayes: When you talk about linear decision functions, you don’t mean linear in payoffs, you mean linear in utility, which is a far more general condition, since the utility function could take any of very many forms. So “linearity” is not nearly as restrictive a condition as a casual reader of your comment might be led to believe.

The usual sequence of logic is:

Assumption: People’s preferences satisfy certain axioms, as set down by von Neumann and Morgenstern.

Theorem (given the assumption): Each person maximizes the expected value of some utility function. (“Expected value” is where the linearity comes in.) (The utility function might differ across people.)

Observation: In certain surveys (such as the one I’ve posted), people’s preferences violate the theorem (and therefore violate the assumption).

Solution: Either A) argue that those survey responses are not indicative of the way people behave in the real world or B) tweak the assumption. My post next week will be about how you might tweak it.

And again, I would love to know at exactly what point my understanding of von Neumann and Morgenstern is failing me. The book was a hit for its day and is quite readable (as well as available free online).

But judging from what appears to be an intentionally clear and careful explanation of the difference between decisions made in a social economy and decisions made in … well, exactly the conditions of your puzzle, “Theory of Games and Economic Behavior” contradicts nearly every one of your conclusions.

First and foremost, ToGaEB asserts that in decisions such as your puzzle, which it illustrates with a Crusoe economy, the problem to solve for is satisfaction maximization.

“Thus Crusoe faces an ordinary maximum problem, the difficulties of which are of a purely technical and not conceptual nature, as pointed out.”

“The individual who attempts to obtain these respective maxima is also said to act ‘rationally.’ But it may safely be stated that there”

Second, you include the assumptions as elements of the theorem. This is something vN&M explicitly reject. Assumptions about the nature of preferences are made to cover up the deficiencies of the theorem and to allow study of how the theorem relates to other aspects of the problem.

“Every participant can determine the variables which describe his own actions but not those of the others. Nevertheless those “alien ” variables cannot, from his point of view, be described by statistical assumptions. This is because the others are guided, just as he himself, by rational principles whatever that may mean and no modus procedendi can be correct which does not attempt to understand those principles and the interactions of the conflicting interests of all participants.”

von Neumann and Morgenstern accept that in a puzzle such as yours, a person with a preference for variety, for whatever reason, may rationally never choose the same answer-letter twice in a row. Or a person with a preference for poverty will rationally choose B/A, because those are the options statistically most likely to leave him “not rich”.

The whole point of the axiomization you are citing is that the conclusions you are drawing can only be made about decisions in a social economy situation which is the exact opposite of the situation posed by your quiz.

I concede that my lack of understanding may derive from my being an idiot, and I would not ask you to waste time on such a lost cause. So I should think that even the minimal response of submitting a few page numbers from von Neumann and Morgenstern’s book (the 3rd edition is freely available), or any other academic reference, would suffice. I attend one of the oldest universities in the country (est. 1819) and have never stumped the library for a source.

Benkyou:

“Second, you include the assumptions as elements of the theorem. This is something vN&M explicitly reject. Assumptions about the nature of preferences are made to cover up the deficiencies of the theorem and to allow study of how the theorem relates to other aspects of the problem.”

I am baffled by what you could possibly mean here. All theorems require assumptions. This particular theorem says that if your preferences satisfy certain axioms, then they can be represented by a utility function. Of course, if you were to disallow all assumptions, you’d never be able to prove anything.

“So I should think that even the minimal response of submitting a few page numbers from von Neumann and Morgenstern’s book (the 3rd edition is freely available) or any other academic reference.”

Sorry that I don’t have page numbers for you. I teach this stuff every year (without using a textbook) and am familiar enough with it that it’s been a long time since I’ve looked at original sources. But I am 100% sure you can find it in any standard graduate-level theory text, e.g. the one by Mas-Colell et al.

Which particular theorem? Minimax seems to be the only work of von Neumann’s that would apply, and it was expanded into ToGaEB with Morgenstern. I cannot find any other relevant theorem they collaborated on.

But ToGaEB says, “We wish to concentrate on one problem which is not that of the measurement of utilities and of preferences and we shall therefore attempt to simplify all other characteristics as far as reasonably possible.” They made assumptions about how preferences would be measured or applied because doing so let them skip the work of designing a model for measuring them. “But it may safely be stated that there exists, at present, no satisfactory treatment of the question of rational behavior. There may, for example, exist several ways by which to reach the optimum position; they may depend upon the knowledge and understanding which the individual has and upon the paths of action open to him.”

That is because this particular theory is especially concerned with a decision environment in which the choices of other actors change the nature of the optimal solution away from maximizing satisfaction.

I have Mas-Colell’s Microeconomic Theory on my e-book reader; I’m curious to read how he relates von Neumann’s axioms to non-zero-sum games.

I believe that SL is just speaking of the axiomatization of utility on pp. 26–27:

http://www.archive.org/stream/theoryofgamesand030098mbp#page/n49/mode/2up

In the next pages vN & M analyse the problems with including a “specific utility of gambling”, saying that:

“We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate. Since (3:A)-(3:C) secure that the necessary construction can be carried out, concepts like a ‘specific utility of gambling’ cannot be formulated free of contradiction on this level.”

Benkyou: Minimax is a theorem about strategic interactions between two or more agents. We are talking here about decision-making by a single agent.

Mariano M. Chouza: Yes, the theorem I’m referring to is precisely the theorem that says “(3:A)-(3:C) secure that the necessary construction can be carried out”.

Why is it irrational to have this preference: “I prefer not to gamble at all. But if I am *forced* to gamble, then I prefer to choose the game with the highest expected return.” Here’s how I read the original questions:

(1) Do you want to participate in a gambling game involving a chance of a bad outcome?

A. No.

B. Yes.

(2) I’m going to *force* you to participate in a gambling game involving a chance of a bad outcome. Given that you must place a bet, would you prefer:

A. a bet with a smaller positive expected value, or

B. a bet with a larger positive expected value.

Glen Raphael:

“Why is it irrational to have this preference: ‘I prefer not to gamble at all. But if I am *forced* to gamble, then I prefer to choose the game with the highest expected return.’”

Of course I’ve answered this in the posts. If you can pinpoint which part of the explanation was unclear to you, I’ll be glad to try making it clearer.

Glen Raphael:

As I just indicated, it’s hard for me to be sure what point you’re missing, but let me take a stab at it:

89% of the time your choice does not matter. 11% of the time it does. If your first priority is not to gamble, then you’ll want to choose the non-gamble in both of the cases where it matters. That’s A both times.

You can’t divide it up that way because whether that 89% involves a win or a loss affects what the regret-minimization strategy is regarding the overall choice. I have two preferences: (1) don’t gamble, (2) if you do gamble, do so in such a way as to maximize EV. A 0% chance of losing makes it possible to *entirely satisfy* preference #1; a ~90% chance of losing makes it impossible to satisfy preference #1, so preference #2 becomes more salient and drives the decision process. You get two different answers because there are two different motives involved.

A preference to *not gamble at all* given a single one-off opportunity is not the same thing as a preference to *gamble as little as possible in every situation*. If you have a ~90% chance of losing, *you are gambling*, regardless of how one tweaks the odds regarding what happens in that other ~10%.

Glen Raphael: I’ll post more on this in the coming week. I think you are on to something important but don’t have it quite right, for reasons I’ll explain in the next post.