Can ignorance be bliss?

There is allegedly a tradition of issuing a blank cartridge to one (randomly chosen) member of each firing squad, so that no shooter knows for certain that he contributed to a death. Let’s assume that tradition really exists and let’s assume that it exists because the shooters want it. Does that prove that shooters (at least in some instances) value ignorance?

Not necessarily. It might just mean that each shooter prefers a 5/6 chance of firing a real bullet over a 100% chance of firing a real bullet. That’s not the same thing as preferring to be ignorant.

So here’s the key experiment. Offer the shooters a choice:

Under **Policy X**, one (randomly chosen but unidentified) shooter has a blank. Everyone fires, and nobody ever knows who fired the blank.

Under **Policy Y**, one randomly chosen shooter is excused from the firing squad before the triggers get pulled. Everyone else fires real bullets.

Under either policy, you (a member of the firing squad) wake up on execution morning with a 5/6 chance of firing a live bullet into a live person. If that probability is all you care about, you’ll be indifferent between the policies. But if you want to **guarantee your own ignorance** — so you’ll never be in a position of knowing you fired a real bullet — then you’ll prefer Policy X.

Now let’s play a game. I’m going to put 89 red balls, 10 black balls, and one white ball in an urn. The 89 red balls are all labeled “YOU LOSE”. As for the rest — it’s your choice. I can label them all “one million dollars”, or I can label the blacks “five million dollars” and the white “YOU LOSE”. After they’re labeled, you draw a ball and you get the corresponding prize.

Question: Have I given you enough information to make a choice? Not if you’re a lover of ignorance! In that case, you’ll want to know which protocol I plan to follow:

Under **Protocol X**, you draw a ball, observe it, and win your prize.

Under **Protocol Y**, you’re blindfolded when you draw the ball. I tell you (honestly) what your prize is, but I don’t tell you the color of the ball.

Now suppose you’ve chosen to label the white ball “YOU LOSE”. Under Protocol X, if that white ball comes up, you’re going to majorly kick yourself. Under Protocol Y, if the white ball comes up, you’re going to be told you lost, and you’re going to be able to walk away thinking “well, it was probably a red ball anyway, so this wasn’t my fault”. With Protocol Y, you get your guaranteed ignorance.

Last week, we discussed (and re-discussed) (and re-re-discussed) this question: When you’re choosing how to label the black and white balls, do you care what’s on the reds? We posed two questions:

**Question 1**: If the reds are all labeled “one million dollars”, what labels do you want on the blacks and on the white?

**Question 2**: If the reds are all labeled “YOU LOSE”, what labels do you want on the blacks and on the white?

These are equivalent to Questions 1 and 2 in our first post on this subject — except for one thing. Those questions were worded in such a way as to suggest that we’ll be using Protocol Y. In question 2, if you draw a ball that says “YOU LOSE”, all you know is that you lose. You don’t know whether it’s a red ball (which was going to say “YOU LOSE” anyway) or the white ball (which only says “YOU LOSE” because of the choice you made).

According to the way economists usually think about rationality — according, that is, to the axioms written down by von Neumann and Morgenstern — any rational person will answer Question 1 and Question 2 the same way. That’s because the usual axioms only allow you to care about outcomes and the probabilities of those outcomes. They don’t allow you to care about things like staying ignorant.
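That equivalence is easy to verify numerically. Below is a minimal sketch (the utility values are arbitrary placeholders, chosen only for illustration): whatever utilities you assign to the three prizes, the expected-utility gap between 1A and 1B is identical to the gap between 2A and 2B.

```python
# Under expected-utility theory, the preference between 1A and 1B must
# match the preference between 2A and 2B, for ANY utility function u.

def expected_utility(lottery, u):
    """lottery: list of (probability, prize) pairs."""
    return sum(p * u[prize] for p, prize in lottery)

# Arbitrary placeholder utilities for the three prizes.
u = {0: 0.0, 1_000_000: 10.0, 5_000_000: 12.0}

q1a = [(1.00, 1_000_000)]
q1b = [(0.01, 0), (0.89, 1_000_000), (0.10, 5_000_000)]
q2a = [(0.89, 0), (0.11, 1_000_000)]
q2b = [(0.90, 0), (0.10, 5_000_000)]

d1 = expected_utility(q1a, u) - expected_utility(q1b, u)
d2 = expected_utility(q2a, u) - expected_utility(q2b, u)

# Both differences reduce algebraically to
#   0.11*u($1M) - 0.10*u($5M) - 0.01*u($0),
# so they are equal no matter what u is.
print(abs(d1 - d2) < 1e-9)  # prints True
```

Whatever numbers you substitute into `u`, the two differences stay equal, which is why the von Neumann-Morgenstern axioms force the same answer to both questions.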

Okay, then, suppose we tweak the axioms so you **are** allowed to prefer ignorance. Does that rescue you from the irrationality charge? Answer: It depends. If we’re using Protocol Y, then yes, you are now allowed to give different answers to the two questions — because Protocol Y offers you a shot at blissful ignorance when you choose the risky option in Question 2 but not in Question 1. That’s an important difference. Importantly different questions can merit importantly different answers.

On the other hand, if we’re using Protocol X, you can’t get off so easily. Under Protocol X, you don’t get the blessing of ignorance in **either** case, so there’s no remaining excuse for inconsistent answers.

Whew. This was going to be my last post on this subject, but a) I’m not done and b) this seems more than enough to digest for now. So I’ll come back to this once more after this has had a few days to sink in.

I’m not sure what you mean when you say “suppose we tweak the axioms so you are allowed to prefer ignorance.”

Are you suggesting that we weaken the reduction of compound lotteries property? If so, how exactly?

Fascinating stuff!


Really interesting post.

Excellent example. You’ve done a great job of addressing an issue that I’m convinced has been influencing people’s reaction to the questions.

If you draw a red ball, then you can think “Oh well, I would have lost regardless of my choice.” But if you draw the white ball, and it bears the message “YOU LOSE”, you’ll know you would have won if you had taken the other option. By playing the game this way, a person might modify their utility allocations to take into account the fact that they will feel bad if this happens. With the original problem statement, we don’t know which “YOU LOSE” ball was drawn, so we avoided this issue. Because of this, I believe, most people were comfortable with option 2.B.

But for Question 1, we have no choice. Under Option A, all of the balls are labeled “one million dollars”. We have no possibility for a “YOU LOSE” ball. Under Option B, the white ball is a “YOU LOSE” ball. So for Question 1, you have no way of attaining the bliss of ignorance. If you lose, you will know — with certainty — that you would have won had you made the other choice. This, I believe, is why many people are drawn to option 1.A.

Question 2 allows for the option of ignorance; Question 1 does not.

I just looked at your post again, and realize I did a poor job of restating exactly what you had already said. Oh well, it helped my understanding to write it. Thanks again for a great post.

@SL

Which claim # from your weekend roundup is the ignorance lover disputing?

Nevermind, I see it is #4 right there in your post. Mondays >.<

what do you prefer to drink?

a. water.

b. coffee/tea.

c. cola.

do people’s choices stem from a need/desire to cover up the taste of tasteless (or slightly contaminated) water? are the effects of drugs and drug dependence involved? clever marketing strategies?

i saw a show on coke and the suit leading the film crew around was carrying a half-full cola the _entire_ time. you never actually saw them drinking, but i think the implication was clear.

i think these experiments show that people’s choices are almost random. altruism exists. people will risk death to save the lives of strangers. those same people might just duck and cover if for some reason they didn’t get enough (or got too much) sleep the night before.

i think it’s very interesting how we (i picked b-a) defend our irrational decision making. doc claims there is no ‘right’ answer, yet we seem to want to be in the ‘rational’ group of decision makers.

i still want the red pill.

Thomas:

SL will do a better job answering, but I think they don’t have to dispute any of the Claims. Instead, they view the lotteries as having 4 outcomes instead of three:

Lose, not knowing you would have won with the other choice

Lose, knowing that you would have won with the other choice

Win $1M

Win $5M

The utility they associate with “Lose, not knowing you would have won with the other choice” and with “Lose, knowing that you would have won with the other choice” can be different.

Thomas:

You are correct, I believe. The two ways of losing require knowing something about the other question, so you have to suspend Claim 4.

If ignorance is the only factor, the A/B effect would completely disappear if you offered $0.01 instead of $0. This is the same as labeling the white ball $0.01.

There is no longer the possibility of ignorance in either case. I have a sneaking suspicion that the effect would persist to some extent. I think people would be happier to get the $0.01 if they thought that there was an 89% chance they could have got nothing.

I don’t think that people are being irrational, and I don’t think that ignorance has anything to do with it. The problem is with your hidden assumptions.

Suppose I have 3 tickets. Ticket A entitles the bearer to $1M if a black or white ball is drawn, ticket B entitles the bearer to $5M if a black ball is drawn, and ticket C entitles the bearer $1M if a red ball is drawn.

Restating your dilemma, there are 2 questions.

Question 1: If I give you ticket C, and then give you your choice of ticket A or B, which would you prefer?

Question 2: Given just the choice of ticket A or B, which would you prefer?

Ticket C is very valuable, and you could presumably sell it for $800K or more. So the questions are very different, because one is being asked of a rich man, and one is being asked of a poor man.

The problem is that you seem to have some sort of hidden assumption that your preference for a dog is independent of whether you already have a dog. That assumption is obviously false. It may be completely rational for me to prefer a dog to a cat, but if you give me a dog and ask whether I want another pet, I might prefer a cat for that second pet.


Roger:

Are you proposing that Question 1 now has three options:

A: $1M with certainty

B: An 89% chance for $1M, a 10% chance for $5M, a 1% chance for $0

C: A 1% chance for $800K, an 89% chance for $1.8M, and a 10% chance for $5.8M

Thomas & Thomas: I believe claim 3B is the one that the ignorance lover disputes.

The ignorance lover prefers, in isolation, a “dog” (sure $1M) to a “cat” (10/11 chance of $5M, 1/11 chance of $0). Thus, claim 3A guarantees that even the ignorance lover selects 1A. However, the ignorance lover does not accept claim 3B: in the case where there is only an 11% chance of winning either “pet,” the ignorance lover notices that, in situation 2B, he will not know the difference between choosing a cat and losing, versus just happening (with 89% likelihood) not to be lucky enough to get a pet.

Steve constructed the series of claims such that you must dispute one in order to say that 1A,2B is rational.

Just to re-state the question in the way that should remove the A/B effect.

Question 1

A) A million dollars for certain

B) A lottery ticket that gives you an 89% chance to win a million dollars, a 10% chance to win five million dollars, and a 1% chance to win 1 cent.

Question 2:

A) A lottery ticket that gives you an 11% chance at a million dollars (and an 89% chance of nothing).

B) A lottery ticket that gives you a 10% chance at five million dollars, a 1% chance of 1 cent (and an 89% chance of nothing)?

If you choose A/B, then for Q2 you will know whether you got one of the 89 which were “already” nothing, or the one you chose, which was 1 cent.

This should be relatively easy to test. Personally, the version with the 1 cent winnings doesn’t “feel” much different to me.

Very interesting.

Here’s an interesting tweak on the ignorance part of question 2. I wonder if anyone has tested people with this question:

On a game show, a computer prints out two slips of paper, A and B, which the game show host, without looking at the text, seals into two envelopes, A and B. Each slip of paper has some amount of money printed on it, possibly zero. No one was able to observe the amounts on the slips.

The host points to a small box with a cable running to the computer, and explains that the computer is connected to a true random number generator based on radioactive decay of atoms. Paper A was generated by simulating a random draw of a ball from a hat of 100 balls, with 11 of the balls indicating a $1M prize, and the other balls indicating zero. Paper B was a simulation of 100 balls of which 10 indicated a $5M prize, and the others indicated zero.

The host explains that you may choose only one envelope, by pointing at your choice (no touching!).

“But wait,” the host says, “before you choose I must explain that after you point at your envelope, the other envelope, the one that you did not choose, will be opened and you will see the prize amount that you have given up. Then you will be given your chosen envelope, and paid the prize amount, if it is not zero.”

Which envelope do you choose, A or B?

Thomas: No, I am not proposing any change to the questions.

ErikR: I don’t think that opening the other envelope would change my choice of A or B. It will probably be a 0, and if it is $1M, it is just luck of the draw. It will not influence me much because there is no way of knowing which it would be. The effect of Q1 is so powerful because you DO know which it is in advance. It is $1M.

I think the ignorance thing is only part of the story. I think that you would still get A / B from some people if you could distinguish the 89 zeros from the “white ball” zero, i.e. remove protocol Y. If you chose 2B and drew the 1 cent, you could console yourself with the idea that “oh well, I probably would have won nothing anyway.” This is despite the fact that you are making exactly the same gamble that you chose to avoid in Q1 (by choosing A). The factor here is not ignorance.

Of course, this is speculation, as I do not know for certain if people would answer A/B with these choices. However, ask yourself; if I had chosen 2B, and I had drawn the 1 cent (the white ball), would I console myself in this manner? If I answered 1A, and drew the 1 cent, I would not be able to console myself in this manner.

Even after working through this several times, I can see that there is an emotional consolation to 2B that there is not to 1B. This makes a person more likely to answer 2B than 1B, and must sway the person at the margin.

Harold:

I do not think anecdotal evidence is very useful here, since I think that the commenters on this blog are extremely far from being a random sample of the adult population.

Thomas Bayes:

“I just looked at your post again, and realize I did a poor job of restating exactly what you had already said.”

I quite disagree. I think you did an *excellent* job of restating what I’d already said. And I think there are likely to be readers for whom your restatement is clearer than the original. So thanks for this!

Harold:

“If ignorance is the only factor, the A/B effect would completely disappear if you offered $0.01 instead of $0. This is the same as labeling the white ball $0.01.”

This is an absolutely brilliant insight. I’ve taught this material for years, and I’d never thought of posing this version of the problem. (Neither, unless they didn’t tell me about it, have any of my students.)

“I have a sneaking suspicion that the effect would persist to some extent.”

We’ll know in a few months, because I plan to pose the question in this form to my class next semester by way of gathering some data. I think we will learn a lot from this.

“But if you want to guarantee your own ignorance – so you’ll never be in a position of knowing you fired a real bullet – then you’ll prefer Policy X.”

Policy X won’t achieve that goal. Firing a blank feels different from firing an actual bullet. The gun has no (or drastically less) kick.

Take the questions #1 and #2 from 12Oct. I value $1mil at 10 Jolies (my metric of utility), $5mil at 12 J and $0 at 0 J in #2, preferring B (expected 1.2 J) to A (1.1 J). However, in #1, my utility for $0 drops to -11 J, for ending with $0 is so very painful as I know I could have had $1mil for sure. A (10 J) beats B (9.99 J). Why isn’t $0 so painful in #2? In #2, I will never know if the $0 outcome (if obtained) came from my choice. Thus, my utility for a $0 outcome is not independent of the questions.
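The arithmetic in the comment above checks out; here is a quick sketch (the “Jolies” utilities are the commenter’s own numbers, not mine):

```python
# The commenter's utilities, in "Jolies" (J): u($1M) = 10, u($5M) = 12.
u_1m, u_5m = 10.0, 12.0

# Question 2, where u($0) = 0 J:
eu_2a = 0.11 * u_1m               # 1.1 J
eu_2b = 0.10 * u_5m + 0.90 * 0.0  # 1.2 J, so B is preferred

# Question 1, where u($0) drops to -11 J (a known, self-inflicted loss):
eu_1a = 1.00 * u_1m                                 # 10 J
eu_1b = 0.89 * u_1m + 0.10 * u_5m + 0.01 * (-11.0)  # 9.99 J, so A is preferred
```

The sign flip between the two questions comes entirely from letting u($0) depend on the question, which is exactly the commenter’s point.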

Ron, an unnecessary side issue. The BBC reported that one of Ronnie Lee Gardner’s firing squad members had a wax bullet to give the more accurate feel. Or, if you like, how about multiple start buttons on an electronic lethal-injection delivery system? One is chosen by computer (which later erases the choice from memory) to be deactivated. Multiple people push their start buttons. Nobody knows whose push was moot, much like this issue.

Steve-

I remain perplexed at your continuous efforts to drive an “unsustainable loss” peg into an “illogical/inconsistent” hole.

As I’ve suggested repeatedly, lower the stakes. Make the winnings $10 and $50 and most of the “illogical/inconsistent” behavior disappears. This leaves you free to study the mechanism that comes into play when the stakes become high, without the confusion of extraneous factors.

I think the ball drawing game would be more interesting if you did it like this (combining some of Steve’s and Harold’s ideas):

There are 89 red balls, 10 black balls, and 1 white ball in an opaque box. There is a door in the box that will randomly release one ball into a chute that is also opaque. Basically, imagine a lotto machine with all surfaces painted over with black paint.

A ball is released into the chute, and then the ball is expelled from the chute into an opaque bag, all without anyone observing the color of the ball.

A contestant who has observed this process is handed the bag and offered the choice of two games:

A) If the ball is black or white, you get $1M. If the ball is red, you get nothing.

B) If the ball is black, you get $5M, otherwise you get nothing.

Choose game A or B.

I think these games are more interesting when the random component is already decided before the person makes their choice. Logically, it makes no difference since the person does not know the random outcome, but psychologically, it can make a difference, since some people may feel that there is a definite right and wrong choice only when the random component has preceded their choice.

@SL

“any rational person will answer Question 1 and Question 2 the same way. That’s because the usual axioms only allow you to care about outcomes and the probabilities of those outcomes. They don’t allow you to care about things like staying ignorant.”

Why can’t we replace “staying ignorant” with “staying wealthy”? The expected value of being credibly asked question 1 is significantly higher than question 2 (regardless of your decision), enough so that someone may consider themselves wealthy when asked question 1, but not wealthy when asked question 2.

Okay gang, how about this?

(I think I have this correct, and I think it will illuminate some of the concerns that are arising in the discussions.)

Question 1: Which do you prefer:

A: A 0% chance for $0; a 100% chance for $1M; a 0% chance for $5M

B: A 1% chance for $0; an 89% chance for $1M; a 10% chance for $5M

Question 2: Which do you prefer:

A: An 80% chance for $0; a 20% chance for $1M; a 0% chance for $5M

B: An 80.2% chance for $0; a 17.8% chance for $1M; a 2% chance for $5M

Question 3: Which do you prefer:

A: A 60% chance for $0; a 40% chance for $1M; a 0% chance for $5M

B: A 60.4% chance for $0; a 35.6% chance for $1M; a 4% chance for $5M

Consistent with von Neumann-Morgenstern rationality, a rational decision-maker should always pick A, or always pick B.

Based on past discussion, though, I suspect some people will answer with A for Question 1, but switch back to B for Question 2 or 3. (Wouldn’t it be worth it to pick up a 2% chance of winning $5M by only changing the probability of losing from 80% to 80.2%? Or to pick up a 4% chance of winning $5M by only raising the probability of losing from 60% to 60.4%?)

If you do switch from A to B, think about this:

There are two rooms: room (A) and room (B). 100 people will be allowed to enter each room, and 20 of those will be selected at random and given a lottery ticket. In room (A), the lottery ticket returns $1M. In room (B), the lottery ticket has a 10% chance of returning $5M, an 89% chance for $1M, and a 1% chance of returning nothing. In short, room A has ticket A from Question 1, and room (B) has ticket B from Question 1. Which room do you prefer?

Now, if you previously answered A for Question 1 and B for Question 2 (or 3), you will have to contradict yourself. After all, you’ve already said that you prefer the ticket in room (A) to the ticket in room (B), but if you prefer 2.B over 2.A, then you need to go into room (B). You are experiencing cognitive dissonance.

(If 40 people from each room are selected at random and given tickets, then you will be addressing Question 3.)

(Of course, if everyone gets a ticket in each room, then you are addressing Question 1.)
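The compounded probabilities in Questions 2 and 3 above follow mechanically from the Question 1 tickets; a short sketch of the arithmetic (the function name and structure are mine, for illustration):

```python
def compound(p_selected, ticket):
    """Mix a ticket with a (1 - p_selected) chance of getting nothing.

    ticket: dict mapping prize -> probability of that prize.
    """
    dist = {0: 1.0 - p_selected}  # not selected -> $0
    for prize, prob in ticket.items():
        dist[prize] = dist.get(prize, 0.0) + p_selected * prob
    return {prize: round(p, 4) for prize, p in dist.items()}

ticket_a = {1_000_000: 1.00}
ticket_b = {0: 0.01, 1_000_000: 0.89, 5_000_000: 0.10}

# A 20% selection chance reproduces Question 2's options:
q2a = compound(0.20, ticket_a)  # 80% $0, 20% $1M
q2b = compound(0.20, ticket_b)  # 80.2% $0, 17.8% $1M, 2% $5M

# A 40% selection chance reproduces Question 3's options:
q3b = compound(0.40, ticket_b)  # 60.4% $0, 35.6% $1M, 4% $5M
```

Since every “room” question is just the same pair of tickets scaled by the same selection probability, a ticket preference that flips with the selection probability is exactly the inconsistency being described.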

Thomas P:

Excellent point. Commenters have now pointed out a number of differences between the two questions. The conclusion I draw from all of this is that if economists and mathematicians want to model the behavior of real people in the real world, they need to refrain from simplistic models that treat situations as being equivalent when they are only equivalent in an esoteric mathematical sense, and not in reality.

ErikR: “The conclusion I draw from all of this is that if economists and mathematicians want to model the behavior of real people in the real world, they need to refrain from simplistic models that treat situations as being equivalent when they are only equivalent in an esoteric mathematical sense, and not in reality.”

The conclusion I draw is that, as has been known in the cognitive psychology literature for quite some time, “Experimental subjects tend to defend incoherent preferences even when they’re really silly.”

To the A-B defenders: Why do you think it’s more likely that economists and mathematicians have seen an equivalence when there is none than that you don’t see an equivalence when there is one, especially given this known tendency to over-justify bad decisions?

The inconsistency almost certainly comes from people’s preferences not being linear in probabilities, which is exactly what Dr. Landsburg claims violates expected utility theory. Thomas P’s point that Q1 makes people “feel rich” is irrelevant: the inconsistency remains no matter the functional form of utility, in which diminishing marginal utility of money and a person’s risk preferences would both be included.

It is far more likely that the best explanation involves the human brain’s well-known poor intuitions when dealing with probability. We like 1% more when it gives us certainty than when it gives us a 10-to-11 bump. But the universe doesn’t care. 1% is 1% is 1%.

Even those who disagree with this comment will, I hope, at least read the first blog entry I linked, titled “Zut Allais!”

Swimmy:

You may have missed Thomas P’s point. Having been given question 1, the expected value of your net worth is now at least $1M higher. In contrast, the EV of 2B is only $0.5M. Some people may consider themselves rich at $1M, but not rich at $0.5M. And a person may consider it much more painful to go from rich to not-rich, than to be not-rich and to lose some money to become less not-rich. In that sense, the questions are not equivalent.

The desire to stay wealthy (with the cutoff for wealthy somewhere between $500k and $1M) can also help explain why I’d answer B/B for amounts $10 and $50, A/B for amounts $1M and $5M, and A/A for amounts $500B and $2.5T.

Thomas P:

I assume, of course, that you would pick B for this choice:

A: An 80% chance for $0; a 20% chance for $1M; a 0% chance for $5M

B: An 80% chance for $0; an 18% chance for $1M; a 2% chance for $5M

But how about for this?

A: An 80% chance for $0; a 20% chance for $1M; a 0% chance for $5M

B: An 80.2% chance for $0; a 17.8% chance for $1M; a 2% chance for $5M

If you wouldn’t still pick B, I’d like to learn your reasoning in light of your recent posts. If you would still pick B, I’d like to learn your response to my previous post. (I believe there is a rational contradiction when preferring B for this question and A for the original Question 1.)

Thanks to everyone who is taking the time to participate in this discussion. I’m learning from all of you.

Thomas B:

No contradiction for a person who finds it extremely painful to go from rich to not-rich, and who considers the dividing line between rich and not-rich to be between $1M and $0.278M.

Thomas P already explained it. I’m not sure how he could explain it any better, but I’ll try briefly. Consider a person who will try to avoid, at all costs, having their net worth drop below $1M, assuming it was already at least $1M. Such a person would choose 1A. But if the same person, who we will assume had a net worth of zero before being given any questions, is offered your question, then his expected net worth is either $0.2M or $0.278M, depending on his choice. So, since he is not rich, he does not have to worry about becoming not-rich. So he may reasonably pick choice B.

Of course, if you define rational in a certain way, then such a person may not fit your definition of rational. But that is semantics. Certainly such a person is behaving in a consistent manner, and arguably, in a reasonable manner.

@Thomas B

I would pick B for both of the above questions as well as room B in your previous post. I don’t see a contradiction as long as we are willing to accept that our preferences depend on more than just probabilities and outcomes. Only the choices in your question 1 have expected values that trigger my desire to stay rich. The expected values from all the choices in your other questions are below that threshold.

ErikR and Thomas P:

For my previous post, all three questions involve a choice between the exact same pair of lottery tickets. Maybe I didn’t make that clear, so I’ll try to restate it:

In room (A) they give away lottery tickets that have:

0% chance for $0; 100% chance for $1M; 0% chance for $5M

In room (B) they give away lottery tickets that have:

1% chance for $0; 89% chance for $1M; 10% chance for $5M

Let’s assume that you prefer the ticket in room (A) over the one in room (B).

You now need to decide which room to enter. After people enter the rooms, some percentage are selected at random to receive one of the room’s tickets, and the percentage of people who receive tickets is the same for both rooms.

It seems to me that a rational decision for a person who prefers the ticket in room (A) over the ticket in room (B) would be to enter room (A).

But, when the percentage of people who receive tickets is 20%, the outcome probabilities are:

entering room (A) results in:

80% chance for $0; 20% chance for $1M; 0% chance for $5M

entering room (B) results in:

80.2% chance for $0; 17.8% chance for $1M; 2% chance for $5M

By preferring to enter room (B), wouldn’t you be saying that you prefer to receive the lottery ticket in room (B) over the lottery ticket in room (A)? And by doing so, wouldn’t that be a contradiction with your established preference for the ticket in room (A)?

One could claim that they prefer ticket A over ticket B, but they prefer a 20% chance to receive ticket B over a 20% chance to receive ticket A. But if someone said that, then I would change the game to one single room, and give them the choice between tickets after they’ve been selected as one of the 20%. They would, however, always pick ticket A in that situation because they prefer ticket A over ticket B, so this would be the same as entering room A instead of room B. But they previously said they prefer room B over room A, so I’d be confused.

Entering room (B) is a preference for the ticket in that room. It appears inconsistent to do that if you have a preference for the ticket in room (A).

Does this make sense, or have I goofed something up?

I think you are still missing the fundamental point that some people’s preferences may depend on their net worth at the time of the choice, and if the questions have different expected values of net worth for the chooser, then the very imposition of the questions has caused the chooser’s net worth to be different in each case, thus affecting the choice made.

Thomas B:

By the way, in your scenarios, I would probably hang around outside the door of the room with the higher expected value, and try to get as many people as possible to agree to pool their prize money with me, and then divide any prizes we won equally among all the people in the pool.

In Steve’s original scenarios, if there was advance warning that I was going to get those opportunities, I would try to buy insurance. With question 1(B), I would look for someone who would take $1M from me if I won the $5M prize, nothing if I won the $1M prize, and pay me $1M if I won nothing. It should be easy to find someone to provide that insurance, since it has an EV of $90K for them — easy money for any large insurer. In question 2(B), I’d try to find an insurer who would take $4.6M if I won the $5M, and pay me $0.4M when I won nothing.
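The insurer’s edge on the Question 1(B) hedge above is a two-line check (the Question 2(B) figure is my own computation from the comment’s numbers, not stated in the comment):

```python
# Question 1(B) hedge: the insurer collects $1M when the holder wins $5M
# (10% chance), pays $1M when the holder wins nothing (1% chance), and
# does nothing when the holder wins $1M (89% chance).
insurer_ev_q1 = 0.10 * 1_000_000 - 0.01 * 1_000_000  # $90K expected profit

# Question 2(B) hedge: collect $4.6M on the $5M win (10% chance),
# pay $0.4M otherwise (90% chance).
insurer_ev_q2 = 0.10 * 4_600_000 - 0.90 * 400_000    # $100K expected profit
```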

ErikR:

“I think you are still missing the fundamental point that some people’s preferences may depend on their net worth at the time of the choice, and if the questions have different expected values of net worth for the chooser, then the very imposition of the questions has caused the chooser’s net worth to be different in each case, thus affecting the choice made.”

I think you are still missing the fundamental point that in the only state of the world where your choice matters (i.e. the state where a red ball is not drawn) your net worth is exactly the same for Question 2 as it is for Question 1.

No, that is not correct. You make your choice before the ball is drawn, and at that time your net worth is much greater for Question 1.

If you still think that these choices are equivalent, then post the proof. I say that no such proof is possible.


Steve:

The expected value of the net worth of a person given the opportunity in question 1, before they choose, is different than the expected value of the net worth of the same person given the opportunity in question 2, before they choose.

I guess I am missing “the state of the world where your choice matters”, since in the state of the world where I find myself, the EVs are different, and that can affect my choice.

ErikR:

“I think you are still missing the fundamental point that some people’s preferences may depend on their net worth at the time of the choice, and if the questions have different expected values of net worth for the chooser, then the very imposition of the questions has caused the chooser’s net worth to be different in each case, thus affecting the choice made.”

I must not be doing a good enough job of explaining this, because it is a very solid example and should clarify the issue. First, for the examples I gave, your net worth at the time of the choice cannot be different for any of the questions. It simply can’t.

Here is one final situation that is identical to the other I proposed. You enter one room, and you have a 20% chance of being offered a choice between ticket A or ticket B. Because you prefer ticket A to ticket B, I assume you will say ticket A. You know that before you enter the room, so you are choosing option (a) over option (b):

(a) 80% chance of $0; 20% chance for $1M; 0% chance for $5M

(b) 80.2% chance of $0; 17.8% chance for $1M; 2% chance for $5M

Isn’t this clear? Preferring ticket A over ticket B compels you to take (a) over (b) here. However, many people somehow say they like A over B, but want to take option (b) over (a). They just can’t do that and claim it is rational.

Thomas B:

The EXPECTED VALUE of your net worth, upon being given the opportunity of one of the questions, is different in the two questions, because the EXPECTED VALUES of the prizes are different: 1A, 1B, 2A, and 2B all have different expected values. I am referring to Steve’s original questions.

In case you do not know what I mean by expected value: it is the weighted average prize money, i.e., the sum of the products of probabilities and prize amounts.

Your questions are not equivalent, since the EVs are different.

If you want to discuss the EV of your questions, please name or number them so we can refer to them.

ErikR:

We seem to be at a stalemate, and probably won’t make much more progress on this issue. But the debate has been fun.

I am familiar with the expected values (in dollars) associated with each option; the EV is always higher for option B. I don’t think that changes your net worth at the time of the question, though.

The issue that you haven’t addressed is the fact that I’ve constructed a scenario for which a person sometimes prefers A to B, but other times prefers B to A, even though the question is always the same: “Which do you prefer: a ticket with the likelihoods (0%, 100%, 0%) or one with likelihoods (1%, 89%, 10%) for winning ($0, $1M, $5M)?” Somehow, people are saying that the answer should be different depending on whether they are asked the question with certainty or have a 20% chance of being asked the question. Again, in both cases, the questions are identical. The only difference is the chance that you will be asked.

By the way, regardless of the chance of being asked the question, the largest expected value is associated with the (1%, 89%, 10%) ticket. It is always (.89*$1M + .1*$5M)x(probability of being offered a ticket). The alternative is ($1M)x(probability of being offered a ticket). Because .89*$1M + .1*$5M = $1.39M > $1M, that ticket always has the highest EV. Always.
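The arithmetic in the paragraph above can be checked in a few lines; this is my own sketch, not the commenter’s. Scaling both EVs by the same probability of being offered a ticket cannot change which one is larger:

```python
# Expected value of a ticket over prizes ($0, $1M, $5M).
def ev(ticket, prizes=(0, 1_000_000, 5_000_000)):
    return sum(p * x for p, x in zip(ticket, prizes))

ticket_a = (0.00, 1.00, 0.00)  # EV = $1.00M
ticket_b = (0.01, 0.89, 0.10)  # EV = $1.39M

for p_offer in (1.0, 0.5, 0.2, 0.001):
    # Both EVs scale by the same factor, so B's EV stays the highest.
    assert p_offer * ev(ticket_b) > p_offer * ev(ticket_a)

print(round(ev(ticket_a)), round(ev(ticket_b)))  # 1000000 1390000
```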

Thomas Bayes: Thanks for your truly excellent exposition throughout this discussion.

Thomas B:

“I don’t think that changes your net worth at the time of the question, though.”

The reason I wondered whether you are familiar with the term “expected value” is that you keep leaving it off your paraphrases of my claim, as you did in the quote above. I did not claim that the net worth was different; I claimed that the expected value of the net worth is different. And as I have repeatedly said, I am talking about the EV of the opportunity presented by each question (specifically, the minimum EV of the two choices for each question), not the difference in EV between choices A and B.

And I have indeed addressed why the answers can be different: the questions are different, because the minimum EVs of the opportunities they present are different.

If you got an opportunity for only one question, either question 1 or question 2, which question would you pick?

Is this an example of the framing effect, where people express different preferences for the same options depending on how they are framed (asked)?

Swimmy – I enjoyed the link.

@Erik R

“If you got an opportunity for only one question, either question 1 or question 2, which question would you pick?”

This is an important distinction.

@Thomas Bayes

“Somehow, people are saying that the answer should be different depending on whether they are asked the question with certainty or have a 20% chance of being asked the question. Again, in both cases, the questions are identical. The only difference is the chance that you will be asked.”

Precisely! Now that we agree on that fact, we can discuss why that difference matters.

“By the way, regardless of the chance of being asked the question, the largest expected value is associated with the (1%, 89%, 10%) ticket. It is always (.89*$1M + .1*$5M)x(probability of being offered a ticket). The alternative is ($1M)x(probability of being offered a ticket). Because .89*$1M + .1*$5M = $1.39M > $1M, that ticket always has the highest EV. Always.”

We agree with each other on the above. The point I am trying to make is that the probability of being offered a ticket matters. As that probability rises, there will come a point where my strategy shifts from trying to maximize the expected value of my outcome (by choosing ticket B) to trying to minimize the chances that my outcome is $0 (by choosing ticket A). The point where my strategy shifts is the point where no matter what option I choose, I feel wealthy simply by being credibly asked the question, or more specifically, where ($1M)x(probability of being offered a ticket) > (my “I am rich” threshold).

Thomas P and ErikR:

You’ve made a compelling case that you would take ticket A if you were expecting to be offered the choice between ticket A and ticket B, but you would take ticket B if you didn’t expect to be asked. (If, for instance, you thought the chance was 20% or smaller.)

By the way, though, what probability do you associate with actually being offered a choice between those two lottery tickets? Seriously. Do you think it will ever happen? Do you think there is a 50% chance? A 20% chance? The chance isn’t 0%, but I’ll bet you believe it is very, very small, right?

So, I assume that if you were offered the choice between ticket A and ticket B, you would take ticket B. That is what you said you would do if you thought the chance of being asked was small, which it is. But didn’t this discussion get started by telling Professor Landsburg that you would take ticket A?

I apologize if I’m coming across snarky. I don’t intend to be. I’ve been learning a lot through this discussion, and I’m pretty sure there is a rational inconsistency associated with selecting 1.A and 2.B for my questions. I think the point in the previous paragraph exposes this, and by highlighting this point I’m hoping to either strengthen or weaken my position. Either way I’ll be a little smarter.

ErikR & Thomas P: Suppose your boss comes by your office today and says to you,

“I have asked the CEO of the company for permission to give you a raise, and he told me that he will issue his decision tomorrow. If I do give you a raise, I’ll give you a choice between compensation scheme A (higher bonus potential) and compensation scheme B (higher base salary). I know you will be on vacation for 2 weeks starting tomorrow, so it’ll be easier if you just tell me now which scheme you prefer – that way I can take care of the paperwork immediately upon the CEO’s decision. Are you willing to tell me now?”

Assume you understand the details of schemes A and B, and you know which one you prefer.

Would you tell the boss which scheme you prefer, or would you tell him that your decision depends on his estimate of the probability that the CEO will approve the raise? It seems to me that your choice between A and B should be independent of that probability. And yet whether your choice should be dependent on that probability is analogous to whether your choice between entering room A and room B (in Thomas B’s examples) should be dependent on the probability of drawing a ticket.

Thomas B:

I do not think you are coming across snarky. Perhaps a bit inattentive, since I have answered some of your questions, while you have not answered my recent question (which question/game would you prefer?). Why not?

I am going to start referring to the opportunity of answering a question and winning a prize as a “game” rather than a “question”, since the terminology seems confusing otherwise. So Steve’s original questions will be “game 1” and “game 2”.

As for your new question, the chance of being offered such a game in real life will not be relevant to a person who avoids becoming not-rich. Once a person is offered a game, then the EV of the net worth of the person has increased by the EV of the game. If the person is offered a 20% chance of getting a lottery ticket, then the EV of the person’s net worth is higher by 20% of the EV of the ticket. But until someone is either offered a game, or offered a well-defined chance of playing a game, there is no need to consider the EV of the game. Why would there be? It only matters when someone is actually playing a game — just before they make their choice.

Jonathan Campbell:

“Would you tell the boss which scheme you prefer, or would you tell him that your decision depends on his estimate of the probability that the CEO will approve the raise?”

Excellent illustration of the whole point. Thanks.

Because I am ‘Thomas Bayes’, I can’t resist.

*IF* my utility allocations depended on the probability that I am asked the question, and if I had to decide before going on vacation, then I would assign a prior that represented my uncertainty about the boss’s decision. I would then maximize the expected utility relative to my prior. I am, after all, a Bayesian.

ErikR: Sorry that I missed your earlier question. I would always prefer the first question. I would much rather be offered the choice between tickets A and B with certainty, than to have some reduced chance of being asked. And I would always take ticket B.

Jonathan Campbell:

You have not given enough information for an answer to your question. What are the details of the two raise options? What is the current salary?

You have also complicated the situation by discussing salary (recurring), rather than lump sum amounts.

Also, I do not think you are thinking of EV the same way as I am. If I need to make a monetary decision when outcomes are uncertain, then one thing I will sometimes do is compute EVs for the situation. And I can always compute an estimated EV, although I may sometimes need to estimate some unknown quantities. In the raise scenario, I would estimate the probability of getting the raise, using all information available to me, whether or not the boss gives me an estimate. But that is me. I think many people would just have an intuitive feel for the situation, and it could make a difference to their decision whether the possible raise situation will make them feel rich, and whether not getting a bonus would make them feel not-rich.

If we remove the 89% from Q2, it is clear that A / B is inconsistent. The question that remains – is it valid to remove the 89% common to both? Is it the same question?

Perhaps it will help if we start without the 89%. You have Q2:

A) 11 black balls with $1M

B) 10 black balls with $5M and 1 white ball with 0.

Now, from the first question, if you chose A, I think you would have to choose A again.

What if we add 1 ball?

A) 10 black balls with $1M and 1 red ball with 0

B) 10 black balls with $5M and 1 white ball with 0 and 1 red ball with 0

There is no certainty in either case, but I think if you want to go for the safer bet, you would pick A again. If you picked the “red” zero (the added one), it is clear that this would have been the same in either case. Are we justified in ignoring the added ball in our decision-making process? It makes no difference at all to the relative chances of picking the white over the black.

If you answer A/A with one ball added, it is difficult to logically justify answering differently with more than one. Where does the switch-over point lie? Adding one extra ball must at some point tip the balance.

@Jonathan Campbell

“Assume you understand the details of schemes A and B, and you know which one you prefer. ”

The scheme that I prefer may depend on whether or not I think I’m rich.

“Would you tell the boss which scheme you prefer, or would you tell him that your decision depends on his estimate of the probability that the CEO will approve the raise?”

If scheme A and B would both make me feel rich given a 100% chance for a raise, and one scheme maximizes expected value while the other minimizes the risk of a $0 payout, and assuming that I trust my boss’ assessment of the probability that my raise would be approved, then yes, my choice is going to depend on the probability that my raise is approved.

ErikR: In my view, no matter the details of the two options, or the current compensation, it is irrational to care what the probability of the raise is. Thus, you can, as you please, supply the details.

I don’t think the situation is complicated by using recurring payments rather than lump sums. The underlying principle we are discussing is very general: it suggests that when choosing between any things A and B in life, it should not matter what the probability is that your choice will be honored.

Let me ask the question in one more way: Suppose that you’ve decided that if the probability (P) of a raise is high (let’s say higher than 60%), you’ll choose scheme A, and if P is low, you’ll choose scheme B. Suppose that your estimate of P is low, so you choose scheme B. One of two things will happen:

1) Your raise is approved, in which case you will regret choosing scheme B (since, now that P = 100%, you prefer scheme A)

2) Your raise is not approved, so your choice was irrelevant

Notice that no matter what happens, you will either regret your decision, or your decision will be irrelevant. How can a decision that is destined to be either regretted or irrelevant be a good decision?

harold:

I’ll call your first game H1, and your second game H2. In both games, the EV of choice A is lower than the EV of choice B.

Consider a person, Richie, who will always try to avoid becoming not-rich if he already feels that he is rich. Further, assume that Richie considers himself rich if he has $1M or more, and that Richie’s net worth is zero before he is offered one of your games.

If Richie is offered game H1, then the EV of his net worth is now at least $1M, since the EV of H1A is $1M. So Richie feels Rich, and he will not risk H1B, which allows a possibility of his net worth dropping to zero, which is below his $1M threshold.

If Richie is offered game H2, then the EV of his net worth is only $0.91M if he chooses H2A. That is not enough to trigger his richness feeling. If he chooses H2B, then the EV of his net worth is $4.2M, which is enough to trigger his richness feeling, but in this case he would be risking becoming not-rich. But since with H2A he will always be not-rich, then no matter what he chooses, he risks becoming not-rich. So Richie would make his decision on another basis, such as the maximum EV choice, H2B.

This is consistent behavior, and I think it is reasonable. Whether it is rational depends on how you choose to define rational.
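ErikR’s “Richie” rule can be put in code. The sketch below is a simplified model of my own (the names and structure are hypothetical; only the $1M threshold and the game probabilities come from the comments above): Richie takes the safe option when it alone guarantees he stays rich, and otherwise falls back to maximizing EV.

```python
RICH_THRESHOLD = 1_000_000  # Richie's richness threshold, per the comment above

def ev(ticket, prizes=(0, 1_000_000, 5_000_000)):
    return sum(p * x for p, x in zip(ticket, prizes))

def richie_chooses(choice_a, choice_b):
    """Simplified rule: if A has no chance of a $0 outcome and its EV
    clears the threshold, take A; otherwise maximize expected value."""
    if choice_a[0] == 0 and ev(choice_a) >= RICH_THRESHOLD:
        return 'A'
    return 'A' if ev(choice_a) >= ev(choice_b) else 'B'

# harold's games as (P($0), P($1M), P($5M)) triples:
h1 = richie_chooses((0, 1, 0), (1/11, 0, 10/11))         # game H1
h2 = richie_chooses((1/11, 10/11, 0), (2/12, 0, 10/12))  # game H2
print(h1, h2)  # A B
```

The rule reproduces the behavior described: A in H1, B in H2, each consistently derived from the same threshold.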

Thomas B:

You prefer the first game to the second game, so there must be a difference. The difference is that you are better off (relative to the second game) the moment you are offered the opportunity to play the first game. That is what I mean when I say that the EV of the person’s net worth is affected by being offered game 1 or game 2 — the EV of the net worth is higher when offered the first game. And some people may make their decisions based on the EV of their net worth at the time of the decision. I gather that the EV of your net worth at the time of your decision is not something that influences your decision. But I have heard no basis to claim that no reasonable person can be influenced by that information.

Jonathan:

I think that you are inserting your personal preferences and defining them as rational. You say you are referring to a general principle, but I am referring to a specific type of person — someone whose decisions depend upon the expected value of their net worth at the time that they make their decision. You may, of course, define someone who makes decisions on that basis as irrational, but that is a judgment, not a proof. I think it is a reasonable way to make a decision.

@Jonathan Campbell

“Notice that no matter what happens, you will either regret your decision, or your decision will be irrelevant. How can a decision that is destined to be either regretted or irrelevant be a good decision?”

While that is certainly an interesting property of my decision making process here, I don’t think it proves that it is nonsensical. People come to regret rational decisions they made all the time. If P is low and I choose scheme A instead of B, I’d feel like I’m incurring an opportunity cost for the duration of the period between making my decision and when my raise is approved or rejected.

I’m struggling to think of another example where people make decisions destined to be irrelevant or regretted. The first thing that comes to mind are people who lock their own refrigerators to prevent late night snacking. This isn’t really a great example to make my case, but perhaps I or someone else around here will come up with a better example later.

ErikR:

“I gather that the EV of your net worth at the time of your decision is not something that influences your decision. But I have heard no basis to claim that no reasonable person can be influenced by that information.”

I’ll try one more time, but I don’t think I can say anything different. Perhaps we can close with a specific agreement on our disagreement. Here is the scenario:

I’m going to flip a coin three times. If the coin shows ‘heads’ every time, then I’ll offer you the choice between one of the two lottery tickets:

A: (0%, 100%, 0%) for ($0, $1M, $5M)

B: (1%, 89%, 10%) for ($0, $1M, $5M)

If you plan to pick A, then your EV is:

($1M)*(0.125) = $125K

If you plan to pick B, then your EV is:

($1.39M)*(0.125) = $173.75K

Because this is a low enough EV, you commit to pick B because of the chance to win $5M. You’ve said this many times.

Now I flip the coins and get three heads. Much to your surprise, I then say, “ErikR, is ticket B your final answer? You can change to ticket A if you’d prefer.” At this point, the EV is $1.39M if you stick with B, and it is $1M if you change to A. You’ve said many times that with these relatively large EVs you will take the $1M and forgo the risk of losing it all. So, you will say “Thomas, I’d like to change my choice to ticket A.”

Do you disagree with anything in the two preceding paragraphs? I don’t see how you can, because I’ve based your actions on your stated preferences.

This, then, is the basis for my claim that no reasonable person who prefers ticket A over ticket B can let the probability of being asked the question affect their answer. Because they will always change their answer to ticket A if they are given the chance to do so at the time they are asked. Knowing that, why would they ever say B to begin with? I’ve put the argument in the context of EV, so, if you counter with an argument based on EVs, please be specific about where my analysis is faulty. I hope we can close this discussion with some agreement about where, specifically, we disagree.

(This example is a restatement of Jonathan’s example.)
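Working the coin-flip numbers directly (a sketch of mine, not the commenter’s): before the flips each ticket’s EV is scaled by 1/8, and after three heads the scaling disappears, but the ordering of the two EVs never changes.

```python
def ev(ticket, prizes=(0, 1_000_000, 5_000_000)):
    return sum(p * x for p, x in zip(ticket, prizes))

ticket_a = (0.00, 1.00, 0.00)
ticket_b = (0.01, 0.89, 0.10)

p_three_heads = 0.5 ** 3  # 1/8 chance the offer happens at all

# EV before the flips (committing in advance):
print(round(p_three_heads * ev(ticket_a)))  # 125000
print(round(p_three_heads * ev(ticket_b)))  # 173750
# EV after three heads (the offer is now certain):
print(round(ev(ticket_a)), round(ev(ticket_b)))  # 1000000 1390000
```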

Most of the discussion has centered on the expected value of the choices. I wonder though, is this really a valid measure to use? For example, given the choices:

A) 100% chance of $1million

B) 25% chance of $5million

Choice B clearly has the greater EV ($1.25m v $1m). Even knowing this I would choose A. Why? The EV gives me the average return over a large sampling but I only get _one_ random sample. If I choose B I don’t actually get $1.25m I get either $0 or $5m. And the odds are it will be $0.

So, my question is, is EV the right tool to use in this case even though we’re only talking about a single sample? If not, what is?

“I’m going to flip a coin three times. If the coin shows ‘heads’ every time, then I’ll offer you the choice between one of the two lottery tickets:”

I stopped reading right there. My choice occurs AFTER the coin flips. So the coin flips are irrelevant to my choice.

Stephen:

I myself have recently been talking about EV, but comparing the minimum EV of the games, not just the EV of the choices.

I think you will find that the arguments most people are making for choosing A/A or B/B but not A/B do not depend on the EV of the choices. They are saying that you can use whatever basis you like for choosing between A and B, but regardless of what basis you use to make the choice, a certain type of person labeled “rational” should choose A/A or B/B for both games.

Thomas Purzycki:

Re: whether a decision that is destined to be either regretted or irrelevant could be a good decision

You raise 2 objections:

1) You suggest that there is opportunity cost incurred (assuming a low P, and a choice of A) between the time of your decision and the time of your CEO’s decision. So if I change the problem so that the CEO decides instantaneously after you state your preference, does that eliminate this problem?

2) You put forth the refrigerator example. This seems to me to be a psychological effect, which results from your preferences actually changing over time: early in the day, you prefer abstinence from snacking; later, you prefer to snack. Of course, there are many examples in life where people’s actual preferences change, and those do provide an example of how a decision could be destined to be regretted but good. But I don’t think those examples relate to the original problem Steve posed. So I’ll modify the question to be:

“How can a decision that is destined to be either regretted or irrelevant be a good decision, assuming no changes in preferences?”

ErikR: Your working through of my “one extra ball” example is useful to me, thanks. If you have the preferences you state, then the choices seem rational to me. However, you are denying Claim Four: The desirability of a lottery depends only on the prizes and the probabilities of winning. This is one possible way of tweaking the axioms. However, I think that the discussions with Thomas B and others indicate it would be difficult to state your axiom in a consistent way. The introduction of the probability of participating in the game poses a difficult problem, I think.

Stephen Coy: It doesn’t matter what the odds are – it is only inconsistencies between A and B that matter. If you set Q1 and Q2 with your odds, then I think most people would answer A/A. It only becomes an issue if some people answer A/B.

Harold:

–

“. . . However, you are denying Claim Four: The desirability of a lottery depends only on the prizes and the probabilities of winning. This is one possible way of tweaking the axioms. However, I think that the discussions with Thomas B and others indicates it would be difficult to state your axiom in a consistent way. The introduction of the probability of participating in the game poses a difficult problem, I think.”

–

Well put.

Stephen Coy:

“Most of the discussion has centered on the expected value of the choices.”

In fact, except for a few irrelevant asides, almost *none* of the conversation has anything to do with the EV of the choices. Did you read any of the discussion before posting this?

“So, my question is, is EV the right tool to use in this case even though we’re only talking about a single sample?”

Of course not. No sensible person would claim such a thing.

“If not, what is?”

This is like asking what’s the “right tool” for choosing between a cat and a dog. Some people like cats; others like dogs. The only “right tool” is to be guided by your preferences.

Stephen:

Your question can be used to illustrate the issue behind the debate in this way:

Q1: Which do you prefer?

A. 100% chance for $1M (EV = $1M)

B. 25% chance for $5M (EV = $1.25M)

Q2: Which do you prefer?

A. 5% chance for $1M (EV = $50K)

B. 1.25% chance for $5M (EV = $62.5K)

It can be perfectly rational to choose A for Q1 despite its lower EV. The issue arises when people say they would take A for Q1, but B for Q2. Their logic is typically based on the fact that you shouldn’t walk away from $1M in Q1, but the chances of winning nothing are so high for Q2 that it’s worth taking B and going for the $5M. But Q2 is EXACTLY the situation that arises if you have a 5% chance of being asked Question 1. So, in light of selecting A for Q1, you have to justify why it is that you would answer B to this question:

“Next Tuesday we will be drawing a name at random from a list of 20 people, and this list includes your name. If your name is drawn, you will receive one of the options from Q1. We need to know now which option you will prefer so we will be able to bring the ticket to your home if you win. Which will it be?”

As far as I can tell, no one that selects A for Q1 and B for Q2 can explain how to resolve their paradox.

(I think I’ve chewed all of the flavor from this gum.)

@Jonathan Campbell

“1) You suggest that there is opportunity cost incurred (assuming a low P, and a choice of A) between the time of your decision and the time of your CEO’s decision. So if I change the problem so that the CEO decides instantaneously after you state your preference, does that eliminate this problem?”

Touché.

“How can a decision that is destined to be either regretted or irrelevant be a good decision, assuming no changes in preferences?”

I think this will actually clarify my position. I do not assume no change in preferences. My preferences change like so: when it comes time to make my decision, if the expected value of at least one of my options is below a certain threshold (for me somewhere between $500k and $1M), the utilities provided by my options can be approximated by their expected values and I maximize my utility. When both of my options have expected values that exceed the threshold, I award bonus utility points to the option that minimizes the risk of receiving a $0 payout and once again choose the option that maximizes my utility.

Going back to the original questions 1 and 2, here are sample utilities that can be assigned to the options to get my result:

1a) EV((0%, 100%, 0%) for ($0, $1M, $5M))

= 1,000,000 + 500,000 (bonus utils because above conditions are satisfied)

= 1,500,000 Utils

1b) EV((1%, 89%, 10%) for ($0, $1M, $5M))

= 1,390,000 Utils

2a) EV((89%, 11%, 0%) for ($0, $1M, $5M))

= 110,000 Utils

2b) EV((90%, 0%, 10%) for ($0, $1M, $5M))

= 500,000 Utils

At this point, it should be clear how I arrive at my decisions for these questions. As far as I can tell, we are pretty much on the same page. The only question remaining is if we can agree whether or not a reasonable person can assign bonus utility points to an option in this fashion. Obviously, I say yes if for no other reason than that there is no accounting for tastes.
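The “bonus utils” rule described above can be written down directly. The sketch below is mine (I use the commenter’s rough numbers: a $1M threshold and a 500,000-util bonus), and it reproduces the 1A/2B choices:

```python
THRESHOLD = 1_000_000  # "feel rich" threshold from the comment
BONUS = 500_000        # bonus utils for the safer option

def ev(ticket, prizes=(0, 1_000_000, 5_000_000)):
    return sum(p * x for p, x in zip(ticket, prizes))

def utility(ticket, other):
    """EV, plus a bonus when both options clear the threshold and this
    ticket has the smaller chance of a $0 payout."""
    u = ev(ticket)
    if ev(ticket) >= THRESHOLD and ev(other) >= THRESHOLD and ticket[0] < other[0]:
        u += BONUS
    return u

q1a, q1b = (0.00, 1.00, 0.00), (0.01, 0.89, 0.10)
q2a, q2b = (0.89, 0.11, 0.00), (0.90, 0.00, 0.10)

print(utility(q1a, q1b) > utility(q1b, q1a))  # True: choose 1A
print(utility(q2b, q2a) > utility(q2a, q2b))  # True: choose 2B
```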

Thomas Bayes:

“(I think I’ve chewed all of the flavor from this gum.)”

I’ll have one more post on this paradox that might restore a tiny bit of flavor. And I have a whole separate paradox I’m holding in reserve for sometime a few weeks down the line.

“The only question remaining is if we can agree whether or not a reasonable person can assign bonus utility points to an option in this fashion. Obviously, I say yes if for no other reason than that there is no accounting for tastes.”

I agree, obviously.

But I think a more interesting question, for those who claim that your criterion is not reasonable, is whether they think that all or most of the people who do indeed choose A/B are unreasonable, or whether there is some other reasonable basis that they are using to make their decisions.

If the answer is that most of the A/B people are making their choice based on the ignorance criteria Steve mentioned in this post, then I would like to hear some justifications for this assertion.

Thomas Purzycki:

–

“At this point, it should be clear how I arrive at my decisions for these questions.”

–

Thank you.

Let’s try with these three lotteries:

A: (0%, 100%, 0%) for ($0, $1M, $5M); EV = $1M

B: (5%, 85%, 10%) for ($0, $1M, $5M); EV = $1.35M

C: (10%, 70%, 20%) for ($0, $1M, $5M); EV = $1.7M

All have an EV above your threshold, so you’ll add a 500K bonus to the one that minimizes the risk of receiving $0 in any comparison.

Q1: Do you prefer A or B?

You’ll add a 500K to EV for A because the risk for $0 is smallest. 1.5M > 1.35M, so you’ll prefer A.

Q2: Do you prefer B or C?

You’ll add 500K to EV for B because the risk for $0 is smallest. 1.85M > 1.7M, so you’ll prefer B.

Q3: Do you prefer A or C?

You’ll add 500K to EV for A because the risk for $0 is smallest. 1.5M < 1.7M, so you'll prefer C.

In summary, A will have more utility than B, B will have more utility than C, but C will have more utility than A. Are you comfortable with this? Does this meet your expectations for what it means to assign a utility to a lottery? Does this meet your expectations for a rational decision process?
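The cycle can be verified mechanically. Here is a sketch (mine, not Thomas Bayes’s) of the pairwise rule applied to the three lotteries above:

```python
THRESHOLD = 1_000_000
BONUS = 500_000

def ev(t, prizes=(0, 1_000_000, 5_000_000)):
    return sum(p * x for p, x in zip(t, prizes))

def prefer(x, y):
    """Pairwise choice: EV plus a 500K bonus to the option with the
    smaller chance of $0, whenever both EVs clear the threshold."""
    ux, uy = ev(x), ev(y)
    if ux >= THRESHOLD and uy >= THRESHOLD:
        if x[0] < y[0]:
            ux += BONUS
        elif y[0] < x[0]:
            uy += BONUS
    return 'first' if ux > uy else 'second'

A = (0.00, 1.00, 0.00)  # EV $1.00M
B = (0.05, 0.85, 0.10)  # EV $1.35M
C = (0.10, 0.70, 0.20)  # EV $1.70M

# A beats B, B beats C, yet C beats A: transitivity fails.
print(prefer(A, B), prefer(B, C), prefer(A, C))  # first first second
```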

Rather than do this exercise for every possible decision process, we could agree up front on a set of characteristics for rational decisions about lotteries. We could write down a set of axioms, and then test our decision processes against the axioms. Better yet, we could see if the axioms induce a method for us to make decisions. This has been done for some reasonable axioms, and the resulting method is expected utility. Very close to what you are doing, but it doesn't allow for the algorithmic inclusion of bonus points in certain circumstances. Doing that will violate one or more of the axioms. With this example, I demonstrated that your method violates an axiom called 'Transitivity'. The previous example I used showed that it also violates one called 'Independence'.

Is it okay to violate those axioms? Sure. That's up to you. But when your method does violate those axioms, then many people will say the method is not rational because they use the axioms as their test for rationality. That is all that many of us have been doing.

Steve Landsburg:

I’m looking forward to the new gum. My jaws are tired, but I think I understand gum a little better now.

Cheers!

@Thomas Bayes

Good example. You’ve shown me that the sample utility function I gave will break transitivity. The sample used a constant function to assign bonus utility to options that minimize my chance of becoming not rich given that I already felt rich. Can that bonus utility function be modified such that it does not break the transitivity axiom and only breaks an axiom in the same way a preference for ignorance would? If it can, I would be comfortable with my choices in the same way I am comfortable with the ignorance preference explanation. If the bonus utility function can not be modified to achieve this, or if the necessary modifications would force it to look unreasonable to me, then I would be forced to reevaluate my position. Unfortunately, I do not possess the ability to pursue this any further by trying to prove that one way or another.