What Is It Like to Talk Batty?

Sometimes I think we should license economics writers.

Thomas Nagel is a prominent philosopher (author of the provocative and widely anthologized essay “What Is It Like to Be a Bat?”) who’s just reviewed Daniel Kahneman’s new (and excellent) book in The New Republic. (Fun fact: When I stepped off an airplane at Heathrow last week, the first thing I saw was a limousine driver holding a sign that said “Daniel Kahneman”. This, incidentally, was my final issue of The New Republic, due to their criminally evil subscription practices — more on that, perhaps, later this week.) Here is how Nagel describes what he seems to think is orthodox economic theory:

Most choices, and all economic choices, involve some uncertainty about their outcomes, and rational expectations theory, also called expected utility theory, describes a uniform standard for determining the rationality of choices under uncertainty…

The standard seems self-evident: The value of a 50 percent probability of an outcome is half the value the outcome would have if it actually occurred, and in general the value of any choice under uncertainty is dependent on the values of its possible outcomes, multiplied by their probabilities. Rationality in decision consists in making choices on the basis of “expected value”, which means picking the alternative that maximizes the sum of the products of utility and probability for all the possible outcomes. So a 10 percent chance of $1000 is better than a 50 percent chance of $150, an 80 percent chance of $100 plus a 20 percent chance of $10 is better than a 100 percent chance of $80, and so forth.

AAAAGGGHHH! Even on the Internet, it’s rare to see quite so much ignorance packed into so few words. Where to begin?

First, “rational expectations theory” and “expected utility theory” are not two names for the same thing; they are two names for entirely different things. Rational expectations is a theory of equilibrium (i.e. the outcomes that you get when decision-makers interact); expected utility is a theory of optimization (i.e. the choices made by an individual decision-maker). When Nagel throws around probabilities of 10, 50 or 80 percent, expected utility theory (like Nagel) takes those probabilities as given; rational expectations theory (which has nothing whatsoever to do with anything Nagel is talking about) tries to explain where those probabilities come from.

But far far far more importantly, Nagel’s account of expected utility theory is not just wrong but ridiculously naive. Who is to say that a 10 percent chance of $1000 is better than a 50 percent chance of $150? You might prefer one and your cousin Jeter might prefer the other. Orthodox economics pronounces neither of you irrational.

What Nagel is computing is not an expected utility, but an expected value. You’d have to be an extraordinarily dull observer of human nature to think that people routinely act to maximize expected value, and an extraordinarily dull student of economics to think that economists would ever make such a silly claim.

Indeed, the entire field of finance consists of studying the ways in which people trade off expected value against various measures of risk. To assert that only expected value matters is to assert that the entire field (along with much of the rest of economics) reduces to a triviality, that “risk management” is a pointless activity, and that insurance markets don’t exist.

What economists do believe (or act as if they believe) is that people maximize expected utility, which is a different thing altogether. To assess the value of a 50% chance of $100, we multiply 50% times a number called the utility of $100, and we predict that when people are confronted with multiple choices, they choose the one that maximizes their expected utility. This is a far better theory than Nagel’s bastardized version, for several reasons. First of all, it’s not an assumption; it’s a conclusion. We start with some very simple axioms about human preferences and deduce that people behave so as to maximize expected utility. Second, there’s a vast body of empirical work that’s largely compatible with this theory’s predictions.
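To see the difference in action, here is a minimal sketch (in Python; the square-root utility function is just one illustrative choice among many) of how an expected-utility maximizer can rank one of Nagel’s own gambles exactly backwards from an expected-value maximizer:

```python
import math

def expected_value(lottery):
    """Expected dollar value: sum of payoff times probability."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u):
    """Expected utility: sum of u(payoff) times probability."""
    return sum(p * u(x) for p, x in lottery)

u = math.sqrt  # one risk-averse utility function among many

gamble  = [(0.8, 100), (0.2, 10)]  # Nagel's 80% of $100 plus 20% of $10
sure_80 = [(1.0, 80)]              # his 100 percent chance of $80

print(expected_value(gamble), expected_value(sure_80))  # 82.0 vs 80.0
print(expected_utility(gamble, u), expected_utility(sure_80, u))
# about 8.63 vs about 8.94: this agent takes the sure $80
```

With this (risk-averse) utility function, the sure $80 beats the gamble even though the gamble has the higher expected value; orthodox theory calls neither choice irrational.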

One of the most widespread misconceptions among non-economists is that these “utility” numbers represent some subjective measure of well-being. (Because it fosters this misconception, the word “utility” was probably a poor choice from the get-go, but we’re stuck with it.) All the relevant theorem says is that people act as if they assigned a number to each possible outcome (e.g. $100 is assigned a 2, $1000 is assigned a 3, etc.) and then act as if their choices were based on these numbers, multiplied by probabilities à la Nagel (so that a 50% chance of $1000 is assigned a 1.5, making it inferior to a sure $100). The numbers, according to the theory, are different for different people, which is why, confronted with identical circumstances, different people make different choices.

(Has Nagel never noticed that when confronted with identical circumstances, different people make different choices? Has he not noticed that his theory predicts otherwise? Or did he think that this phenomenon had somehow gone unnoticed by the entire economics profession?)

Kahneman’s book is a wonderful survey of the ways in which orthodox economic theory can fail. It’s important and surprising work. But if orthodox economic theory looked anything like Nagel’s caricature, then its failure would be no surprise and hardly worth writing about.


40 Responses to “What Is It Like to Talk Batty?”


  1. Ted

    But the greatest advantage of the expected utility idea is that by multiplying expected value by an unknown function we can clothe the whole process in obscurity, and pretend more or less whatever we damn feel like is rational.

  2. Keshav Srinivasan

    Steve, Nagel may be confused by the fact that utility is often measured in dollars.

  3. theobot1000

    Prof. L.- you write “Second, there’s a vast body of empirical work that’s largely compatible with this theory’s predictions.”

    Would you please point us to some of the really striking papers / books / or essays in this regard? I think your readers would be most interested in those that you, personally, find / found particularly compelling.

    @Keshav- Utility isn’t measured in dollars…which is the point of the post. What utility does is rank outcomes according to some mapping, so what we are interested in isn’t $100 or $1000 but rather u(100) or u(1000).

    This leads naturally into more interesting conversations about “useful” or “tractable” choices for u(.), risk preferences, etc. etc.

  4. nobody.really

    Cool! I’ve never had a strong grasp on utility theory.

    1. Does utility always have a positive correlation with value? I could imagine that some people would rather win $665 than win $666.

    2. On subjectivity:

    One of the most widespread misconceptions among non-economists is that these “utility” numbers represent some subjective measure of well-being…. All the relevant theorem says is that people act as if they assigned a number to each possible outcome … and then act as if their choices were based on these numbers, multiplied by probabilities a la Nagel (so that a 50% chance of $1000 is assigned a 1.5, making it inferior to a sure $100). The numbers, according to the theory, are different for different people….

    On what do people base the numbers that correspond to utility, if not on “subjective measure of well-being”? Can we think of an example that would distinguish between these two phenomena?

  5. Ken B

    @Steve: I think this: “expected utility is a theory of optimization” skips too many steps. As I understand it (You will I’m sure correct me if I am wrong) utility theory is ultimately based on risk aversion, and generalized from there, sometimes all the way to cardinal and comparable utility. This is really pretty abstract and indirect. It makes it easy for misconceptions to grow and flourish. A solid foundation makes it easier to dispel misconceptions. I’d like to see you take a stab at explaining it from the ground up.

    I expect this would answer nobody.really’s sensible request for clarification.

  6. Ken B

    “1. Does utility always have a positive correlation with value? I could imagine that some people would rather win $665 than win $666. ”

    This is a devilishly clever example. However I’d say that those people feel they are receiving *two* things, the money, and a signal. If you converted the amounts to another denomination, say two different annuities whose current values are 666 and 665, but don’t give the price, the effect would vanish. If that is the case, then the effect is not a failure of monotonicity, it is a failure to mention a usually irrelevant instrumental variable.

  7. Doc Merlin

    @ nobody.really
    Utility /is/ value. The 665 versus 666 dollars in your example isn’t value; it’s a comparison of two bundles of goods.
    In economics, value exists only in the mind!

    “On what do people base the numbers that correspond to utility, if not on “subjective measure of well-being”? Can we think of an example that would distinguish between these two phenomena?”

    Numbers do not correspond to utility. Utility is an ordinal measure not a cardinal one. For example, we can say that x is preferred to y but we can’t say that it’s preferred by 3 utility units, because there is no such thing.

  8. Ken B

    Keshav: “Steve, Nagel may be confused by the fact utility is often measured in dollars.”

    theobot1000: @Keshav- Utility isn’t measured in dollars”

    It often is though. It is also often measured in milk or walnuts. Since utility is ultimately tied to which preference you’d display faced with different choices (since it is a ranking of choices) you can (under reasonable assumptions) convert it into pretty much any measure. But you cannot do it with simple linear functions as Nagel does.
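    (A little sketch of what I mean, with an invented utility function: any positive affine rescaling of utility leaves every expected-utility comparison untouched, so the choice of units is harmless; Nagel’s identification of utility with raw dollars is a genuinely different, and much stronger, assumption.)

    ```python
    import math

    # Sketch: an expected-utility ranking survives any positive affine
    # rescaling of u. The utility function and lotteries are invented
    # for illustration.
    def eu(lottery, u):
        return sum(p * u(x) for p, x in lottery)

    a = [(0.10, 1000), (0.90, 0)]  # 10% chance of $1000
    b = [(0.50, 150), (0.50, 0)]   # 50% chance of $150

    u1 = math.sqrt
    u2 = lambda x: 7 + 3 * math.sqrt(x)  # same preferences, new "units"

    print(eu(a, u1) > eu(b, u1))  # False: this agent prefers b...
    print(eu(a, u2) > eu(b, u2))  # False: ...under either rescaling
    ```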

  9. JohnE

    @Ted

    I think you may have beaten Nagel’s ignorance-per-word quotient.

    @nobody.really

    1. Expected utility theory at its most basic does not assume that the utility function u is monotonic. Most applications however do add this as an additional assumption. This is not very controversial.

    2. Economists don’t really think people have these numbers in their head when making decisions. What economists do think is that if someone satisfies the expected utility axioms, then they behave AS IF they do. However, if you accept the expected utility axioms, then it is easy for you to come up with these AS-IF numbers for yourself. How? Imagine a coin with a bias p (that is, the coin comes up heads with probability p). Imagine you get $1,000,000 if the coin lands heads and $0 otherwise. To calculate u($1,000), ask yourself: “For what value of p would I be exactly indifferent between the coin toss and getting $1,000 for sure?” Call that p_1000. For any value x between 0 and 1,000,000, you can get a similar number p_x. Then u(x)=p_x. I’ll leave it to you to figure out if this does or does not represent a measure of your well-being.
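    (If it helps, here’s that procedure as a few lines of Python; the indifference probabilities are of course made up for illustration.)

    ```python
    # Sketch of the elicitation above: u(x) = p_x, where p_x is the bias
    # at which you are indifferent between $x for sure and a p_x chance
    # at $1,000,000. These indifference probabilities are invented.
    indifference_p = {0: 0.00, 1_000: 0.15, 10_000: 0.40, 1_000_000: 1.00}

    def u(x):
        # Utility normalized so that u($0) = 0 and u($1,000,000) = 1.
        return indifference_p[x]

    # This agent's expected utility of a 50% shot at $10,000,
    # versus $1,000 for sure:
    print(0.5 * u(10_000) + 0.5 * u(0))  # 0.2
    print(u(1_000))                      # 0.15 -> the gamble wins here
    ```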

  10. JohnE

    @Ken B and Doc Merlin

    Expected utility is more than ordinal. It is actually cardinal. In expected utility theory, there is behavioral meaning to statements like “I prefer x over y just as much as I prefer y over z.”

  11. Ken B

    @JohnE: If you read my remarks carefully you will discover two things:

    1. Nothing I said requires utility to be cardinal
    2. Nothing I said forbids utility to be cardinal

    I was very carefully evasive. Even when I state that utility is a ranking of choices I was careful not to state what kind of ranking. The only time I used the qualifier ‘expected’ was when quoting SL.

  12. nobody.really

    Imagine a coin with a bias p (that is, the coin comes up heads p% of the time). Imagine you get $1,000,000 if the coin lands heads and $0 otherwise. To calculate u($1,000), ask yourself: “For what value of p would I be exactly indifferent between the coin toss and getting $1,000 for sure?” Call that p_1000. For any value x between 0 and 1,000,000, you can get a similar number p_x. Then u(x)=p_x. I’ll leave it to you to figure out if this does or does not represent a measure of your well-being.

    Cool; then I figure it does represent a measure of my well-being. And I’m still left wondering what leads Landsburg to the opposite conclusion.

  13. JohnE

    @Ken B

    I was only replying to this quote of yours:

    “utility theory is ultimately based on risk aversion, and generalized from there, sometimes all the way to cardinal and comparable utility.”

    I assumed you were talking about expected utility theory since you had quoted Steve. Thus my correction since expected utility theory is not “generalized” to include cardinal utility. However if you meant to talk about utility theory more generally then the above quote of yours makes even less sense to me.

  14. Ken B

    JohnE:
    Generalized is the wrong word for sure, as cardinal utility is of course actually a special case of the more general notion. So let me try again.

    I meant utility theory is built up from the base definition in terms of risk aversion. You can *define* utility in terms of risk aversion and lotteries. That is how I saw it done when I learnt it. Then you can add postulates — axioms — to that definition. Add the right ones and you get the kind of slice&dice cardinal utility some theories require. To get ‘expected utility’ out of sums and products you need to be able to slice&dice.

    In other words, substitute “specialize” for its antonym “generalize” in my sentence above.

  15. Roger Schlafly

    You are criticizing a book review, so it is not clear whether Nagel or Kahneman should be blamed for not making these terminological distinctions. Maybe the book review is just reflecting what the book says.

  16. Steve Landsburg

    nobody.really:

    Cool; then I figure it does represent a measure of my well-being. And I’m still left wondering what leads Landsburg to the opposite conclusion.

    Maybe it does; maybe it doesn’t, but the theory certainly doesn’t require it to.

  17. Rowan

    I was curious to know what you thought of Thinking, Fast & Slow (because I was pretty sure you would have read it), so I guess I’m grateful that there was a bad review of it! I’d love to see a post with more of your views on the book.

  18. Ken B

    @nobody.really: it’s a theory about preferences, and how they shape behaviour. Whether those preferences reflect your well-being is another matter.

  19. Leo

    But I thought what you’re supposed to do to deal with uncertainty is to transform into the equivalent martingale measure so that value is utility under that measure…

  20. nobody.really

    @Ken B.

    This is a devilishly clever example.

    Ha!

    To be more specific, I’d be interested in getting people’s thoughts:

    1. Does utility reflect something subjective — that is, derived from the individual appraising the situation rather than from the situation itself? (Or, in a deterministic universe, does the distinction between the observer and the observed lose its relevance?)

    2. What distortions arise from characterizing the magnitude of utility as reflecting a forecast of a situation’s propensity to promote wellbeing?

  21. Finesse Cool

    I don’t think that I’ve ever seen Steve Landsburg so disappointed.

    It may be worth remarking, though, that it’s typical for the laity to use some nomenclature interchangeably (regardless of the academic discipline, or vocation), and inadvertently step on one of the cognoscenti’s nerves. In that way, I wouldn’t come down so harshly on someone who is obviously an intelligent man, but lacks a fundamental knowledge of economics terminology (and the definitions attached to said terminology).

    I’m not an economist by trade, either, and some of the more parochial vernacular tends to elude me, too.

    Why not shoot him an email, and apprise him of his blunder? lol.

    I’d love to see that round of intellectual fisticuffs.

    — Ness

  22. Mike H

    Imagine you get $1,000,000 if the coin lands heads and $0 otherwise. To calculate u($1,000), ask yourself: “For what value of p would I be exactly indifferent between the coin toss and getting $1,000 for sure?”

    This doesn’t give u($1000). At best it gives p × u($1,000,000) + (1 − p) × u($0), and I’m not even fully convinced of that.

  23. Harold

    This point that rationality requires consistent behaviour, rather than, say, dollar maximising behaviour, is one I have picked up through these blogs. It seems obvious now, but obviously requires a bit of an intellectual leg-up for some people.

    I am a bit lost about the subjectivity argument. As utility is based on preference, it is subjective. This surely is the whole point of Steve’s correction. There is no objective way to assess which course any individual should take. I assume the problem phrase is “well-being”. I too fail to see why utility is not a subjective measure of well-being. Have I missed a point?

    “Second, there’s a vast body of empirical work that’s largely compatible with this theory’s predictions.” I think the key word here is “largely”. For simple choices with easily quantifiable outcomes, this is probably very large. For complex choices with difficult-to-assess outcomes over many years, it will be quite a bit smaller.

  24. Low Budget Dave

    I think the only thing Nagel is missing is the utility of certainty. If you were selling fifty lottery tickets for a $100 prize, the average gambler would pay slightly more than $2 for a ticket. The average risk-averse person would pay slightly less.
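    (A rough sketch of that arithmetic for the risk-averse side; the square-root utility and the $1,000 starting wealth are arbitrary illustrative assumptions.)

    ```python
    import math

    def eu_if_buy(t, w=1000.0, prize=100.0, p=1/50):
        # Expected utility of paying price t for a 1-in-50 shot at the
        # prize, with square-root utility over final wealth.
        return p * math.sqrt(w - t + prize) + (1 - p) * math.sqrt(w - t)

    def max_ticket_price(w=1000.0):
        # Bisect for the price at which buying and not buying are
        # exactly indifferent.
        lo, hi = 0.0, 100.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if eu_if_buy(mid, w) > math.sqrt(w):
                lo = mid
            else:
                hi = mid
        return lo

    print(round(max_ticket_price(), 4))  # a bit under 2.0: the
                                         # risk-averse discount
    ```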

  25. Ken B

    “I too fail to see why utility is not a subjective measure of well-being. Have I missed a point?”

    Yes. :>

    It is not logically necessary that a utility maximizing choice enhances your well-being, or that you make choices in order to enhance your well-being. That seems likely behaviour for most people. It may actually be true for most or even all people. It isn’t a logical requirement of the theory. So arguments based on well-being will not touch the soundness of the theory.

    It also clearly does not apply to people who eat fried liver. Brrrr.

  26. Alan Wexelblat

    Like Rowan I’m pleased to read that you thought Kahneman’s book was “excellent” and I hope you can take time to expound on it some more. I like his writing in general and am looking forward to reading this book.

    Also, I hope you took the time to write to The New Republic and ask them for a correction. Maybe they’ll ask you to do the next book review.

  27. Harold

    Ken B. I see the point, and I think this raises some important issues. If well-being cannot be defined objectively it is virtually synonymous with utility. Maximising your utility must maximise your subjective well-being. If there is some objective well-being, then utility need not maximise this. For example, if you thought you were better off dead, then killing yourself maximises your subjective well-being, although this would not be most people’s idea of well-being.

    I agree in principle that if utility and well-being are not the same thing, then there is no requirement THAT maximising utility will maximise well-being. I am sure there are reasonable definitions of the two things that are different, although at the moment I do not see it.

    If there is such a difference, then maximising utility does not maximise well-being. Markets maximise utility rather than well-being. A market solution is only best if utility is more valuable than well-being. As I am not sure what this definition of well-being is, I am not sure that we should value utility more than well-being. As you and Steve seem to think that there is a distinction, can you explain why utility should be valued more than well-being, or is it that there is simply no mechanism for maximising well-being, so we must settle for utility instead?

  28. Ken B

    Harold wrote “If there is such a difference, then maximising utility does not maximise well-being. ”

    If we are talking about SUBJECTIVE measures of well-being this is certainly true. Mothers make sacrifices for their children. And if you believe our president everything he does he does for us.

    If we are talking about OBJECTIVE measures of well-being this is certainly true. People make mistakes, act when stoned, believe what our president tells them. All can lead to poor decisions.

  29. Harold

    Ken B. What is meant by well-being? It is a very imprecise term. How do we compare the well-being of a blissfully happy hermit in a cave and a miserable millionaire on a yacht? We could define “objective well-being” as a sum of financial, health, environmental and other factors – there is no subjective element. Every person would score the same “objective well-being” in identical situations. So the parent making sacrifices for their children reduces their own “objective well-being”, whilst maximising their utility. However, on this scale, the happy hermit has little well-being, which I don’t think most people would agree with.

    So we must include some subjective element – the sum of material position and “happiness” – which makes it about the same as utility.

  30. Ken B

    Harold, once you allow a subjective measure of well-being then you must accept it. You cannot sensibly say, I do not believe your SUBJECTIVE measure of well-being, because then you are asking for an OBJECTIVE standard. So if I say that donating a kidney or a dollar reduces my well-being but I will do it for the benefit of my child, you cannot come along and say, ah, but the fact that I made the choice proves it enhanced my well-being. But it did maximize my utility.

    Of course there are subjective feelings of well-being, and of course people like to enhance theirs, so of course it affects their utility functions. But you don’t need this to talk about utility. It might though limit the applicability of the utility idea.

  31. Harold

    Ken B – I finally see it. Thanks

  32. nobody.really

    Of course there are subjective feelings of well-being, and of course people like to enhance theirs, so of course it affects their utility functions. But you don’t need this to talk about utility. It might though limit the applicability of the utility idea.

    Yes, characterizing utility as related to wellbeing might limit the applicability of the utility idea. But can anyone think of any actual circumstance in which it does?

    I want to communicate. I perceive a trade-off between rigor and appealing to common experience.

    I sense experts in technical fields often don’t perceive this trade-off. When confronted with people who don’t understand what was said, an expert may be prone to respond by providing explanations with ever greater rigor and detail. Often these explanations fail to enhance communication.

    Here we have a concept of “utility” which von Neumann and Morgenstern used in a particular conceptual model. So when people ask me, What do you mean by “utility?” how should I respond?

    A. I could direct them to consult the papers published by von Neumann and Morgenstern. No one could fault me for offering an answer that lacked rigor. But people might well fault me for failing to communicate; I expect that few people will actually follow my suggestion. (*I’ve* never read any von Neumann or Morgenstern.)

    B. I could say, “Each person acts as if she has assigned a value to each alternative that corresponds with that alternative’s likelihood of promoting wellbeing.” I sense this answer is more likely to communicate something to people. And yes, the idea it communicates may differ from the idea von Neumann would have communicated if he were present. But, on balance, I prefer the successful communication of an imperfectly-expressed idea to the unsuccessful communication of a perfectly-expressed idea.

    Of course, my assessment of this balance depends upon how much distortion my expression of an idea interjects into the discussion. So I again return to the question, what distortion arises from describing utility in terms of subjective wellbeing? Can anyone think of an example that would distinguish between “utility” and a subjective measure of anticipated wellbeing?

  33. Harold

    For me, Ken B’s example of organ donation did the trick. I can conceive of an individual believing that donating an organ would reduce their own well-being. They must feel that this loss is balanced by something, or they would not choose to donate. This something is an increase in utility. Being without an organ would widely be considered as reducing one’s well-being.

    A parent making a sacrifice is similar. I can anticipate that doing without something will reduce my expected well-being. In the absence of children, I will not choose to do without. With children, I may choose to do without, even though I have already accepted that to do so will reduce my well-being. This is balanced by an increase in utility.

    Well-being is an ill-defined term. I could say that my well-being is reduced by the sacrifice without children, but with children my well-being is not reduced because of the increase in my satisfaction. But having thought about it, I now believe that a distinction between utility and well-being is reasonable.

  34. Colin

    I’m a little late to this party, so I don’t know if anyone will read this, but, nobody.really:

    The main distinguishing feature of a utility function is that, given two options x and y, u(x)>u(y) if and only if x is preferred to y. What is meant by preferred? It means that, given a budget constraint, you’ll choose the option x’ such that x’ is the most preferred affordable option. So, the question really isn’t about utility, but about preferences. It’s easy to fall into the trap of thinking “she chooses x because it has higher utility”, when really the causal arrow is going the other way; it has higher utility because she chooses x.

    So the idea here is that, nothing about this is supposed to say WHY someone prefers one outcome to another. You have a preference ordering over every possible outcome (this is one of the assumptions of utility theory—that you have a preference ordering over every single pair of outcomes) and your reasons for preferring x to y are totally your business. Suppose your preferences were such that, given a choice between two options, you always chose the one that brought you the least well-being. We could take that preference ordering and construct a utility function which represented your preferences, and you would behave as though you are maximizing that utility function, which would also be minimizing your well-being. That’s why economists try to draw a sharp distinction between utility and well-being. Well-being is something more abstract and philosophical while utility is very mechanical. Saying that people prefer outcomes which bring them higher well-being, while plausible, is only one possible way to order preferences.
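    (A minimal sketch of that representation idea for finitely many outcomes; the preference ordering here is invented for illustration.)

    ```python
    # Given any complete ranking of finitely many outcomes (best first),
    # a utility function is just any assignment of numbers that respects
    # the ranking. Nothing in it says WHY the outcomes are ranked this way.
    preference_order = ["beach house", "new car", "dinner out", "nothing"]

    utility = {x: rank for rank, x in enumerate(reversed(preference_order))}
    # {'nothing': 0, 'dinner out': 1, 'new car': 2, 'beach house': 3}

    # u(x) > u(y) exactly when x is preferred to y.
    print(utility["new car"] > utility["dinner out"])  # True
    ```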

  35. Chicago Methods

    Steve,

    Please tell me if this is a better description of Rational Expectations: People tend to make expectations about future events. People tend to plan for these expectations of future events. If people are perfectly rational, then these expectations will come to fruition. Furthermore, given enough time to learn, people will start to trend towards a rational expectation.

    I go to UMN, Twin Cities and my Grad student/teacher (who I think is studying R.E. right now, because she fawns over it when briefly describing it) says that many economists think that certain recessions were caused simply because enough people expected them to occur.

  36. Steve Landsburg

    Chicago Methods:

    People tend to make expectations about future events. People tend to plan for these expectations of future events. If people are perfectly rational, then these expectations will come to fruition.

    This is, I think, a very poor description.

    Better to say something like this: When people expect the probability of a future event to be p, then they behave in a way that causes the probability of that event to be f(p), where f is a function that can be derived from the model at hand. The rational expectations hypothesis is that the actual expectation p is a fixed point of the function f.

    Example: The more likely I think a traffic jam will be, the less likely I am to drive, and therefore the less likely there is to be a traffic jam. A more detailed version of that model might predict, for example, that when I and my neighbors expect the probability of a traffic jam to be p, we behave in a way that makes the actual probability equal to 1-p^2. In that model, the rational expectations equilibrium occurs when the expected probability is about .618. (Because 1-.618^2 = .618)
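    (For the numerically inclined, here’s a quick check of that example in Python. Note that naively iterating p → 1-p^2 oscillates rather than converges, so this sketch finds the fixed point by bisection instead.)

    ```python
    def f(p):
        # Actual probability of a traffic jam when everyone expects
        # the probability to be p.
        return 1 - p**2

    # Rational expectations equilibrium: a fixed point of f. Since
    # f(p) - p falls monotonically from 1 to -1 on [0, 1], bisect.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:
            lo = mid
        else:
            hi = mid

    print(lo)  # 0.6180339887..., i.e. (sqrt(5) - 1) / 2
    ```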

  37. Chicago Methods

    Thanks Steve. It makes the muddy waters a bit more clear. I have to admit it’s not something which seriously holds my interest at this moment. I’m more interested in Frank Ramsey right now. Though I’m disappointed you didn’t use bananas in your example. :)

    Granted I’m only in Intermediate Macro at the moment, but we just covered the first and second welfare theorem a little more deeply. I’m starting to see why, in a perfect world with perfectly competitive markets, all you would need is a price to get everything right. The marginal rate of transformation needs to be equal to the marginal rate of substitution of leisure for consumption. Both of these are equal to the price.

    In any case, kudos and thanks for responding.

  38. Aodhan

    ‘Can anyone think of an example that would distinguish between “utility” and a subjective measure of “anticipated wellbeing”?’

    Yes.

    Suppose I am more concerned for the well-being of another person than for my own well-being; and that improving their well-being can also be achieved at the expense of impairing my own; and that I do actually act in accordance with these comparative valuations, and make the required personal sacrifice to benefit this other person.

    Here, benefitting the other person has greater utility to me than benefitting myself. My actions bear this out.

    However, benefitting the other person also entails lesser anticipated well-being for myself. I know what I am getting into.

    Thus, utility and anticipated well-being move in opposite ordinal directions. Hence, they cannot be the same thing.

    QED

    P.S. It’s your lucky day! This is a two-for-one offer! Here comes a supplementary example.

    Question: Would you prefer to live in an illusory world that satisfied your every need (a.k.a. The Matrix) or in the real world that didn’t? (This is a version of Robert Nozick’s “experience machine”.)

    Not everyone would choose the first option. Such people value contact with reality (ascribe it greater utility) more than they value anticipated well-being.

    QED

  39. Aodhan

    Erratum:

    “and that improving their well-being can ONLY be achieved at the expense of impairing my own”

  40. George Turner

    This has been an interesting thread, and I have a trivial scenario to add.

    I generally prefer to receive $500 over receiving just ten cents, unless the $500 is coming from the welfare office while the ten cents is the very last dime my arch nemesis possesses – payable to me. The larger sum, though more useful to me, is a bitter pill to swallow, whereas the precious dime clawed from the pocket of my enemy is sweet victory.

  1. JimSwift.net » Blog Archive » Tuesday Links
  2. Some Links
