Bosonic Ka-Ching Theory

George Johnson of the New York Times writes that:

In a saner world, where science and the law meshed more precisely, a case like Firstenberg v. Monribot would have been dead on arrival in court.

Arthur Firstenberg, you see, is suing his neighbor, Raphaela Monribot, for bombarding him with photons from her iPhone, her WiFi connection, her dimmer switches and her fluorescent bulbs (all as side effects of her ordinary use of these devices). Mr. Firstenberg believes (or claims to believe) that said photons are damaging his health — a belief with essentially no scientific basis.

Mr. Firstenberg requests $1.43 million in damages, so perhaps we should think of this as an exercise in bosonic “ka-ching” theory. The case has gone on for five years, and might be headed to the New Mexico Supreme Court. Estimated court costs so far exceed a quarter of a million dollars.

It would be easy — in fact, Mr. Johnson of the Times finds it extremely easy — to see this case as nothing but a minor tragedy with comic overtones. But the issues it raises are deeper than that.

First, this case is about as good as it gets if you’re looking for a reductio ad absurdum to libertarian dogma about the absolute right to control one’s own body. If we accepted that dogma, Mr. Firstenberg would have an excellent case. More than one economist has tried to refute the libertarian position by concocting hypothetical lawsuits over “penetration by photons”. Thanks to Mr. Firstenberg, we no longer have to resort to the hypothetical.

What, then, is the right standard for an act to be considered tortious and/or for it to be appropriately discouraged by public policy? If libertarian dogma (and, I suspect, any other brand of deontological dogma) leads places where nobody wants to go, then the alternative is some form of consequentialism. I’m not allowed to kick you in the shins. That’s not because you have some inalienable right to control your shins; it’s because kicks in the shin cause material harm, with, in most cases, relatively little in the way of offsetting benefits.

But once you’ve gone down the consequentialist road, you’re inevitably faced with the question of what counts as material damage, and in particular whether psychic damage is or is not material. Let’s suppose that Mr. Firstenberg believes that Ms. Monribot’s photons are making him sick, and that he suffers genuine distress as a result of this false belief. Is that distress sufficient basis for a lawsuit?

Many people — including me — have a strong gut feeling that the answer should be no. This gut feeling gains strength from the fact that it is consistent with a lot of other gut feelings. If Mrs. Grundy is offended by the very existence of her neighbor’s porn collection, should she have the basis for a lawsuit? Is her distress, in and of itself, a reason for the law to discourage porn collections? My gut says no again. If you feel genuine distress over strip mining in an Alaskan wilderness that you never plan to visit, does your distress constitute a reason for the law to discourage that strip mining? My gut still says no. That’s a good consistency check.

On the other hand, it’s not entirely easy to justify those gut feelings. Given a choice between having a neighbor with a porn collection and getting kicked in the shins, Mrs. Grundy might well prefer the latter. Why, then, should the law care more about what bothers her less? Part of the answer, I think, is that, compared to physical harm, mental distress is easier to conjure up, and we don’t want to encourage conjuring. But I’m not convinced that’s a complete answer.

Besides, there are other cases where many people’s guts (or at least my gut) go the other way. What about installing a hidden camera in a stranger’s bedroom, while taking effective precautions that said camera will never be found? One wants to say that this is somehow more of a legitimate legal concern than bombarding a person with harmless photons. But the counterargument is that Mr. Firstenberg is (or at least might be) suffering genuine distress from those harmless photons, while the unknowing victim of the voyeur suffers none at all. (A partial countercounterargument is that while no particular victim is aware of the hidden cameras, we are all aware that we might be victims, and the more cameras there are, the more we’re all distressed by that.)

An even harder case is the voyeur who installs a hidden camera without taking precautions to make sure it’s not found. In this case, the victim is very likely to end up feeling emotional distress, and many people’s guts (including mine) say that the law should care about that. But how is that distress different from that of Mr. Firstenberg, or of Mrs. Grundy, or of the anti-strip-mining activist?

All of this is to say that these issues are both hard and important, I don’t know how to settle them, and they merit discussion. (I hope some of that discussion is about to take place right here.) Certainly they’ve been discussed to death in the legal literature, and particularly in the law-and-economics literature, but I’m not aware of anyone in that tradition who thinks they’ve been settled.

Bottom line: Mr. Firstenberg’s lawsuit “obviously” has no merit. But articulating why it has no merit leads to a thicket of difficult but crucial questions in legal theory, public policy, economic analysis, and moral philosophy. By calling attention to those questions, the Firstenberg lawsuit might have some social value after all.


55 Responses to “Bosonic Ka-Ching Theory”


  1. Mike H

    Someone I know went to hospital recently. Instead of an anaesthetic that eliminates pain, she was given an anaesthetic that eliminated memory formation.

    Then, during the procedure itself, she was in extreme pain, as you would expect. Afterwards, however, there was no memory of it, nor (apparently) any lasting ill effects.

    The psychological distress from the physical pain was completely temporary.

    Some questions for readers to think about: Was the distress real? Was this a real anaesthetic? Would you want it used on you?

  2. Nawaaz

    My “gut” says some answers can be found in the verifiability of externalities. The cost of getting kicked in the shin is verifiable by a third party, e.g. a doctor, or by the scientific research into pain. Mrs. Grundy’s costs are much more difficult to verify, so she finds it much easier to masquerade as a victim and reap unfair remuneration.

  3. Jonatan

    I think that your examples with the camera and the strip mine have consequences that are not just psychic. Or more precisely, potential consequences.

    Let’s assume that the camera is only viewed by one person, and he never talks about it or meets me etc. If I find out about the camera this still has real, tangible costs to me. Since I don’t know that the feed is only ever seen by one person, I will take different actions in my life. For instance, I may avoid getting into politics.

    I have a similar objection to the Alaskan strip mine example. I don’t know with certainty that I will never visit that spot. It might turn out that I never do, but I am still rational to object to this limitation on my potential future enjoyment. Additionally, I may consider other people who live closer and quite certainly would’ve gone there, and are then deprived of the pleasure, and I may consider more far reaching effects on wildlife.

    I believe that the law is there to serve persons, and not some abstract principle. Thus, I don’t think we should a priori reject the protection of someone from pure psychic harm. However, there are two good objections to such protection. One of them is the one you put forward, that it’s impossible to measure and could be faked. A second is that the suffering of psychic harm may easily change over time. For instance, let’s say that a group of people suffer great psychic harm from same-sex marriages. If same-sex marriage is banned, then they would not suffer this harm, but the same-sex couples will suffer the harm of not getting to marry. If instead same-sex marriage is not banned, the attitude of the group of people who do not like same-sex marriage might change over time (influenced by the existence of the married couples), and eventually reach a state where they do not suffer psychic harm from it at all. Then this is a better outcome.

  4. Ben

    Deontological and consequentialist theories are not different in principle, only in detail. A consequentialist theory is a deontological theory where the duty is to act so as to produce certain consequences. A deontological theory is a consequentialist theory where the only consequence that counts is the performance or otherwise of the duty. The differences between them are not in their mathematical form but in the detail of what is and is not considered to be the good.

    That out of the way, it’s well established that consequentialist theories are subject to the utility monster problem; i.e., consequentialism doesn’t work in theory. The practical problems are even worse: the problem of knowledge, the calculation problem, agency problems…

    But since deontological theories are of the same form they suffer from the same problems. Conclusion: There is no moral theory which works all the time, and therefore there is no moral theory which does not admit exceptions.

    Deontological theories win for me essentially because their conception of the good is congruent with what it is actually possible to know and do, and call neither for Olympian knowledge, nor angelic virtue.

    In this case, the plaintiff is clearly a kook, and we can’t allow society to be held to ransom by every kook with mad ideas, no matter how distressed he is. He has a duty to reasonably accommodate his neighbours, and since he is the only one who is afraid of EMFs he is the one who will have to move house.

    Is this a conclusion which is well founded in theory? Can it be deduced from first principles? No. And that is exactly what is good about it.

  5. Biopolitical

    Mental harm can sometimes be estimated by observing people’s behavior. In the case of Mr. Firstenberg, what steps does he take at home and elsewhere to avoid being harmed by wifi? In the case of Mrs. Grundy, how much is she willing to pay (in terms of money or decreased privacy) to live in a porn-free neighborhood?

    Laws and customs that are more decentralized make it less costly for each person to settle in a place or associate with people that fit his preferences, be they “material” or “mental”. Mr. Firstenberg could settle in a bunker-like condo where electromagnetic “aggression” is forbidden, and Mrs. Grundy could live in a puritanical commune. Some current laws that preclude such decentralization could perhaps be abolished. In this regard, having many small countries instead of a few large ones, more autonomy for municipalities, etc., would alleviate the problem.

    The cases of long-distance harm seem unsolvable to me. Mr. Lettuce is mentally distressed by the fact that bull-fighting is still being practiced on a planet in another galaxy. Maybe he has a point in asking for a universal ban on bull-fighting, maybe not. Estimating mental harm by hedonic pricing or similar methods has limitations. Letting him pay the bull-fighters to stop their activity also has limitations, especially given how expensive intergalactic bank transfers are. So I don’t know how to settle that.

  6. John

    To be fair, it is merely one form of libertarian legal thought, mostly supported by Rothbardians, that believes in any kind of absolute right like that. I think if David Friedman were judging this case, he wouldn’t have a problem throwing it out.

  7. roystgnr

    There’s also the smart-ass variety of libertarianism:

    “I recognize that my photons are intruding upon your space, and I *insist* on paying all the damages which you can prove in court!”

    So there won’t be a million dollar payoff, but perhaps the extra milliwatt of air conditioning that the plaintiff requires might be reimbursed?

    On second thought, I think it’s premature to call this a “smart-ass” answer. It’s the only sane answer. If the defendant had a microwave transmitter emitting a couple of gigawatts instead of just a couple of watts, we’d be comfortable with a civil judgement awarded to her unfortunate neighbors’ next-of-kin, right? Why? Because although they’re the same photons, the dose is vastly different, and libertarian dogma is correct to hold your neighbors responsible for whatever photons they hit you with. There is no sharp dividing line between “that’s too many photons, so it’s wrong” and “that’s not quite too many photons, so it’s okay”; there’s just a monotonically increasing amount of damage caused, which in the limit is so infinitesimal that it’s not worth knocking on your neighbor’s door to bother them about, much less worth taking them to court.

  8. James Kahn

    Maybe the old “veil of ignorance” is a useful concept here? Without knowing which end of a lawsuit we are likely to be on, I would think we would limit such suits to tangible damage independently verifiable by a third party, with limited exceptions for things like the voyeur with the video camera, and damage to reputation–acts that would pretty universally be regarded as damaging, even if it’s not tangible.

  9. James Kahn

    And I meant to add: There’s also the Coase theorem. Especially in the case discussed, why should courts be involved at all? Who pays whom is a matter of how property rights are assigned. In this case, Mr. Firstenberg can pay his neighbor not to emit photons. I suspect something far less than the millions of dollars sought in the lawsuit would achieve the desired end.
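    The Coase-theorem point above can be made concrete with a toy calculation. The dollar figures, and the `coase_bargain` helper itself, are invented purely for illustration (nothing here is drawn from the actual case): whichever party holds the right, bargaining reaches the same outcome, and only the direction of the side payment changes.

```python
# Toy Coasean bargain between two neighbors. All numbers are hypothetical.
# harm  = Firstenberg's (genuine or claimed) annual harm from the photons
# value = Monribot's annual value from using her devices

def efficient_outcome(harm, value):
    """The devices stay on iff their value exceeds the harm they cause."""
    return "devices on" if value > harm else "devices off"

def coase_bargain(harm, value, right_holder):
    """Whoever holds the right, bargaining reaches the efficient outcome;
    only the direction of the side payment changes."""
    outcome = efficient_outcome(harm, value)
    if right_holder == "Monribot":
        # Monribot may emit; Firstenberg must pay her to stop.
        payment = (f"Firstenberg pays Monribot between ${value} and ${harm} to stop"
                   if outcome == "devices off" else "no payment needed")
    else:
        # Firstenberg may enjoin; Monribot must pay him for permission.
        payment = (f"Monribot pays Firstenberg between ${harm} and ${value} to continue"
                   if outcome == "devices on" else "no payment needed")
    return outcome, payment

# If Monribot values her WiFi at $500/yr and Firstenberg's harm is $100/yr,
# the devices stay on under either assignment of rights.
print(coase_bargain(harm=100, value=500, right_holder="Monribot"))
print(coase_bargain(harm=100, value=500, right_holder="Firstenberg"))
```

    The outcome (devices on or off) depends only on which side values the margin more, not on who holds the right; the rights assignment determines only who compensates whom, which is Coase’s point about why the court’s role could in principle be limited to assigning the entitlement.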

  10. Roger

    I do not see the difficulty. Any reasonable fact-finding court will determine that he is not damaged, except that he does not like what someone else is doing. Libertarians do not believe in compensating such people, AFAIK. They believe we have the freedom to do things that might possibly offend others, without causing physical harm.

  11. The Original CC

    Mike H: I assume you’re talking about a hypothetical anesthetic, right? I’ve thought about this before too. And I’ve had the scary thought that maybe this is how general anesthetic really works. (AFAIK, it’s not.)

    In real life, wouldn’t the lack of memory formation mean that you essentially only experience a moment of pain, since each moment of surgery appears to be the first time you’re experiencing pain? IOW, you don’t remember that you’ve been in pain for the last n minutes.

  12. Rowan

    Mike H: I believe you’re referring to a hypothetical anesthetic, yes? But to inject a little reality into your conjecture (at the risk of getting off-track), there are actually two memory systems: declarative/explicit and procedural/implicit. See here for more: http://www.human-memory.net/types_declarative.html (or read Mindsight by Dan Siegel)

    Declarative/explicit is what we usually think of exclusively as our “memory”, but the procedural/implicit is equally important. So in answer to your question, I would unequivocally turn down that anesthetic, because while it may disable explicit memory, the implicit memories of that pain would still be formed, most likely leading to ill effects.

  13. Rowan

    In reference to the porn collection/hidden camera hypotheticals, there is a real-life example: victims of child pornography seeking restitution from those found in possession of their images.

    The questions presented to the US Supreme Court boiled down to: should someone completely unrelated to and unknown to the victim, who was found to possess images of her abuse, be liable for compensating her for the psychological injury she suffers from knowing people are distributing and viewing those images? And if so, is he liable for the full amount, or some portion thereof? What portion?

    Full write-up here on the court’s decision from April 2014: https://verdict.justia.com/2014/08/06/supreme-courts-approach-restitution-victims-child-pornography-possession

  14. Steve Landsburg

    Roger:

    I do not see the difficulty.

    They believe we have the freedom to do things that might possibly offend others, without causing physical harm.

    The difficulty is precisely that this belief fails to square with nearly anyone’s instincts regarding the voyeur cam.

  15. Julien Couvreur

    Although I recognize the problem at the limit with deontological approach (on arguably small problems/conflicts), it has the advantage of being actionable by providing encapsulated/local rules for what is allowed.
    So you could rule in favor of the photon plaintiff, and people would build walls to prevent their photons from hitting their neighbors.

    In comparison, a utilitarian/consequentialist would have to consider possible impact and evaluate subjective harm or “utility” for people near and far (as in the porn collection example) before they act. If you require consideration of people who aren’t born yet (or consulting them), then action becomes almost impossible.
    Similarly, if conflicts arise after an action, then the scope of people that should be involved is unbounded. It is no longer a conflict between specific people, but between all.

    I would suggest that consequentialists may prefer the consequences of a deontological approach (ie. action is possible) ;-)

  16. Roger

    The voyeur cam is an invasion of privacy. That is a widely recognized right, and some invasions of privacy do lead to tangible harm. There are legitimate disagreements about this right, but no one accepts the rights claimed by the man in this lawsuit.

  17. Roger

    Here is a case of someone who bought the copyright to an unflattering picture of himself, in order to suppress it. You could argue that he is not really harmed by the picture.
    http://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/05/11/copyright-as-censorship-katz-v-chevaldina/

  18. Jonathan Kariv

    Thinking about how things went down the last time Steve mused about this thought experiment, I’m hoping we don’t get a wave of “Did Steve just compare a photon entering your body to a hidden camera?!?!?! Peeping Tom apologist!!!”

    More seriously I have no idea how to resolve the issue of what kinds of harm we should think it’s OK to cause each other. But some ideas.

    1. Can we educate or medicate this psychic harm away? It’s not impossible that the whole thing could be dealt with by someone authoritative-looking explaining to him that the photons can’t cause him any physical harm (at a cost of much less than the legal fees).

    2. Of course, if 1 doesn’t work and Mr Firstenberg doesn’t think he needs anti-anxiety meds, that leads us back to square one. The same goes for the possibility that it might be cheaper for him to move.

    3. Is Firstenberg using similar devices? If so then presumably Monribot can just counter-sue for the same amount and the whole thing is an instant wash?

    4. Well, I suppose that if Firstenberg feels genuine mental distress, Monribot does not feel any such distress, and Firstenberg KNOWS that Monribot feels no distress from his usage of bulbs (and can prove it, say with a contract Monribot signed beforehand), then maybe it isn’t a wash?

    5. My first thought is neighborhood contracts which one agrees to when moving in. Of course this will fail spectacularly as soon as someone wants to argue about something the contract fails to specify (like photons).

  19. khodge

    Fun fact: a Health Law Attorneys ad popped up with this post. Don at CafeHayek a while back was furious when an ad popped up recruiting Homeland Security Agents on one of his posts. Of course, most of these are driven by cookies so I likely would never have seen his ad, whereas he, no doubt, had been researching some such topic at the time.

    This puzzle has a more explicit problem: diffuse costs vs focused benefits. Starting with the most obvious, how does Firstenberg even know that Monribot has WiFi, and that she is the only source of those WiFi photons, without having been exposed to someone else’s photons? How is Firstenberg avoiding the rest of the photons that are bombarding him (say, for instance, from the sun)?

    The question is quite old. I remember seeing a comic from a century or so ago where an old lady in a rocking chair was staring at a wall socket imagining the electricity leaking into the air.

  20. Alan Gunn

    Quite a long time ago, before judges decided that going to law school made them smart enough to solve society’s problems, the common law dealt with questions like these by looking at customary behavior. It isn’t customary to go about kicking people for no reason, so doing that had legal consequences. It was customary to do things that bombarded people with photons, at least during the day, so that was OK. If, because you were extremely sensitive, you were hurt by people’s doing ordinary things, that was your tough luck (an example being a case in which someone subject to seizures triggered by hearing loud noises tried and failed to shut down all the church bells in the town). Maybe not a perfect system, but probably as good as anything academics could devise. Like markets in that respect.

  21. Martin-2

    Julien Couvreur: Utilitarians believe in maximizing *expected* utility. Lack of perfect information is a shame but it doesn’t prohibit action.

  22. Henri Hein

    Mike H & The Original CC:

    A memory-impairing anesthetic would not work. Anesthesia is as much for the surgeon as it is for the patient, if not more so. A patient in pain moves around too much, which makes the surgeon’s job difficult.

  23. Henri Hein

    Ben:

    Utilitarianism and consequentialism are not the same. Consequentialism just means, roughly, that results matter. Utilitarianism says that you can measure the utility of a proposal, not just for each individual but even across individuals, in a way that can be summed, so that you can maximize this sum. That is a much stronger claim.

  24. Ben

    @Henri Hein, not all forms of utilitarianism require that utilities can be summed. For example in Rawls’ theory only a partial ordering is required.

    Also, many procedures now are done on conscious patients, using a spinal block to prevent movement and eliminate pain, and sedatives to reduce anxiety. This considerably improves recovery time and reduces side-effects over halide anaesthetics.

  25. Bennett Haselton

    As an aside, if the “estimated court costs so far exceed a quarter of a million,” that’s still mostly the fault of the legal system, not to be blamed on the lunatic plaintiff. In most scientific fields, for example, if 99 out of 100 experts agree that someone’s crackpot theory fails an empirical test, then the crackpot will be dismissed before they consume a quarter million dollars’ worth of resources. That’s because those scientific fields care about getting at what is objectively the best answer to a given question. The legal system cares more about following its rituals, even at the expense of getting at the objectively best answer, which is how this crackpot got this far.

    (And anyway, as Landsburg would be quick to point out in other contexts, the quarter-million is most likely the total of the legal bills accumulated so far, i.e. the cash transfer to lawyers, judges, and other parties, not the true “cost” to society. In terms of scarce resources being consumed, this just takes lawyers away from other tasks, but who knows if those other tasks would be productive or destructive.)

  26. Bob Murphy

    Steve, here is how I summarized your post within the constraints of Twitter.

  27. Harold

    Excellent topic. It would be interesting if “photon assault” were actually being tested. However, unfortunately that is not quite what is happening here because Mr. Firstenberg is suing for actual damage to his health, not because his body has been assaulted by photons without his permission.

    In many regards this is not a failure of the system, but a triumph. The courts heard the evidence and decided in light of the science that there was no damage, so the case was dismissed. This was upheld at appeal. The correct result.

    If a person believes he has been damaged, there is no reason why he should not at his own cost be allowed to test the evidence in a court of law. The failure here is that this was not at his own cost:

    “Court costs, not counting lawyers’ fees, had come to almost $85,000. Because of what the court described as Mr. Firstenberg’s “inability to pay,” the bill went instead to Ms. Monribot’s landlord’s insurance company.”

    There lies the fault. If someone wants to pursue a case beyond reasonable limits they should not necessarily be prevented from doing so. If they want someone else to pick up the tab then there should be a test of reasonableness. There was a test of sorts. “Showing skepticism from the start, District Judge Sarah Singleton denied Mr. Firstenberg’s request for a preliminary injunction, ruling that he was “unlikely to prevail on the issue of causation.” The judge also denied Ms. Monribot’s motion to dismiss the case entirely, calling instead for an evidentiary hearing to consider “in depth proof and argument on the validity of both sides’ experts.””

    I guess a libertarian would hold that under no circumstances should the costs of a failed case fall on anybody other than the plaintiff, so if Firstenberg could not pay for the court time he would have got no further than the front door.

    Since the system is not libertarian, the District Judge is effectively being asked to rule on the strength of the evidence of harm from stray photons. Whilst calling for an evidentiary hearing would be correct in many cases, in this case it was arguably wrong because the evidence is so overwhelming that even a district judge could make a sufficiently informed decision.

    This leaves the issue of psychological damage as a different question, and a very interesting one.

    Say my neighbour takes to strolling up my drive. He does no damage, but I don’t like it. I think the libertarian position is that I am entitled to use force against him to get him to shift, but only sufficient force to get him to do so. I do not need to demonstrate any sort of damage, and presumably I have no claim as there are no damages.

    If he throws stuff over the fence onto my property, I am surely entitled to throw it back, possibly bill him for the work needed to throw it back and any actual damage caused. But is there any claim for bad feeling about the presence of his stuff?

    “What about installing a hidden camera in a stranger’s bedroom…?” Obviously if you put it in their bedroom, then you have violated their property. This in one sense is the same as throwing your stuff over the fence. They can throw your camera back and sue for any actual damage, but that is hardly the point. Do you have a claim for bad feeling about the fact that he gained visual access to your bedroom?

    If you happen to glance at your neighbour’s bathroom window when they emerge from a shower, they have no claim against you. They should have pulled the curtains. But we do have “peeping Tom” rules that seek to prevent deliberately looking into others’ property even if you do not trespass. I think a strict libertarian approach would not disallow such activity. You could peer into your neighbour’s bathroom as much as you want as long as you don’t trespass.

    If they close their curtain, and you then open the curtain with a stick, you will have caused no damage. Do they have any claim if you deliberately peer into the opened curtain?

    Regarding the anaesthetic
    “Would you want it used on you?” No, but I might not mind its having been used on the person of the same name who existed yesterday.

  28. iceman

    Ben #4 – “deontological theories…suffer from the same problems”. I don’t see this and you don’t elaborate. Certainly there’s no utility monster problem? I don’t even see how they’re “of the same form”; one says e.g. I have no right to push someone in front of a trolley even to save 5 other people, the other says this is morally permissible and perhaps even required. Seems about as different as can be.

  29. Al V.

    My gut reaction is that Mr. Firstenberg should have to demonstrate that he took all reasonable measures to protect himself from the radiation. Did he build a wall between the houses? Did he insulate his walls with lead shielding? Did he construct and wear a tinfoil hat? If not, I don’t see how he has grounds for a suit.

  30. Jamie Whyte

    When people are willing to pay to avoid injuries, they are not paying to avoid the distress that thinking about the injuries causes them. I would pay about $1,000 to not think I have a life-threatening cancer, but I would pay at least $1,000,000 not to have the cancer. Fearing cancer and having cancer are very different problems.

    From this observation, the challenge of the Firstenberg case starts to fade. It tells us not only THAT Firstenberg’s case should not go to trial, but WHY it should not. And it tells us why SOME such cases are harder.

  31. Harold

    #30 Firstenberg’s case relied on his belief that the photons actually caused his very real health problems. I don’t think the existence of the health problems was questioned. So a comparable case would be the plaintiff suing someone because they believed adding substances to the water (say) caused their cancer. If the substance was a known carcinogen, they would have a strong case. If the substance was believed to be safe they would have a weaker case. How could one tell if their case had merit without hearing the evidence?

  32. Maurizio

    “The difficulty is precisely that this belief fails to square with nearly anyone’s instincts regarding the voyeur cam.”

    Could someone please spell out what exactly is the problem with the libertarian view?

    How does someone install a voyeur camera without breaking and entering, which is a violation of property rights? Libertarianism prescribes that they should be punished for that, which coincides with your gut feeling.

    If, on the other hand, I install a camera without the need to break and enter (e.g. I watch you from my window), this means there is no invasion involved; and again this coincides with the gut feeling that there is no violation of rights involved. If you didn’t want to be watched, you should have closed the windows.

    So what exactly is the problem here?

  33. Harold

    #32. I think it is the feeling that to break in and install a camera is much worse than to break in and put a book on a shelf, even if the actual amount of physical damage is the same.

    The US does have “peeping Tom” regulations – it is an offence to “peep secretly into any room occupied by another person”. Also “any person who secretly or surreptitiously peeps underneath or through the clothing being worn by another person, through the use of a mirror or other device, for the purpose of viewing the body of, or the undergarments worn by, that other person without their consent shall be guilty of a Class 1 misdemeanor.” It becomes a felony if you take a picture.

    From a libertarian position, I presume there is no crime here. If you don’t want pervs getting off looking at your panties, then don’t wear a dress.

    Nearly everyone’s instincts (or at least a great many people’s) inform them that there is some offence in this sort of behaviour, and therein lies the problem with the libertarian view.

  34. Harold

    Replying to myself: the quoted law is actually much vaguer than those adopted by many US states. In New York and others the perpetrator must use a camera (merely looking is not an offence), but in Missouri just looking is an offence.

  35. Maurizio

    #33

    “I think it is the feeling that to break in and install a camera is much worse than to break in and put a book on a shelf, even if the actual amount of physical damage is the same.”

    Thanks Harold.

    Where to begin… the libertarian view does account for the difference in your gut feeling. Why do you assume it does not?

    Let’s start from scratch. According to the libertarian view, if you commit an act of invasion, you are held responsible for the act itself and all its consequences. Always remember the emblematic case: if you steal a man’s horse, when his life depends on it, you are not responsible just for theft, but for murder also.

    So of course, if you break and enter, and you place something that causes any kind of damage (physical or psychological), you are responsible for that. If you place a book on the shelf, the damage will be smaller, so you’ll just be responsible for the damage to the door (assuming, of course, that by entering you did not scare the owner of the house to death, in which case you’d be responsible for much, much more).

    Now, of course, the issue arises of how to quantify such psychological damage, but this is an entirely different issue, and is a problem for both libertarian and non-libertarian theories. I will not show how the libertarian theory solves this problem, because this is beside the point. The important point is that, once again, I don’t see how libertarian theory goes against your gut feeling, or what is counter-intuitive about it.

  36. Keshav Srinivasan

    Maurizio, suppose you steal a dollar from someone, and they become so depressed by this petty theft that they commit suicide. Are you saying that you’re responsible for murder in such a case, according to libertarianism?

  37. Maurizio

    #36

    Hi Keshav, As I wrote in the last paragraph, the issue of how to quantify psychological damages is a problem shared by both libertarian and non-libertarian theories, so we should leave it aside, because here we are discussing problems _exclusive_ to libertarianism.

    Anyway, I would reply two things:

    It seems you have been able to produce an implausible conclusion out of libertarian theory. But the only reason you have been able to do that is that you made an implausible assumption in the first place. (If you assume that my grandpa is a truck, you can prove that my grandpa has wheels.) The implausible assumption is that the loss of a coin can cause a depression so severe that you decide to commit suicide. Before I admit you are right, I will have to see an argument with plausible assumptions that leads to counterintuitive and implausible results.

    But let us assume that was Scrooge’s first penny. That makes it more plausible. Even so, there is a bigger problem. In order for me to be held responsible for your murder, there must be a causal chain of events between my invasive act and your death. But this causal chain of events cannot contain acts of persons with free will. The presence of such a being “breaks the chain”.

    So for example, take this scenario:

    1) I punch you in the face. This did not _need_ to result in your death. But it just happened to. Am I responsible for murder? Yes.

    Why? Because there is a causal chain between my invasive act and your death. Uninterrupted by acts of free will.

    2) Now take this scenario: I steal your money. This causes you to go broke. You decide to rob a bank, and while robbing the bank, you kill Mary. Am I responsible for the murder of Mary? No. You are. But didn’t my stealing CAUSE the death of Mary? Yes, it did. There is a chain of causes from my act of stealing to Mary’s death. But the causal chain was interrupted by an act of free will (yours), which breaks the chain of responsibility. You become responsible, because you are the _closest_ person with free will in the chain.

    Now your scenario is analogous to (2). That is why, in your case, the thief is not responsible for murder. The _closest_ person with free will in the causal chain is responsible. It is true the thief caused the depression, but the victim decided of his own free will to kill himself.

    (Please note that whether free will really exists is irrelevant; the point is that our innate moral sense does have this concept hardwired)

  38. Capt. J Parker

    Intentional infliction of emotional distress is tortious, but the actions of the person inflicting distress must be extreme and outrageous. My gut tells me this is right; otherwise we are always and everywhere at risk of being sued for perfectly reasonable behavior like displaying the flag on Memorial Day, putting up a creche at Christmas time, or making a living as a butcher selling red meat. I have no doubt that any of these actions might cause select people real emotional distress, but it seems to me the law should put the ordered and productive functioning of society ahead of, or at least on a par with, guaranteeing each individual an inviolate sphere of physical and emotional tranquility. This is not to say individual rights should not have strong protections, but I’d argue that absolutism is really what sends us places no one wants to go. Firstenberg loses on intentional infliction of emotional distress because his neighbor is not acting in an extreme and outrageous manner.
    Placing cameras in the bathroom is extreme and outrageous. It is also an invasion of property rights. Nullifying property rights harms individuals as well as harming the productive functioning of society so even if the bathroom owner never finds out, the act of placing the camera is still tortious. And my gut says this is right.
    If the photons really are harming Firstenberg then he is due compensation both for the actual physical harm and for the emotional distress but, the burden of proof is on him as the accuser.

  39. Harold

    “is a problem for both libertarian and non-libertarian theories.” It seems much less of a problem for consequentialist theories, because the scale of the damage is part of the system – it is the consequences that are important. These theories deal with the problem within their frameworks. A rights-based system seems unable to do so. It does not make sense to say that a property owner has the right to prevent invasion by photons, and then say that he cannot actually take any action because there is no damage. Either we base the system on absolute rights, or we base it on consequences; you seem to want the latter, but justified by the former. It does not work.

    Firstenberg offered $10,000 to the defendant to stop using the electrical items. He believed they were causing him damage, and he was prepared to pay to stop it. This allows us to know pretty much with certainty what his willingness to pay to stop the invasion of his home by photons was. His case failed because the distress was not considered to be harm resulting from the photons. A rights-based approach would say that he is entitled to stop this invasion of photons that he considers to be damaging, and is actually prepared to pay to stop. If we say he has no such right because we don’t agree that they are damaging, then where have his property rights gone? They seem to be in the same place as they would be under a consequentialist theory.

    Keshav’s scenario is a logical extension of your position. Even if it is unlikely, it must still be dealt with if you want a system based on rights. I see you have tried to do so, but I think this fails for a similar reason. You say that the responsibility falls on the closest entity with free will in the chain, and there is no attempt to attach degrees of responsibility to contributors to that chain. By using closeness as the criterion, you end up ignoring the degree of contribution.

    “Always remember the emblematic case: if you steal a man’s horse, when his life depends on it, you are not responsible just for theft, but for murder also.” From your argument, only if there is no other choice made by a free will entity after you steal his horse. If you steal his horse when he is being chased by a murderous attacker, you bear no responsibility for his death – that would be the murderer. Similarly, if he then runs after you and has a heart attack – that would be his own fault, since he made a free will choice to chase you. Perhaps you steal his horse when he is being chased by wolves and he is immediately ripped to pieces – then maybe you are responsible. But say he failed to climb a tree, so gets killed. Is he responsible as the closest person with free will to the incident? Say he has to walk home, stops for a drink, then gets killed by wolves. In most cases he will be the closest person with free will, even if making the right choice to save his life is extremely unlikely. If your criterion is closeness to the death, not what we might consider causal contribution, we end up with counter-intuitive outcomes.

    So you probably end up with a mixture – as in the property rights and photon invasion. You are responsible as long as the closest free will choice in the causal chain is below some arbitrary threshold, which places you back with a system based not on absolutes, but on arbitrary judgements.

    If you say OK, but whatever system you propose also requires arbitrary judgements, I say of course. If you say your system is superior because it avoids arbitrary consequentialist judgements, I say no it doesn’t.

  40. Maurizio

    #39

    Harold, thank you very much for some very good points that you raise. In some cases you led me to seriously consider that my views could be wrong. But in a very strange way: instead of showing that the libertarian position produces counterintuitive implications, you made me question the correctness of my intuition itself. :)

    Let me examine your objections one by one. The first one is probably the weakest, but there are stronger ones later:

    “If you steal his horse when he is being chased by a murderous attacker, you bear no responsibility for his death – that would be the murderer. ”

    I don’t see how this follows. Sometimes A and B cause C. So, in your case, both you and the attacker jointly caused his death. So you both are responsible. I don’t see a problem in this case. Let’s move on.

    “Similarly, if he then runs after you and has a heart attack – that would be his own fault, since he made a free will choice to chase you. ”

    Good point. I mean: my gut says the thief is _not_ responsible for the death in this case, exactly because there was a conscious act to chase him. So I can’t really say that you have produced a counterexample to the libertarian view. However, I am a bit uneasy about saying the thief is _only_ responsible for the theft; so you might really be on to something here. You make me think that my gut feeling could be “wrong” or too simplistic.

    “Perhaps you steal his horse when he was being chased by wolves and he is immediately ripped to pieces – then maybe you are responsible.”

    Yes, of course. Let’s see where you are going with this:

    “But say he failed to climb a tree, so gets killed. Is he responsible as the closest person with free will to the incident?”

    Very good point here! But his act (climbing the tree) was an instinctive one, not an act of free will. While you are chased by wolves, you act automatically, instinctively. And a mistake made while climbing a tree is not a conscious act either. That’s why the gut says that the thief is responsible for the death. Let us do a countercheck. Let us change the scenario slightly, and assume there was some conscious choice at some point. Suppose he had the possibility to take shelter in a cabin in the woods, and time to think about it, but he thought he had left the wolves behind, so he decided to keep walking. In this case, my gut says you are not responsible for his death. And the libertarian theory says the same.

    But wait, I see that you did the same change:

    “Say he has to walk home, stops for a drink, then gets killed by wolves. In most cases he will be the closest person with free will, ”

    Exactly. In this case he is the closest person with free will. The error was earlier, when you assumed that he was the closest person with free will when he failed to climb a tree.

    However, I congratulate you on the clever example of being chased by wolves.

    “If your criterion is closeness to the death not what we might consider causal contribution, we end up with counter-intuitive outcomes.”

    I have two problems with this: first, you did not produce counterintuitive results out of libertarian theory, as I hope to have shown. Second, I do consider causal contribution: indeed I explicitly talked about a chain of causes.

    Now to some less interesting issues:

    “You say that the responsibility falls on the closest entity with free will in the chain, and there is no attempt to attach degrees of responsibility to contributors to that chain”

    We saw earlier that we can have joint causation, so joint responsibility. You and I can work together to kill someone, so we can both be responsible. As for contributors, suppose I tell you “kill Mark, or else I will punch you in the face”. You then kill Mark. Who is responsible? My gut says I am, not you, because you were not acting of your own free will — you were under threat of invasive acts. Again, this coincides with the libertarian view. Second example: suppose I tell you “kill Mark, or else I will not give you my money”. You kill Mark. Who is responsible? My gut says you are responsible, whereas I have no responsibility at all. I predict that your gut will probably not agree with this. But what can I do? It just does not seem right to me to punish the instigator of a crime. It feels to me just as weird as condemning someone for libel, or for failing to rescue.

    “If you say your system is superior because it avoids arbitrary consequentialist judgements, I say no it doesn’t.”

    Of course it doesn’t. As I said earlier, you are responsible for the consequences of your invasive acts. I am not aware of anyone who denies that.

    “It seems much less a problem for consequentialist theories, because the scale of the damage is part of the system – it is the consequences that are important.”

    Again I don’t follow. A libertarian court would assign punishment by evaluating consequences: how else?

    Thank you very much for the stimulating conversation, Harold.

  41. Harold

    The problem I have is that you said “The _closest_ person with free will in the causal chain is responsible.” This seems to allow blame for only one person – the closest. My problem is twofold. 1) I do not see why all the responsibility should be on one person. Several could contribute. 2) The *closest* person may not be the one that would generally be considered to have the major contribution.

    In fact you agree when you say “Sometimes A and B cause C. So, in your case, both you and the attacker jointly caused his death. So you both are responsible. I don’t see a problem in this case. Let’s move on.” But this cannot be true if the closest person is responsible, as you say here: “There is a chain of causes from my act of stealing to Mary’s death. But the causal chain was interrupted by an act of free will (yours), which breaks the chain of responsibility. You become responsible, because you are the _closest_ person with free will in the chain.” Why is the causal chain broken here, but not when I steal your horse and you are then murdered?

    In the wolf example, the meaning I was intending was that he failed to *choose* to climb a tree. Maybe he only had a split second, but enough for a free will choice. Let’s make it a bit more extreme. A man has deliberately let loose his wolves to kill another man. If they succeed, we can agree, I think, that the first man is responsible, as there is no free will choice between cause and effect – between letting the dogs loose and the killing (assuming we will not blame the dogs). But what if the pursued man comes across a cabin, but thinks it may be locked, so estimates he has a better chance by carrying on. He gets eaten, but the cabin was not in fact locked. His free will choice occurs between the letting loose of the dogs and the killing, and could have prevented the killing. I do not think he is responsible, even though his free will choice is closer to the actual killing, and had he made the other choice he would have been saved. In this case, the person with the closest free will choice bears almost no responsibility.

  42. Harold

    Another thought on the “who let the dogs out” case – my dog is almost certainly able to exercise free will to some extent. If I put a tasty morsel down for her, she will quickly eat it. If I tell her to sit and wait, she usually will, but clearly is very eager to get to the food. Sometimes she doesn’t wait, or waits for a bit then eats before being told. The simplest explanation is that she is making a choice between eating and waiting in much the same way I would, which requires free will. If dogs have free will, then can dogs break the causal chain? If they don’t have free will, why not?

  43. nobody.really

    Alas, work keeps me from joining this thoughtful discussion. So I’m reduced to playing the role of killjoy.

    As should be evident by now, it’s really hard to articulate a principle governing liability; even appealing to causation, or “proximate causation,” is inconclusive. These problems are compounded when dealing with more ephemeral kinds of harm.

    Yet people have been wrestling with these issues since time immemorial. In the English (and US) legal system, the accumulated pattern of court decisions is known as common law. (“Pattern” is my euphemistic way of acknowledging that judges are free to render inconsistent decisions, so the effort to find the common scheme reflected in decisions is to some extent an exercise in wishful thinking. Rather, common law is kinda like a non-statistics-based weather forecast: It provides a best guess about how judges might rule in the future based on how judges have ruled in the past.)

    Starting in 1965 the American Law Institute began publishing compendiums of common law “rules” called the Restatements of Law. They are voluminous, contested, and respected. Here’s a taste:

    652B Intrusion Upon Seclusion

    One who intentionally intrudes, physically or otherwise, upon the solitude or seclusion of another or his private affairs or concerns, is subject to liability to the other for invasion of his privacy, if the intrusion would be highly offensive to a reasonable person.

    Comments:

    a. The form of invasion of privacy covered by this Section does not depend upon any publicity given to the person whose interest is invaded or to his affairs. It consists solely of an intentional interference with his interest in solitude or seclusion, either as to his person or as to his private affairs or concerns, of a kind that would be highly offensive to a reasonable man.

    b. The invasion may be by physical intrusion into a place in which the plaintiff has secluded himself, as when the defendant forces his way into the plaintiff’s room in a hotel or insists over the plaintiff’s objection in entering his home. It may also be by the use of the defendant’s senses, with or without mechanical aids, to oversee or overhear the plaintiff’s private affairs, as by looking into his upstairs windows with binoculars or tapping his telephone wires. It may be by some other form of investigation or examination into his private concerns, as by opening his private and personal mail, searching his safe or his wallet, examining his private bank account, or compelling him by a forged court order to permit an inspection of his personal documents. The intrusion itself makes the defendant subject to liability, even though there is no publication or other use of any kind of the photograph or information outlined.

    (Restatement of the Law, Second, Torts (1977)).

    The most recently published treatise in the series is called Restatement of Torts, Third, Liability for Physical and Emotional Harm (2011). So if you want to see the results of a quixotic effort to divine an after-the-fact pattern arising from court cases addressing this topic, there’s your answer.

  44. Maurizio

    #41 and 42

    Harold, let me first reply to this great point that you raise:

    “A man has deliberately let loose his wolves to kill another man. … what if the man comes across a cabin, but thinks it may be locked, so estimates he has a better chance by carrying on. He gets eaten, but the cabin was not in fact locked. His free will choice occurs between letting the dogs off, and could have prevented the killing.”

    What a great objection. My hat off to you. For a while I thought you had found a counterexample to the rule (the one that talks about free will etc).

    But that rule is only meant to decide whether you are responsible for the _consequences_ of an invasive act, not for the act itself. You are always responsible for that act, whether or not there was a conscious decision from someone else which could have changed the outcome.

    So, if you kill someone (by means of your wolves, your gun, or anything else), you are always responsible for that death, regardless. The rule only helps us decide whether you are also responsible for the _consequences_ of that death. (Say someone gets depressed and kills themselves).

    (In the above paragraph I assumed you did not kill him in self-defense, or similar cases, in order not to make the sentence unreadable)

    (also notice that the wolves are only the instrument you chose for the killing; nothing would change if you had personally chased the man with a chainsaw or a gun).

    Let me sum this up for clarity:

    If you commit an invasive act, you are responsible for that act, and for its consequences. But since these consequences are infinite in number, in listing the consequences for which you are responsible, we must stop when, in the path that connects your invasive act with its consequences, we encounter the free and conscious choice of someone else. You cannot be held responsible for what follows that.

    That is my conjecture, and we are trying to decide whether it is correct.

    So for example:

    1) If you steal something, you are not only responsible for the theft, but also for its consequences. If among the consequences there is a death, then you are also responsible for the death, provided that in the path of causes between the theft and the death there is no free act of someone else.

    2) if you kill someone with your wolves, you are not only responsible for the death, but also for the consequences of the death. And so on.

    —-

    Now let me address the other objections:

    “The problem I have is that you said “The _closest_ person with free will in the causal chain is responsible.” ”

    Yes, but that was before you led me to refine the rule to account for joint causation. Now, thanks to you, we have a better rule: “the closest person (or persons) who makes a conscious act is (are) responsible”. In the case of joint causation, when you scan the graph of causes, starting from the death and going backwards, you may encounter two or more persons who are equally close. In this case, both are responsible.

    Also remember this rule is only useful to decide if you are responsible for a _consequence_ of your invasive act, not for the act itself. You start from the consequence, proceed backwards, and see if you encounter a free choice of someone else.
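    The backward scan described above can be made concrete as a toy program. This is only an illustrative formalization of the rule as stated in this thread, not a claim about actual libertarian doctrine or law; all names (`Event`, `responsible_for`, the scenario data) are invented for the sketch.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Event:
    """One link in the causal chain from an invasive act to a consequence."""
    description: str
    free_choice_by: Optional[str] = None  # who, if anyone, made a free conscious choice here

def responsible_for(actor: str, chain: List[Event]) -> bool:
    """Scan the chain backwards from the consequence (last event).

    Per the rule in the thread: the actor is responsible for the final
    consequence unless the path contains a free, conscious choice made
    by someone other than the actor. Such a choice breaks the chain.
    """
    for event in reversed(chain):
        if event.free_choice_by is not None and event.free_choice_by != actor:
            return False  # someone else's free choice breaks the chain of responsibility
    return True

# Scenario 2 from comment #37: theft -> victim freely decides to rob a bank -> Mary dies.
bank_chain = [
    Event("thief steals the victim's money", free_choice_by="thief"),
    Event("victim decides to rob a bank", free_choice_by="victim"),
    Event("Mary is killed during the robbery"),
]

# The wolf scenario: horse theft -> purely instinctive flight -> death (no free choice).
wolf_chain = [
    Event("thief steals the horse", free_choice_by="thief"),
    Event("victim flees the wolves instinctively"),
    Event("victim is killed by the wolves"),
]

print(responsible_for("thief", bank_chain))  # False: the victim's choice breaks the chain
print(responsible_for("thief", wolf_chain))  # True: no intervening free choice by another
```

    Joint causation fits the same sketch: two chains (one per actor) can both scan clean, making both actors responsible.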

    —-

    I said: “You decide to rob a bank, and while robbing the bank, you kill Mary. Am I responsible for the murder of Mary? No. You are. But didn’t my stealing CAUSE the death of Mary? Yes, it did. There is a chain of causes from my act of stealing to Mary’s death. But the causal chain was interrupted by an act of free will (yours).”

    and you ask: “Why is the causal chain broken here, but not when I steal your horse and you are then murdered?”

    Because, in the case of the horse, there is nothing that can break the chain (as in the scenario where you are being chased by wolves). At no time did I make a conscious decision that could have saved me. It’s not that I encountered a cabin and decided to ignore it. I was stuck in the desert, with nowhere to go, doomed to die.

    “In the wolf example, the meaning I was intending was that he failed to *choose* to climb a tree.”

    So it did not occur to him that he could try to climb a tree? But failure to think of something is not a conscious act. There is still no conscious choice on his part. He was being chased, so he was acting instinctively.

    “Maybe he only had a split second, but enough for a free will choice. ”

    Still there was no conscious choice. The possibility just did not occur to him.

    —–

    “my dog is almost certainly able to exercise free will to some extent. … she is making a choice between eating and waiting in much the same way I would,”

    Brilliant observation. I agree.

    ” If dogs have free will, then can dogs break the causal chain? ”

    Yes. If we assume wolves have free will, then by letting the wolves loose, you are not responsible for the death. The wolves are.

    (Notice this does not provide a counterexample to my rule/conjecture)

    Thanks again for the great points you raised.

  45. Harold

    “So, if you kill someone (by means of your wolves, your gun, or anything else), you are always responsible for that death, regardless. The rule only helps us decide whether you are also responsible for the _consequences_ of that death. (Say someone gets depressed and kills themselves).”
    But the act of letting the dogs off was not itself killing a person. The death came later.

    Let’s go with the gun. My act is to pull the trigger. If the gun was not loaded, I could perform the same act, and no one would be harmed. But the gun is loaded, and it is pointing at you. My action causes consequences to inevitably follow – the hammer falls on the firing pin, the chemicals explode, the bullet rushes towards you and it then kills you. There is no free will between pulling the trigger and the bullet killing you, so we can reasonably agree that I have been the cause of your death. However, your death is a consequence of my action, which of itself was not an invasive act. The action was pulling a trigger. So although I am responsible, my action did not kill you.

    I believe that each act of free will between the action and the consequence reduces, but does not remove, your responsibility. So if the person stops off for a drink, then gets killed by wolves hours later, your responsibility is reduced significantly. If he does not try the unlocked door, your responsibility is reduced hardly at all. If the dogs have free will, it reduces your responsibility hardly at all IF you knew they were killer wolves, but perhaps quite a lot if a person gets killed by your usually friendly Labrador.

    If we try to assign responsibility to any individual on the basis of proximity to the final cause, I think we will always end up with problems.

  46. nobody.really

    Ok, I said I wasn’t gonna get drawn in, but I suspect any decision rule will need to consider both foreseeability/intent and sphere of autonomy. To wit:

    If I do something that invades your sphere of autonomy without your consent, I may be held guilty/liable for nigh unto any harm regardless of how unforeseeable/unintentional it was (although the magnitude of the liability/punishment may vary depending on whether the harm was foreseeable/intentional).

    - In torts: If I knowingly/intentionally kick you in the shin so I can get in line first, and I accidentally trigger a tumor to metastasize and kill you, I may be liable for wrongful death – notwithstanding the fact that I did not intend and could not have anticipated this outcome.

    - In crime: Under the Felony Murder Rule, if I knowingly/intentionally engage in a felony and it accidentally results in a homicide, I may be held liable for murder even if the homicide was unforeseeable.

    - In crime: If you’re walking at the outside range of my pistol such that I could not really expect to hit you, but shoot with the purpose of hitting you or in reckless disregard of the chance of hitting you, and by chance I do hit you, I’m guilty.

    But if I don’t invade your sphere of autonomy without your consent, then my guilt/liability will depend on how foreseeable it was that my actions would result in the eventual harm.

    - In torts: If I kick you accidentally as part of a soccer game (i.e., by agreeing to play, you consented to the possibility of getting kicked in the shin), and this causes a tumor to metastasize and kill you, I would not be liable because I had not invaded your sphere of autonomy without your consent, and the outcome was unforeseeable.

    - In crime: If I cause a homicide generally, my guilt would depend upon how foreseeable it was that my wrongful conduct would have that outcome.

    - EXCEPTION In contract: If I breach a contractual duty to you, I may be liable for the resulting harm to you. But my liability will be limited to the harm that was unavoidable – that is, if you have the opportunity to mitigate damages and you don’t, I won’t be liable for the harm resulting from the failure to mitigate.

    So if you unleash an animal on me and I have the opportunity to run but I don’t, you’re guilty/liable. Your conduct invaded my sphere of autonomy without my consent; I have no duty to mitigate the consequences of your conduct. But if you invite me to the movies and while we’re in line we’re attacked by random animals, you would not generally be guilty/liable for the consequences to me. You had not invaded my sphere of autonomy without my consent, and the results were unforeseeable.
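    The two-factor rule sketched in this comment (non-consensual invasion of autonomy trumps foreseeability; otherwise foreseeability governs) can be written out as a toy decision function. This encodes only the comment’s own logic, not actual tort or criminal law, and every name in it is invented for illustration.

```python
def liable(invaded_autonomy: bool, consented: bool, foreseeable: bool) -> bool:
    """Toy version of the liability rule proposed in this comment.

    A non-consensual invasion of the victim's sphere of autonomy makes
    the actor liable for nearly any resulting harm; absent such an
    invasion, liability turns on whether the harm was foreseeable.
    """
    if invaded_autonomy and not consented:
        return True   # liable regardless of foreseeability
    return foreseeable  # otherwise, only foreseeable harm creates liability

# Kick in line to get ahead: non-consensual invasion, unforeseeable tumor -> liable.
print(liable(invaded_autonomy=True, consented=False, foreseeable=False))  # True

# Soccer kick: consented-to contact, unforeseeable outcome -> not liable.
print(liable(invaded_autonomy=True, consented=True, foreseeable=False))   # False
```

    The contract exception in the comment (the duty to mitigate) would need a further clause, which is part of why, as later comments note, no compact rule survives all the examples.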

  47. Bob Murphy

    Steve, I don’t know if you always see these comments, but just in case: If you’re looking for a new post topic, I’d love to see you compare John Nash’s contributions to game theory and broader mathematics. I saw someone talking about the latter and had no clue what he meant.

  48. Pete

    Battery is a crime because it is offensive. Sticking my finger in your chest is battery if you take it that way. Punching you would be battery and assault. Battery seems to be culturally specific; I could conceive of people being distressed by handshakes or hugs.

    These kinds of things make me happy to live in a common law society where reasonable (I hope) judges get to look at the specifics and we don’t need to spell out every “what if” in statute. The more you and the commenters give examples, the more confident I am that coming up with a set of standards that will always yield the results that our reasonable brains like is impossible.

  49. Harold

    47: given the tragic recent demise of John Nash, a post on this would be welcome. Does anyone know the number of the taxicab he was riding?
    48: I agree entirely.
    46: I agree basically, but when you introduce things like foreseeability you are straying from an absolute basis.

  50. nobody.really

    48: I agree entirely.
    46: I agree basically, but when you introduce things like foreseeability you are straying from an absolute basis.

    Well – yeah, @46 strays from an absolute standard. As does @48, which concludes, “The more you and the commenters give examples, the more confident I am that coming up with a set of standards that will always yield the results that our reasonable brains like is impossible.”

    And that’s kinda the point: There ain’t no absolute standard for evaluating these things. Even causation, or the “last opportunity to exercise free will to produce a less-harmful outcome” standard (a/k/a “You touched it last! Or could have!”), produces indeterminate results.

    For years I’ve been conducting research on this subject. That is, I’ve taken my family on an 11-hr van ride to and from my parents’ house for Xmas holidays. Believe me, if there were a definitive way to delineate spheres of autonomy and allocate fault, my children would have found it by now. There isn’t.

    Libertarianism focuses on the boundaries between the individual and society. For folks with a libertarian bent, it’s gratifying to find those boundaries clearly delineated (“Good fences make good neighbors”) and disquieting to see when they aren’t.

    They aren’t. Be disquieted. My kids are.

  51. Maurizio

    #45

    >> “So, if you kill someone (by means of your wolves, your gun, or
    >> anything else), you are always responsible for that death, regardless.
    >> The rule only helps us decide whether you are also responsible for the
    >> _consequences_ of that death. (Say someone gets depressed and kills
    >> themselves).”
    > But the act of letting the dogs off was not to kill a person. The death came later.

    I don’t see how this matters. Your property is an extension of you. What your property kills, you kill. How does what you say change this?

    > Let’s go with the gun. My act is to pull the trigger. If the gun was not
    > loaded, I could perform the same act, and no one would be harmed.

    ok…

    > But
    > the gun is loaded, and it is pointing at you. My action causes
    > consequences to inevitably follow – the hammer falls on the firing
    > pin, the chemicals explode, the bullet rushes towards you and it then
    > kills you. There is no free will between pulling the trigger and the
    > bullet killing you, so we can reasonably agree that I have been the
    > cause of your death.

    ok…

    > However, your death is a consequence of my action,
    > which of itself was not an invasive act.

    I think you are saying:

    1) your only act was to pull the trigger.

    2) pulling the trigger is not an invasive act.

    3) therefore according to my rule you should be considered innocent.

    My answer is simply that premise (1) is false. By pulling the trigger you performed more than one act: 1) you pulled the trigger, 2) you stuck a bullet in me by means of a gun. (This is the invasive act.)

    An act automatically brings its consequences with it (stopping when we meet acts of free will). So you can’t perform just one act. When you do X, you are automatically doing Y.

    If you will, we can add a logical rule:

    if X does Y, then, for each Z that is a consequence of Y, X is also doing Z (assuming that the chain of causation between Y and Z is unbroken by acts of free will).

    > I believe that each act of free will between the action and the
    > consequence reduces, but does not remove your responsibility. So if the
    > person stops off for a drink, then gets killed by wolves hours later,
    > your responsibility is reduced significantly.

    But the damage you did to my property (me) by means of your property (your wolves) is the same whether I was killed instantly or hours later. So why should you be less responsible? It seems counterintuitive.

    > If he does not try the
    > unlocked door, your responsibility is reduced hardly at all.

    I don’t think I denied this.

    > If the dogs
    > have free will, it reduces your responsibility hardly at all IF you knew
    > they were killer wolves,

    But if the wolves are responsible, how can you be responsible?

    If your wolves break my vase, either they must repay the vase, or you must. You cannot both have to repay my vase, because only one vase was broken, not two. Overall you can’t be responsible for two vases.

    > but perhaps quite a lot if a person gets killed
    > by your usually friendly Labrador.
    >
    > If we try to assign responsibility to any individual on the basis of
    > proximity to the final cause, I think we will always end up with
    > problems.

    I am open to the possibility of a counterexample to my “rule” (I don’t know what else to call it), but so far I haven’t seen one.

    Thanks again for the talk.

  52. nobody.really

    > I think you are saying:

    > 1) your only act was to pull the trigger.

    > 2) pulling the trigger is not an invasive act.

    > 3) therefore according to my rule you should be considered innocent.

    > My answer is simply that premise (1) is false. By pulling the trigger you performed more than one act: 1) you pulled the trigger, 2) you stuck a bullet in me by means of a gun. (This is the invasive act.)

    > An act automatically brings its consequences with it (stopping when we meet acts of free will). So you can’t perform just one act. When you do X, you are automatically doing Y.

    > If you will, we can add a logical rule: if X does Y, then, for each Z that is a consequence of Y, X is also doing Z (assuming that the chain of causation between Y and Z is unbroken by acts of free will).

    Let me again suggest the foreseeability variable rather than the free will variable.

    Scenario 1: We’re in a play in which my character pulls a gun from his holster and shoots your character. Unbeknownst to either of us, someone swapped my prop gun with a real, loaded gun, and I shoot you.

    Did I pull the trigger? Yes. Did I pull the trigger with free will? I guess. Did you get shot as a result? Yes. But were the consequences of my action foreseeable? No. So am I liable/guilty? Probably not.

    Scenario 2: I’m a professional assassin hired to kill you. I draw my gun and pull the trigger, but unbeknownst to me, the police had detected my plan and had filled my gun with blanks. I’m arrested and charged with attempted murder, but I defend myself on the grounds that I didn’t, and under the circumstances couldn’t, have hurt you because I had no bullets in my gun.

    Did I pull the trigger? Yes. Did I pull the trigger with free will? I guess. Did you get shot as a result? No. Were the consequences of my action foreseeable? Not to me. So am I liable/guilty? Yup.

    In each case, my culpability would hang not on issues of free will, but on issues of foreseeability.

    Scenario 3: It’s a crime to be drunk and disorderly in public. But to convict, the prosecutor must show that the defendant engaged in each of the elements of the crime with the appropriate mental state (e.g., knowingly). I get drunk at home and start abusing my family. They call the cops. The cops arrest me for assault, and then for resisting arrest. And as they drag me kicking and screaming to the police car, they add the charge of drunk & disorderly. Can they make the third charge stick?

    Tough call.

    A. I was drunk, so there are different theories about whether anything I do in that state can be considered an exercise of free will done knowingly. But unless there’s a claim that someone spiked my drinks, the prosecutor can ask the jury to infer that I knowingly consumed alcohol, with knowledge of the likely consequences of doing so.

    B. Even if you conclude that I knowingly got drunk, and knowingly incurred the risk of behaving in disorderly fashion, does it follow that I was knowingly in public? After all, I didn’t go for a drunken stroll down the street; rather, the cops dragged me out of my house quite against my will.

    On the other hand, was it foreseeable that when I drink, I’d get drunk (just as I did the last dozen times I drank)? Was it foreseeable that when I get drunk, I’d behave in a disorderly way (just as I did the last dozen times I drank)? Was it foreseeable that my family would call the cops (as they did the last dozen times I drank)? Was it foreseeable that the cops would arrest me and drag me to a cop car at the curb (as they did the last dozen times I drank)?

    In this instance, is it useful to think of the cops as intervening agents asserting their own free will? Or are the cops really more akin to a foreseeable automatic response that I triggered through my own actions, little different than a bullet coming out of a gun when I pull the trigger?

    Whatever conclusion you draw, I suggest that foreseeability is a useful, and perhaps necessary, variable for thinking about these issues.

  53. Harold

    “Whatever conclusion you draw, I suggest that foreseeability is a useful, and perhaps necessary, variable for thinking about these issues.” I conclude you are a drunken oaf. Nobody really behaves that badly, and I have it from the horse’s mouth. “They aren’t. Be disquieted. My kids are.” And I can see why.

    Seriously, I think the examples are great. You have convinced me foreseeability is a good variable.

    In the theatre case, I would say the person who swapped the guns is responsible. The consequences were foreseeable to him.

    For the hired killer, it raises the issue of prosecuting for attempted anything.

    Maurizio, my trigger-pulling was in response to your “if you kill someone…”. Whether you killed them is what we are trying to determine, but I think we more or less agree that I would be responsible under any proposed system, since there was no free-will intervention. But nobody.really’s scenario is good: the act is swapping the gun, there is free-will intervention, so who is responsible?

  54. nobody.really

    > Scenario 1: We’re in a play in which my character pulls a gun from his holster and shoots your character. Unbeknownst to either of us, someone swapped my prop gun with a real, loaded gun, and I shoot you.

    > Did I pull the trigger? Yes. Did I pull the trigger with free will? I guess. Did you get shot as a result? Yes. But were the consequences of my action foreseeable? No. So am I liable/guilty? Probably not.

    > [M]y culpability would hang not on issues of free will, but on issues of foreseeability.

    > The act is swapping the gun, there is free will intervention, who is responsible?

    Scenario 1A: We’re in a play in which my character pulls a gun from his holster and shoots your character. Unbeknownst to either of us, the props manager notes that the fake gun had gotten scratched during the last performance. So he pulls out his real gun upon which the fake was modeled and begins touching up the fake – when he has a heart attack. The EMTs arrive to take him to the hospital and, in the confusion, the fake gun is knocked to the floor and under a couch, leaving only the real gun. I pick up the real gun believing it to be fake, and go to perform the scene — resulting in you getting shot.

    So now we have a scenario in which plenty of people apparently exercise their free will to do something, but no one acted with the intent of getting you shot. I humbly suggest that free will is not a very useful prism through which to analyze this issue; foreseeability is.

  55. Harold

    Your humility is unnecessary, Uriah. My question was posed in a tired state, and was intended as a rhetorical question whose answer was: the person who swapped the gun, even though there was free-will intervention.

    We could argue that the free-will intervener had no will to do the bad deed, so in fact was not free after all. It was imposed by the person who swapped the gun and did not inform the shooter. I think that works as a way to frame the issue as free will, but it essentially ends up the same. If you take an action whose results you cannot foresee, then your action is not free. That seems to introduce an unnecessary step just to include free will in the argument.

    It is refreshing to have such a discussion on the internet without descending into a slanging match; hopefully it has allowed us to understand each other’s positions better, and possibly adapt our own as a consequence. Hats off to SL for providing such a space. I only urge some more thought-provoking posts. And whilst I am sloppy and sentimental, I miss Ken B.
