Listening to Season One of NPR’s podcast Serial, which is the story of a real-life murder case, I came away about 80% sure that the defendant was guilty and 100% sure that I’d vote to convict him. This got me to pondering whether my standard for reasonable doubt (apparently satisfied by something less than 80% certainty in this case) is in fact reasonable.
So I wrote down the simplest model I could think of — a model too simple to give useful numerical cutoffs, but still a starting point — and I learned something surprising. Namely (at least in this very simple model), the harsher the prospective punishment, the laxer you should be about reasonable doubt. Or to say this another way: When the penalty is a year in jail, you should vote to convict only when the evidence is very strong. When the penalty is 50 years, you should vote to convict even when it’s pretty weak.
(The standard here for what you “should” do is this: When you lower your standards, you increase the chance that Mr. or Ms. Average will be convicted of a crime, and lower the chance that the same Mr. or Ms. Average will become a crime victim. The right standard is the one that balances those risks in the way that Mr. or Ms. Average finds the least distasteful.)
Here (I think) is what’s going on: A weak penalty has very little deterrent effect — so little that it’s not worth convicting an innocent person over. But a strong penalty can have such a large deterrent effect that it’s worth tolerating a lot of false convictions to get a few true ones.
In case I’ve made any mistakes (and it wouldn’t be the first time), you can check this for yourself. (Trigger warning: This might get slightly geeky.) I assumed each crime has a fixed cost C to the victim and a random benefit B to the perpetrator. For concreteness, we can take C=2 and take Log(B) to follow a standard normal distribution, though the results are pretty robust to these particulars. (Or, much more simply and probably more sensibly, take B to be uniformly distributed from 0 to C — the qualitative results are unchanged by this.)
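If you want to poke at the deterrence intuition numerically, here is a minimal sketch using the Log(B)-standard-normal assumption above. Everything else in it is my own simplifying assumption, not necessarily the model in this post: I suppose a risk-neutral perpetrator who offends only when the benefit B exceeds the expected punishment p·F, where p is the probability of conviction and F is the size of the penalty, and the function name `deterred_fraction` is mine.

```python
# Sanity check of the deterrence intuition, under my own assumption
# (not necessarily this post's exact model): a risk-neutral perpetrator
# offends iff B > p * F, where p = probability of conviction and
# F = penalty. With Log(B) standard normal (as in the post), the
# deterred fraction is P(B <= p*F) = Phi(log(p*F)).
from math import log
from statistics import NormalDist

def deterred_fraction(p: float, F: float) -> float:
    """Fraction of would-be perpetrators deterred when log(B) ~ N(0, 1)."""
    return NormalDist().cdf(log(p * F))

# Weak penalty (F = 1): even conviction with certainty deters only half.
print(f"F=1,  p=1.0: {deterred_fraction(1.0, 1):.3f} deterred")
# Harsh penalty (F = 50): a 10% conviction chance already deters ~95%.
print(f"F=50, p=0.1: {deterred_fraction(0.1, 50):.3f} deterred")
```

Under these (admittedly crude) assumptions, the weak penalty deters at most half of offenders no matter how reliably you convict, while the harsh penalty does most of its deterring even at a low conviction rate — which is the flavor of the tradeoff described above.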