Kern, M., & Chugh, D. (2009). Bounded Ethicality: The Perils of Loss Framing. Psychological Science, 20(3), 378-384. DOI: 10.1111/j.1467-9280.2009.02296.x
We all know that crime doesn't pay. But what if I told you that crime could prevent you from losing something? Would you be more likely to do it?
Mary Kern and Dolly Chugh put out a study earlier this year that looked at the issues of framing and ethical behavior. You should remember Kahneman and Tversky - I've talked about them before. (reference) If you don't, they were two of the guys who started looking at so-called cognitive biases: specific patterns in how we think that can sometimes lead us to incorrect (or at least different) conclusions. These biases appear to stem from our basic cognitive architecture, and I'm assuming they're universal, though I must admit I haven't seen any data either supporting or refuting this.
While there are quite a few biases we tend to fall victim to, one of their most famous (and the one Kern and Chugh focus on) is manifest in "prospect theory." The basic tenet of prospect theory is that "Losses Loom Larger than Gains" - that is, it'll hurt a lot more if I give you five dollars and then take it away than if I just tell you I'm not going to give you five dollars. Either way you end up without the five dollars, but when I take it away it's framed as a loss; otherwise, it's merely a forgone gain. A related idea is simply phrasing a probability as either a loss or a gain. For instance, if I said there is a 5% chance of winning money on a particular gamble, more people would take that gamble than if I said there is a 95% chance of losing money - the two probabilities describe the same gamble, just in different words. (The same holds if I said there is a 95% chance of winning versus a 5% chance of losing - so the effect isn't just driven by the presence of a large number.)
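To put a rough number on "looming larger": Kahneman and Tversky formalized this asymmetry with a value function that is steeper for losses than for gains. Here's a minimal Python sketch using the parameter estimates from Tversky and Kahneman's 1992 follow-up paper (curvature alpha = 0.88 for both gains and losses, loss-aversion coefficient lambda = 2.25); the five-dollar example is my illustration, not something from the Kern and Chugh study:

```python
# Minimal sketch of the prospect-theory value function, with the
# parameter estimates from Tversky & Kahneman (1992):
#   alpha = 0.88 (curvature of gains and losses), lambda = 2.25 (loss aversion).
# The dollar amounts below are illustrative, not from Kern & Chugh (2009).

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of an outcome x relative to the reference point."""
    if x >= 0:
        return x ** alpha           # gains: concave, relatively shallow
    return -lam * (-x) ** alpha     # losses: convex and steeper by factor lam

gain = value(5)    # being handed five dollars
loss = value(-5)   # having five dollars taken away

print(f"v(+$5) = {gain:.2f}")                       # ~  4.12
print(f"v(-$5) = {loss:.2f}")                       # ~ -9.27
print(f"loss/gain ratio = {abs(loss) / gain:.2f}")  # 2.25
```

With these parameters, losing five dollars feels about 2.25 times as bad as gaining five dollars feels good (and since the same curvature applies to both sides, that ratio holds for any amount) - which is exactly the asymmetry the framing manipulations in this study lean on.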
The idea behind the current study is that this loss aversion may actually be strong enough to cause people to do things they wouldn't normally do - in particular, to behave in ways that are considered unethical. (There's always a sticky divide between the definitions of moral and ethical, but the authors state that when they say "ethical" they're referring to a response for which there's a clear consensus about what's good or bad.) Being from a management school, Kern and Chugh focus on ethical behavior in a business context: their first experiment involves using insider information in a business deal, the second a negotiation over buying property, and the third lying while selling a stereo system. For each scenario, the authors had previously obtained ratings of the ethicality of the possible options, and there was a clear consensus on what was "right" and "wrong" in each case. In each of the experiments they manipulated losses vs. gains using probabilities (e.g., a 25% chance of gaining vs. a 75% chance of losing).
In each case, framing the probability as a loss reliably led people to act more unethically. So it does seem that raising the possibility of losing something makes people more apt to do the "wrong" thing to prevent that loss - more so than they would to gain something they wouldn't otherwise have. But how far can we extend this finding? The argument could be raised that this is all white-collar crime we're talking about here - shady deals, broken promises, and white lies. Would this bias affect people's behavior if they had to get their hands dirtier? One of the consistent findings in the criminology literature is that economic conditions tend to predict crime patterns: when times are good, crime goes down, and when times are bad, crime goes up. Could it be that it's not just lacking money, but actually losing money you used to have, that drives these trends? Say someone is given a 65% probability that their home will be foreclosed on, versus a 35% probability that they'll keep it. Would that make them more or less likely to go out and knock over a liquor store? Well, despite our current economic slump, crime stats have been pretty stable, which argues against that hypothesis - though looking at the situation on such a large scale might be hiding any small effects.
I'm tempted to link this article to another in the same issue of Psych Science - Inzlicht, McGregor, Hirsh, and Nash's "Neural Markers of Religious Conviction" - which links greater religious belief with less activation of the anterior cingulate cortex (ACC). The ACC is associated with several things that are generally considered bad - errors, discrepancies, and pain, for instance - and possibly loss? Despite my attempts to make some story out of the two, however, I haven't come up with an intellectually honest way of doing so. Not just yet, anyway.