It's been a while since the last update. Unfortunately, not a lot of forensic research has been catching my eye lately. Fortunately, on the other hand, I've been busy with other areas of research. So don't take this radio silence as a sign of inactivity.
I'm writing today not about some new peer-reviewed research, but instead about a talk I just attended. Zocalo is a fairly new organization in Los Angeles that focuses on stimulating dialogue on a wide variety of subjects. I've been to a few of their events before; tonight's topic was crime reduction. Mark Kleiman, a public policy expert and professor at UCLA, presented some suggestions and examples for effectively reducing crime while emptying prisons.
There wasn't much that I hadn't heard before in terms of general principles - swift and focused enforcement and punishment instead of empty threats and general-purpose jail sentences - but it was interesting to hear it from a different perspective. As a psychologist, I came in expecting to hear more about the criminals involved and their behavior - why they commit crimes and why they do or don't stop. However, as a policy honcho, Kleiman spoke in terms of running numbers and top-down control. Early on I actually worried that he might be as detached from actual behavior as an economist, especially when he brought out the rational cost-benefit analysis of days in prison per burglary take.
No cause for alarm, though. As he went on and provided more specific examples of programs that have worked, it all started sounding much more familiar. Prison, and the threat of prison, are not effective deterrents because they're abstractions, and many of the people who end up there go in with a stoic resignation. But slapping on an anklet with an attached curfew can shake even the hardest thug.
Similarly, focusing interventions works. Kleiman gave some examples of programs with meth users on probation in Hawai'i and cocaine dealers in North Carolina, where law enforcement decided to go straight to the source of the problem - repeat offenders - and, rather than lock them up, give them specific warnings (or threats) with plenty of follow-through. Exactly the same learning principles work for offenders as work for dogs as work for children, etc.: consistency and specificity.
There were some other thought nuggets I appreciated in the talk, like his stance on the death penalty (completely irrelevant to the issue of crime), but primarily I came away with some hope that the system of American law enforcement is gradually turning away from the heavy-handed tactics that landed us with 1% of our population in jail, and turning towards more effective (and cost-effective), focused enforcement. If nothing else, every politician should be reminded that the primary task of law enforcement is not to lock people up, but to prevent crime when possible and punish crime when necessary.
By the way, Mark Kleiman has a new book, When Brute Force Fails. Check it out. Send it to your congressperson.
Tuesday, September 29, 2009
Less Crime, Less Punishment
Friday, June 19, 2009
Ethics & Loss
Kern, M., & Chugh, D. (2009). Bounded Ethicality: The Perils of Loss Framing Psychological Science, 20 (3), 378-384 DOI: 10.1111/j.1467-9280.2009.02296.x
We all know that crime doesn't pay. But what if I told you that crime could prevent you from losing something? Would you be more likely to do it?
Mary Kern and Dolly Chugh put out a study earlier this year that looked at the issues of framing and ethical behavior. You should remember Kahneman and Tversky - I've talked about them before. (reference) If you don't, they were two of the guys who started looking at so-called cognitive biases, specific ways that we think that can sometimes lead to incorrect (or different) conclusions. They appear to stem from our basic cognitive architecture and I'm assuming they're universal, though I must admit that I haven't seen any data either supporting or refuting this.
While there are quite a few biases that we tend to fall victim to, one of their most famous (and the one that Kern and Chugh focus on) is manifest in "prospect theory." The basic tenet of prospect theory is that "Losses Loom Larger than Gains" - that is, it'll hurt a lot more if I give you five dollars and take it away than if I just tell you I'm not going to give you five dollars. Either way you don't have the five dollars, but when I take it away it's framed as a loss; otherwise, it's just not a gain. A similar idea is simply phrasing a probability as either a loss or a gain. For instance, if I said that there is a 5% chance of winning money on a particular gamble, more people would take that gamble than if I said there is a 95% chance of losing money - the probabilities are saying the same thing, but in a different way. (Same thing for if I said there is a 95% chance of winning versus 5% chance of losing - it's not just due to the presence of large numbers.)
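This asymmetry has a standard formalization: the prospect-theory value function, which is concave for gains and steeper for losses. Here's a minimal sketch in Python - the parameter values (alpha ≈ 0.88, lambda ≈ 2.25) are Tversky and Kahneman's own 1992 estimates, and the function is the textbook form, not anything specific to Kern and Chugh:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of an outcome x, relative to a reference point.

    Gains are discounted (concave curve); losses are discounted the same way
    but also multiplied by the loss-aversion coefficient lam, so they
    'loom larger' than equivalent gains.
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Getting $5 vs. having $5 taken away:
gain = value(5)     # about +4.12 subjective units
loss = value(-5)    # about -9.27 subjective units
# The loss hurts roughly 2.25 times as much as the gain feels good.
```

This is exactly why the 5%-chance-of-winning and 95%-chance-of-losing framings of the same gamble pull people in different directions: the "losing" framing routes the outcome through the steep side of the curve.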
The idea behind the current study is that this loss aversion may actually be strong enough to cause people to do things that they wouldn't normally do - in particular, to behave in ways that are considered unethical. (There's always a sticky divide between the definitions of moral and ethical, but the authors state that when they say "ethical" they're referring to a response that has a clear good or bad consensus.) Being from a management school, Kern and Chugh focus on ethical behavior in a business context - their first experiment involves using insider information in a business deal, the second is a negotiation about buying property, and the third is about lying while selling a stereo system. For each scenario, the authors had previously obtained ratings of the ethicality of the possible options, and there was a clear consensus on what was "right" and "wrong" in each case. In each of the experiments they manipulated losses vs. gains using probabilities (e.g., 25% chance of gaining vs. 75% chance of losing).
In each of the cases, framing the probability as a loss led to people reliably acting in a more unethical manner. So it does seem that raising the possibility of losing something will make people more apt to do the "wrong" thing in order to prevent that loss, more so than they would to gain something they wouldn't otherwise have. But how far can we extend this finding? The argument could be raised that this is all white-collar crime we're talking about here - shady deals, broken promises, and white lies. Would this bias affect people's behavior if they had to get their hands dirtier? One of the consistent findings in the criminology literature is that economic conditions tend to predict crime patterns; when times are good, crime goes down, but crime goes up when times are bad. Could it be that not just not having money, but actually losing money you used to have, is what's driving these trends? Let's say someone's given a 65% probability that their home will be foreclosed on, versus a 35% probability that they'll keep their home. Would that make them more or less likely to go out and knock over a liquor store? Well, despite our current economic slump, crime stats have been pretty stable, which would argue against that hypothesis. Looking at the situation on a large scale like this might be hiding any small effects, though.
I'm tempted to link this article to another in the same issue of Psych Science - Inzlicht, McGregor, Hirsh, and Nash's "Neural Markers of Religious Conviction" - which links greater religious belief with less activation of the anterior cingulate cortex (ACC). The ACC is associated with several things that are generally considered bad: errors, discrepancies, and pain, for instance - and possibly loss? Despite my attempts to make some story out of the two, however, I haven't come up with an intellectually honest way of doing so. Not just yet, anyways.
Thursday, March 19, 2009
This Place is a Dump... Let's Trash It: Experimentally Testing the "Broken Window" Hypothesis
Keizer, K., Lindenberg, S., & Steg, L. (2008). The Spreading of Disorder Science, 322 (5908), 1681-1685 DOI: 10.1126/science.1161405
In class we've talked a lot about theories of crime that focus on characteristics of the person (e.g., biological, psychological) or characteristics of the social environment and culture (e.g., sociological, learning). But what about the actual physical environment? Can the simple fact of where a person is located influence him to break the law?
"Broken Window Theory" suggests exactly that - that the more disorder is evident in an environment, the more petty crime and further disorder will spread among people. (The name comes from the idea that when one window in a house is broken, the others will go before long.) As the authors note, New York was counting on this theory in the mid-'90s when it started an anti-graffiti and street-cleanliness crackdown. This coincided with a decrease in petty crime, but since this was a quasi-experiment (at best) we can't really know what the cause was. Another big problem with this theory is that no one has ever really defined what's meant by the term "disorder."
Keizer, Lindenberg, and Steg set about testing Broken Window Theory experimentally, with better operationalization of the variables than had been attempted before. They conducted their experiments on unsuspecting members of the public (has anyone already claimed the term "guerrilla psychology"?) in two conditions: a norm was either violated or not (the contextual norm; example: graffiti right next to a "No graffiti" sign). The researchers then observed what people would do when presented with the opportunity to violate another, unrelated norm (the target norm; example: finding a flier attached to your bike handle, do you litter or pocket it?). The difference between the contextual norm and the target norm is major - broken window theory argues that seeing this disorder isn't just priming people to commit the same crime, but to commit crimes in general.
So what did they find? In a series of six experiments they found that violation of the context norm definitely increased people's likelihood of violating the target norm. The percentage of people violating the norm was consistently 2 to 3 times greater when the context norm was violated, for target norms including littering, trespassing, and even stealing money from a mailbox.
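To make that multiplier concrete, here's a toy calculation in Python. The head-counts are hypothetical (I'm not reproducing Keizer et al.'s raw data), but the resulting ratio lands in the 2-3x range they report:

```python
# Hypothetical counts for one flier experiment - NOT the paper's raw data.
def violation_rate(violators, passersby):
    """Proportion of passers-by who violated the target norm (littered)."""
    return violators / passersby

orderly = violation_rate(25, 75)     # context norm intact (clean wall)
disordered = violation_rate(52, 75)  # context norm violated (graffitied wall)

risk_ratio = disordered / orderly    # how many times likelier littering became
# With these counts, littering is a bit over twice as likely amid disorder.
```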
These findings seem to fit with research on justice, which looks at how people feel about law enforcement, among other things. One of the main findings is that how law enforcement treats a person influences how likely that person is to break the law - if a person experiences rude or unfair treatment at the hands of a police officer, he or she will have a less negative attitude towards violating laws. (Unfortunately, the reverse doesn't seem to be true; once people have this negative attitude, fair treatment doesn't do much to change their minds.) The mediating factor in this relationship seems to be a perception of legitimacy. That is, if I've had good relations with police, I'll feel like they're good people to listen to, and this legal system is a good one to follow. However, if I've had negative interactions with the police, then I'll feel like this legal system doesn't have as much relevance for me. Going along with this idea, it may be that when people see that others have broken the law, they'll feel that there's not much chance of enforcement - that the legal system doesn't have much legitimacy around here.
Alternatively, there's the idea of injunctive norms (standards of behavior for what not to do) and descriptive norms (what it's evident that most people do in a situation). The authors argue that when these norms conflict, as in their experiments, people will turn to other motivations, like what's easiest, in deciding what to do. At the end of the article the authors state that seeing the injunctive norms violated "...results in the inhibition of other norms... So once disorder has spread, merely fixing the broken windows or removing the graffiti may not be sufficient anymore." They hadn't mentioned before how, or on what evidence, norms become lastingly changed like this. If that's the case, it'll be hard to differentiate between a legitimacy explanation and a conflicting-norms explanation. I'll have to think about this some more...
Saturday, February 14, 2009
Do you not get it, or do you just not care? Psychopaths and mirror neurons
Fecteau, S., Pascual-Leone, A., & Théoret, H. (2008). Psychopathy and the mirror neuron system: Preliminary findings from a non-psychiatric sample Psychiatry Research, 160 (2), 137-144 DOI: 10.1016/j.psychres.2007.08.022
Agnew, Z., Bhakoo, K., & Puri, B. (2007). The human mirror system: A motor resonance theory of mind-reading Brain Research Reviews, 54 (2), 286-293 DOI: 10.1016/j.brainresrev.2007.04.003
One of the main hallmarks of psychopathy is a lack of empathy. A psychopath is able to look at a person who's suffering and not feel the unease that you or I (hopefully) would. The idea, then, is that this empathic understanding that other people have the same feelings as you or I is a major deterrent to harming others; we feel what we cause them to feel, so ultimately we're looking out for our own self-interest in not feeling bad.
Well, within the past few years it looks as though researchers have identified one of the major components of how empathy is generated in the brain. Agnew, Bhakoo, and Puri do a good job of summing up the current state of knowledge, but a few highlights: Electrode studies in monkeys have revealed that particular neurons are activated whenever the monkey performs a certain action, but also when it observes another monkey perform that action. Hence the name, "mirror neurons." Importantly, mirror neurons provide a link between perception and action. Beyond simple movements, mirror neurons also provide an explanation for how people can decipher others' intentions; we can differentiate whether a person is reaching for a cup or just moving their hand towards it, for example. In addition, mirror neurons have been linked with the ability to interpret and feel the emotions of others. This is what is usually referred to by the term "empathy," or emotional empathy, which can be contrasted with the motor empathy observed when viewing others' actions.
So the idea is that psychopaths are literally unable to imagine and feel the pain that they inflict on others. Fecteau and his colleagues set out to test this idea using TMS - Transcranial Magnetic Stimulation. (Warning: excessive abbreviations in this article. I'll do my best to navigate through it, but it can still get confusing. Damn psychiatrists.) The TMS apparatus is a high-powered (but hand-held) electromagnetic coil that delivers focused magnetic pulses to the brain, triggering neural firing in the targeted region; other researchers have used it to selectively knock out brain regions temporarily. Stimulate the motor cortex with TMS and the corresponding muscle twitches - a motor-evoked potential, or MEP, which you can pick up with an electromyogram. Previous studies have found that witnessing a painful event temporarily dampens the MEP, and that this dampening is specific to the muscle that's receiving the pain: watch a needle pierce someone's hand, and the MEPs recorded from your own hand shrink. If this dampening of the MEP signal is not observed, then the mirror neurons aren't responding to this particular stimulus, and aren't translating perception of others' pain into empathic pain.
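The logic of the measure can be sketched in a few lines of Python. The function and the amplitude numbers are hypothetical illustrations of the paradigm described above, not anything taken from Fecteau et al.:

```python
def mep_modulation(baseline_mv, observation_mv):
    """Percent change in MEP amplitude while watching pain, vs. baseline.

    Negative values mean dampening - the usual signature of motor empathy.
    A value near zero means the mirror system isn't responding to the stimulus.
    """
    return 100.0 * (observation_mv - baseline_mv) / baseline_mv

# Hypothetical amplitudes (millivolts): MEPs shrink while the subject watches
# a needle pierce the same muscle being probed.
typical_observer = mep_modulation(1.2, 0.8)  # about -33%: clear dampening
no_response = mep_modulation(1.2, 1.2)       # 0%: no empathic modulation
```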
So is this what they saw in psychopaths? Quick answer: no. Long answer: they saw the opposite.
Okay, to be fair, they didn't actually look at psychopaths. Instead they used the Psychopathic Personality Inventory to get a scalar measure of psychopathic traits in a normal population. There was enough variation in the scores, though, that they had a range of personality profiles among their subjects. Back to the results: they found that people who scored high on one subscale, coldheartedness, actually showed greater dampening of the MEP signal when witnessing pain. What's more, coldheartedness is the subscale that most directly assesses empathy - whether people are sensitive to the suffering of others or not. So what's going on? Well, the authors argue that they're measuring motor empathy, but that in psychopathic populations this becomes divorced from emotional empathy. Psychopathic people don't have any impairment in recognizing emotions or identifying when others are in pain; if anything, they're better at it than non-psychopaths. This is part of the reason why they're so good at manipulating people. If this theory is correct (and to bookend with my title), then a psychopath knows exactly what you're going through; they just don't care.
Monday, January 26, 2009
Causes of violence: Take 2
Lee, T., Chan, S., & Raine, A. (2008). Strong limbic and weak frontal activation to aggressive stimuli in spouse abusers Molecular Psychiatry, 13 (7), 655-656 DOI: 10.1038/mp.2008.46
Raine, A. (2008). From Genes to Brain to Antisocial Behavior Current Directions in Psychological Science, 17 (5), 323-328 DOI: 10.1111/j.1467-8721.2008.00599.x
If the name Adrian Raine sounds familiar to you, then congratulations! You were paying attention. Raine was part of the USC group we had talked about in class – the one with the English accent, remember? Though Raine is now at the University of Pennsylvania, he's still putting out high-quality research looking into the biological bases of criminality.
Case in point: his 2008 paper with Lee and Chan. This was an imaging study performed to look for neurological differences between men who abuse their spouses and those who don't. The idea is that men who tend towards battery are less able to control their reactions to negative emotions, and thus more likely to act on them - violently. The study relied on that old psychological standby for measuring inhibition, the Stroop task, along with a modified Stroop that focused on emotional content. What they found was that the abusers weren't any worse than the controls on the basic Stroop task, but when emotions were brought in they suddenly became significantly slower to react. This was reinforced by the fMRI data, which showed both less activity in prefrontal regions, responsible for control and inhibition, and greater activity in limbic regions (associated with emotion), including the cingulate gyrus and hippocampus.
So there are psychobiological distinctions between men who do and do not abuse their spouses. Fantastic, right? We can just do brain scans on suspects to determine whether or not they're at high risk of offending and separate them out - maybe for treatment, maybe just for the protection of society. Well, if you're still excited about these possibilities (and I would hope that my neo-Orwellian rhetoric has turned you off by now), let me refer you to his second paper, published in Current Directions in Psych. Science. Here, Raine quickly sums up where we stand in the nature/nurture dialogue (it's not a debate anymore) on antisociality: "...the field is now moving on to the more important, third-generation question: 'Which genes predispose to which kinds of antisocial behavior?'" Part of that answer is mutations in genes like MAOA leading to things like deficits in moral reasoning.
There are two critical things to remember, though: one, that genes (usually) only code for proteins, not for specific behaviors; and two, that biology is not destiny. Raine points out that epigenetic factors help determine the ultimate phenotype. The idea that what's coded in DNA doesn't translate directly into biological manifestations isn't new, but it's easy to forget. Things like diet or hormone levels in pregnant mothers can and do have life-long developmental consequences. Raine raises some pointed ethical questions in his penultimate paragraph that should sound familiar to anyone who's studied or thought about genetics and behavior. The only thing I'll add is a reminder to exercise restraint when interpreting any study that claims to have found a link between some gene and... well, anything, really.
Raine, A. (2008). From genes to brain to antisocial behavior. Current Directions in Psychological Science, 17(5), 323-328. DOI: 10.1111/j.1467-8721.2008.00599.x
Monday, January 12, 2009
Youth Violence is Like a Rose... Wait, That's Not It
Dodge, K. (2008). Framing public policy and prevention of chronic violence in American youths. American Psychologist, 63(7), 573-590. DOI: 10.1037/0003-066X.63.7.573
I'm taking a bit of a different tack this time, in that the paper I'm discussing isn't a research paper. It's not quite a standard review paper, either, but it draws on psychology research to present ideas for application. It's also a great paper for showing the effect that psychology can have on different fields, and vice-versa. Economics, public policy, public health, and sociology all contribute to the understanding of the problem of youth violence. Research from all the different areas of psychology – developmental, cognitive, abnormal, etc. – touches on similar ground as these other disciplines, and this paper works at the intersections of those fields.
So this paper is about a very serious topic: youth violence. Just from the title, though, it might seem somewhat trivial. I mean, coming up with metaphors for what youth violence is like? That's not science, right? Does that even matter? But if you've read the paper (and I hope you have), you'll see that Dodge makes the case that finding an accurate metaphor, or the right frame, for a problem is critically important from a public health and policy standpoint. Psychologists know a lot about what contributes to youth violence and what doesn't, but that knowledge doesn't make a difference unless it ends up in the hands of people who are in a position to address the root causes.
Obviously not everyone is a scientist, and even if you are a scientist that doesn't mean you're always objective and thorough in evaluating evidence. Daniel Kahneman and Amos Tversky are two of the major names when it comes to theories of how people use (or don't use) reasoning to think about issues. For instance, they developed what's called prospect theory, which looks at how people weigh gains and losses when making decisions. Instead of being purely rational actors, who would choose whichever option offered even a slight net gain, Kahneman and Tversky found that people treat a loss as much more significant than a comparable gain, and make decisions accordingly. In fact, it appears that people aren't comfortable accepting a risky bet unless the potential gain is roughly twice the potential loss.
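That asymmetry can be made concrete with the prospect theory value function. Here's a minimal sketch in Python; the parameter values (the curvature exponents and the loss-aversion coefficient lambda ≈ 2.25) are the commonly cited estimates from Kahneman and Tversky's later work, used here purely as illustrative assumptions rather than anything from Dodge's paper:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of an outcome x: a gain if x > 0, a loss if x < 0.

    Gains and losses are both discounted (alpha, beta < 1), but losses
    are additionally scaled up by the loss-aversion coefficient lam.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# A $100 loss looms larger than a $100 gain:
gain = prospect_value(100)
loss = prospect_value(-100)

# The 50/50 gamble "win $100 or lose $100" has negative subjective
# value overall, which is why people tend to demand a potential gain
# around twice the potential loss before accepting such a bet.
print(gain, loss, gain + loss)
```

With lam greater than 1, the subjective sting of losing $100 outweighs the subjective pleasure of winning $100, so a fair coin-flip gamble feels like a bad deal even though its expected monetary value is zero.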
Which brings us back to framing: how you set up an argument has a major influence on whether or not someone accepts and acts on a position. Dodge lists several failed metaphors – metaphors that either inaccurately portray youth violence or haven't gained traction in the public mind – as well as four recommendations of his own for metaphors that might be more successful. A critical aspect of his suggested metaphors is that most of them accentuate the potential losses if the problems aren't addressed; if you don't take preventative measures to lower your blood pressure, you are likely to develop heart disease, for instance.
This isn't the only issue, of course. Dodge describes several other considerations that stem from psychological theory and research, such as analogical transfer and comprehension, but I'll let you read about those on your own. Here's a question to think about, though: do you think that a single metaphor is adequate to capture and describe a complex phenomenon like youth violence for the public? For instance, cultural and subgroup differences might be important in determining whether or not a frame is accepted or rejected. On the other hand, it may be that presenting multiple frames has unintended consequences, such as reduced confidence in any solutions that are presented.