
The Brain on the Stand

I. Mr. Weinstein’s Cyst

When historians of the future try to identify the moment that neuroscience began to transform the American legal system, they may point to a little-noticed case from the early 1990s. The case involved Herbert Weinstein, a 65-year-old ad executive who was charged with strangling his wife, Barbara, to death and then, in an effort to make the murder look like a suicide, throwing her body out the window of their 12th-floor apartment on East 72nd Street in Manhattan. Before the trial began, Weinstein’s lawyer suggested that his client should not be held responsible for his actions because of a mental defect — namely, an abnormal cyst nestled in his arachnoid membrane, which surrounds the brain like a spider web.

The implications of the claim were considerable. American law holds people criminally responsible unless they act under duress (with a gun pointed at the head, for example) or unless they suffer from a serious defect in rationality — like not being able to tell right from wrong. But if you suffer from such a serious defect, the law generally doesn’t care why — whether it’s an unhappy childhood or an arachnoid cyst or both. To suggest that criminals could be excused because their brains made them do it seems to imply that anyone whose brain isn’t functioning properly could be absolved of responsibility. But should judges and juries really be in the business of defining the normal or properly working brain? And since all behavior is caused by our brains, wouldn’t this mean all behavior could potentially be excused?

The prosecution at first tried to argue that evidence of Weinstein’s arachnoid cyst shouldn’t be admitted in court. One of the government’s witnesses, a forensic psychologist named Daniel Martell, testified that brain-scanning technologies were new and untested, and their implications weren’t yet widely accepted by the scientific community. Ultimately, on Oct. 8, 1992, Judge Richard Carruthers issued a Solomonic ruling: Weinstein’s lawyers could tell the jury that brain scans had identified an arachnoid cyst, but they couldn’t tell jurors that arachnoid cysts were associated with violence. Even so, the prosecution team seemed to fear that simply exhibiting images of Weinstein’s brain in court would sway the jury. Eleven days later, on the morning of jury selection, they agreed to let Weinstein plead guilty in exchange for a reduced charge of manslaughter.

After the Weinstein case, Daniel Martell found himself in so much demand to testify as an expert witness that he started a consulting business called Forensic Neuroscience. Hired by defense teams and prosecutors alike, he has testified over the past 15 years in several hundred criminal and civil cases. In those cases, neuroscientific evidence has been admitted to show everything from head trauma to the tendency of violent video games to make children behave aggressively. But Martell told me that it’s in death-penalty litigation that neuroscience evidence is having its most revolutionary effect. “Some sort of organic brain defense has become de rigueur in any sort of capital defense,” he said. Lawyers routinely order scans of convicted defendants’ brains and argue that a neurological impairment prevented them from controlling themselves. The prosecution counters that the evidence shouldn’t be admitted, but under the relaxed standards for mitigating evidence during capital sentencing, it usually is. Indeed, a Florida court has held that the failure to admit neuroscience evidence during capital sentencing is grounds for a reversal. Martell remains skeptical about the worth of the brain scans, but he observes that they’ve “revolutionized the law.”

The extent of that revolution is hotly debated, but the influence of what some call neurolaw is clearly growing. Neuroscientific evidence has persuaded jurors to sentence defendants to life imprisonment rather than to death; courts have also admitted brain-imaging evidence during criminal trials to support claims that defendants like John W. Hinckley Jr., who tried to assassinate President Reagan, are insane. Carter Snead, a law professor at Notre Dame, drafted a staff working paper on the impact of neuroscientific evidence in criminal law for President Bush’s Council on Bioethics. The report concludes that neuroimaging evidence is of mixed reliability but “the large number of cases in which such evidence is presented is striking.” That number will no doubt increase substantially. Proponents of neurolaw say that neuroscientific evidence will have a large impact not only on questions of guilt and punishment but also on the detection of lies and hidden bias, and on the prediction of future criminal behavior. At the same time, skeptics fear that the use of brain-scanning technology as a kind of super mind-reading device will threaten our privacy and mental freedom, leading some to call for the legal system to respond with a new concept of “cognitive liberty.”

One of the most enthusiastic proponents of neurolaw is Owen Jones, a professor of law and biology at Vanderbilt. Jones (who happens to have been one of my law-school classmates) has joined a group of prominent neuroscientists and law professors who have applied for a large MacArthur Foundation grant; they hope to study a wide range of neurolaw questions, like: Do sexual offenders and violent teenagers show unusual patterns of brain activity? Is it possible to capture brain images of chronic neck pain when someone claims to have suffered whiplash? In the meantime, Jones is turning Vanderbilt into a kind of Los Alamos for neurolaw. The university has just opened a $27 million neuroimaging center and has poached leading neuroscientists from around the world; soon, Jones hopes to enroll students in the nation’s first program in law and neuroscience. “It’s breathlessly exciting,” he says. “This is the new frontier in law and science — we’re peering into the black box to see how the brain is actually working, that hidden place in the dark quiet, where we have our private thoughts and private reactions — and the law will inevitably have to decide how to deal with this new technology.”

II. A Visit to Vanderbilt

Owen Jones is a disciplined and quietly intense man, and his enthusiasm for the transformative power of neuroscience is infectious. With René Marois, a neuroscientist in the psychology department, Jones has begun a study of how the human brain reacts when asked to impose various punishments. Informally, they call the experiment Harm and Punishment — and they offered to make me one of their first subjects.

We met in Jones’s pristine office, which is decorated with a human skull and calipers, like those that phrenologists once used to measure the human head; his father is a dentist, and his grandfather was an electrical engineer who collected tools. We walked over to Vanderbilt’s Institute of Imaging Science, which, although still surrounded by scaffolding, was as impressive as Jones had promised. The basement contains one of the few 7-tesla magnetic-resonance-imaging scanners in the world. For Harm and Punishment, Jones and Marois use a less powerful 3-tesla scanner, which is the typical research M.R.I.

We then made our way to the scanner. After removing all metal objects — including a belt and a stray dry-cleaning tag with a staple — I put on earphones and a helmet that was shaped like a birdcage to hold my head in place. The lab assistant turned off the lights and left the room; I lay down on the gurney and, clutching a panic button, was inserted into the magnet. All was dark except for a screen flashing hypothetical crime scenarios, like this one: “John, who lives at home with his father, decides to kill him for the insurance money. After convincing his father to help with some electrical work in the attic, John arranges for him to be electrocuted. His father survives the electrocution, but he is hospitalized for three days with injuries caused by the electrical shock.” I was told to press buttons indicating the appropriate level of punishment, from 0 to 9, as the magnet recorded my brain activity.

After I spent 45 minutes trying not to move an eyebrow while assigning punishments to dozens of sordid imaginary criminals, Marois told me through the intercom to try another experiment: namely, to think of familiar faces and places in sequence, without telling him whether I was starting with faces or places. I thought of my living room, my wife, my parents’ apartment and my twin sons, trying all the while to avoid improper thoughts for fear they would be discovered. Then the experiments were over, and I stumbled out of the magnet.

The next morning, Owen Jones and I reported to René Marois’s laboratory for the results. Marois’s graduate students, who had been up late analyzing my brain, were smiling broadly. Because I had moved so little in the machine, they explained, my brain activity was easy to read. “Your head movement was incredibly low, and you were the harshest punisher we’ve had,” Josh Buckholtz, one of the grad students, said with a happy laugh. “You were a researcher’s dream come true!” Buckholtz tapped the keyboard, and a high-resolution 3-D image of my brain appeared on the screen in vivid colors. Tiny dots flickered back and forth, showing my eyes moving as they read the lurid criminal scenarios. Although I was only the fifth subject to be put in the scanner, Marois emphasized that my punishment ratings were higher than average. In one case, I assigned a 7 where the average punishment was 4. “You were focusing on the intent, and the others focused on the harm,” Buckholtz said reassuringly.

Marois explained that he and Jones wanted to study the interactions among the emotion-generating regions of the brain, like the amygdala, and the prefrontal regions responsible for reason. “It is also possible that the prefrontal cortex is critical for attributing punishment, making the essential decision about what kind of punishment to assign,” he suggested. Marois stressed that in order to study that possibility, more subjects would have to be put into the magnet. But if the prefrontal cortex does turn out to be critical for selecting among punishments, Jones added, it could be highly relevant for lawyers selecting a jury. For example, he suggested, lawyers might even select jurors for different cases based on their different brain-activity patterns. In a complex insider-trading case, for example, perhaps the defense would “like to have a juror making decisions on maximum deliberation and minimum emotion”; in a government entrapment case, emotional reactions might be more appropriate.

We then turned to the results of the second experiment, in which I had been asked to alternate between thinking of faces and places without disclosing the order. “We think we can guess what you were thinking about, even though you didn’t tell us the order you started with,” Marois said proudly. “We think you started with places and we will prove to you that it wasn’t just luck.” Marois showed me a picture of my parahippocampus, the area of the brain that responds strongly to places and the recognition of scenes. “It’s lighting up like Christmas on all cylinders,” Marois said. “It worked beautifully, even though we haven’t tried this before here.”

He then showed a picture of the fusiform area, which is responsible for facial recognition. It, too, lighted up every time I thought of a face. “This is a potentially very serious legal implication,” Jones broke in; the technology, he noted, could reveal what people are thinking about even if they deny it. He pointed to a series of practical applications. Because subconscious memories of faces and places may be more reliable than conscious memories, witness lineups could be transformed. A child who claimed to have been victimized by a stranger, moreover, could be shown pictures of the faces of suspects to see which one lighted up the face-recognition area in ways suggesting familiarity.

Jones and Marois talked excitedly about the implications of their experiments for the legal system. If they discovered a significant gap between people’s hard-wired sense of how severely certain crimes should be punished and the actual punishments assigned by law, federal sentencing guidelines might be revised, on the principle that the law shouldn’t diverge too far from deeply shared beliefs. Experiments might help to develop a deeper understanding of the criminal brain, or of the typical brain predisposed to criminal activity.

III. The End of Responsibility?

Indeed, as the use of functional M.R.I. results becomes increasingly common in courtrooms, judges and juries may be asked to draw new and sometimes troubling lines between “normal” and “abnormal” brains. Ruben Gur, a professor of psychology at the University of Pennsylvania School of Medicine, specializes in doing just that. Gur began his expert-witness career in the mid-1990s when a colleague asked him to help in the trial of a convicted serial killer in Florida named Bobby Joe Long. Known as the “classified-ad rapist,” because he would respond to classified ads placed by women offering to sell household items, then rape and kill them, Long was sentenced to death after he committed at least nine murders in Tampa. Gur was called as a national expert in positron-emission tomography, or PET scans, in which patients are injected with a solution containing radioactive markers that illuminate their brain activity. After examining Long’s PET scans, Gur testified that a motorcycle accident that had left Long in a coma had also severely damaged his amygdala. It was after emerging from the coma that Long committed his first rape.

“I didn’t have the sense that my testimony had a profound impact,” Gur told me recently — Long is still filing appeals — but he has testified in more than 20 capital cases since then. He wrote a widely circulated affidavit arguing that adolescents are not as capable of controlling their impulses as adults because the development of neurons in the prefrontal cortex isn’t complete until the early 20s. Based on that affidavit, Gur was asked to contribute to the preparation of one of the briefs filed by neuroscientists and others in Roper v. Simmons, the landmark case in which a divided Supreme Court struck down the death penalty for offenders who committed crimes when they were under the age of 18.

The leading neurolaw brief in the case, filed by the American Medical Association and other groups, argued that because “adolescent brains are not fully developed” in the prefrontal regions, adolescents are less able than adults to control their impulses and should not be held fully accountable “for the immaturity of their neural anatomy.” In his majority decision, Justice Anthony Kennedy declared that “as any parent knows and as the scientific and sociological studies” cited in the briefs “tend to confirm, ‘[a] lack of maturity and an underdeveloped sense of responsibility are found in youth more often than in adults.’ ” Although Kennedy did not cite the neuroscience evidence specifically, his indirect reference to the scientific studies in the briefs led some supporters and critics to view the decision as the Brown v. Board of Education of neurolaw.

One important question raised by the Roper case was where to draw the line in considering neuroscience evidence as a legal mitigation or excuse. Should courts be in the business of deciding when to mitigate someone’s criminal responsibility because his brain functions improperly, whether because of age, in-born defects or trauma? As we learn more about criminals’ brains, will we have to redefine our most basic ideas of justice?

Two of the most ardent supporters of the claim that neuroscience requires the redefinition of guilt and punishment are Joshua D. Greene, an assistant professor of psychology at Harvard, and Jonathan D. Cohen, a professor of psychology who directs the neuroscience program at Princeton. Greene got Cohen interested in the legal implications of neuroscience, and together they conducted a series of experiments exploring how people’s brains react to moral dilemmas involving life and death. In particular, they wanted to test people’s responses in the f.M.R.I. scanner to variations of the famous trolley problem, which philosophers have been arguing about for decades.

The trolley problem goes something like this: Imagine a train heading toward five people who are going to die if you don’t do anything. If you hit a switch, the train veers onto a side track and kills another person. Most people confronted with this scenario say it’s O.K. to hit the switch. By contrast, imagine that you’re standing on a footbridge that spans the train tracks, and the only way you can save the five people is to push an obese man standing next to you off the footbridge so that his body stops the train. Under these circumstances, most people say it’s not O.K. to kill one person to save five.

“I wondered why people have such clear intuitions,” Greene told me, “and the core idea was to confront people with these two cases in the scanner and see if we got more of an emotional response in one case and reasoned response in the other.” As it turns out, that’s precisely what happened: Greene and Cohen found that the brain region associated with deliberate problem solving and self-control, the dorsolateral prefrontal cortex, was especially active when subjects confronted the first trolley hypothetical, in which most of them made a utilitarian judgment about how to save the greatest number of lives. By contrast, emotional centers in the brain were more active when subjects confronted the second trolley hypothetical, in which they tended to recoil at the idea of personally harming an individual, even under such wrenching circumstances. “This suggests that moral judgment is not a single thing; it’s intuitive emotional responses and then cognitive responses that are duking it out,” Greene said.

“To a neuroscientist, you are your brain; nothing causes your behavior other than the operations of your brain,” Greene says. “If that’s right, it radically changes the way we think about the law. The official line in the law is all that matters is whether you’re rational, but you can have someone who is totally rational but whose strings are being pulled by something beyond his control.” In other words, even someone who has the illusion of making a free and rational choice between soup and salad may be deluding himself, since the choice of salad over soup is ultimately predestined by forces hard-wired in his brain. Greene insists that this insight means that the criminal-justice system should abandon the idea of retribution — the idea that bad people should be punished because they have freely chosen to act immorally — which has been the focus of American criminal law since the 1970s, when rehabilitation went out of fashion. Instead, Greene says, the law should focus on deterring future harms. In some cases, he supposes, this might mean lighter punishments. “If it’s really true that we don’t get any prevention bang from our punishment buck when we punish that person, then it’s not worth punishing that person,” he says. (On the other hand, Carter Snead, the Notre Dame scholar, maintains that capital defendants who are not considered fully blameworthy under current rules could be executed more readily under a system that focused on preventing future harms.)


Others agree with Greene and Cohen that the legal system should be radically refocused on deterrence rather than on retribution. Since the celebrated M’Naughten case in 1843, involving a paranoid British assassin, English and American courts have recognized an insanity defense only for those who are unable to appreciate the difference between right and wrong. (This is consistent with the idea that only rational people can be held criminally responsible for their actions.) According to some neuroscientists, that rule makes no sense in light of recent brain-imaging studies. “You can have a horrendously damaged brain where someone knows the difference between right and wrong but nonetheless can’t control their behavior,” says Robert Sapolsky, a neurobiologist at Stanford. “At that point, you’re dealing with a broken machine, and concepts like punishment and evil and sin become utterly irrelevant. Does that mean the person should be dumped back on the street? Absolutely not. You have a car with the brakes not working, and it shouldn’t be allowed to be near anyone it can hurt.”

Even as these debates continue, some skeptics contend that both the hopes and fears attached to neurolaw are overblown. “There’s nothing new about the neuroscience ideas of responsibility; it’s just another material, causal explanation of human behavior,” says Stephen J. Morse, professor of law and psychiatry at the University of Pennsylvania. “How is this different than the Chicago school of sociology,” which tried to explain human behavior in terms of environment and social structures? “How is it different from genetic explanations or psychological explanations? The only thing different about neuroscience is that we have prettier pictures and it appears more scientific.”

Morse insists that “brains do not commit crimes; people commit crimes” — a conclusion he suggests has been ignored by advocates who, “infected and inflamed by stunning advances in our understanding of the brain . . . all too often make moral and legal claims that the new neuroscience . . . cannot sustain.” He calls this “brain overclaim syndrome” and cites as an example the neuroscience briefs filed in the Supreme Court case Roper v. Simmons to question the juvenile death penalty. “What did the neuroscience add?” he asks. If adolescent brains caused all adolescent behavior, “we would expect the rates of homicide to be the same for 16- and 17-year-olds everywhere in the world — their brains are alike — but in fact, the homicide rates of Danish and Finnish youths are very different than American youths.” Morse agrees that our brains bring about our behavior — “I’m a thoroughgoing materialist, who believes that all mental and behavioral activity is the causal product of physical events in the brain” — but he disagrees that the law should excuse certain kinds of criminal conduct as a result. “It’s a total non sequitur,” he says. “So what if there’s biological causation? Causation can’t be an excuse for someone who believes that responsibility is possible. Since all behavior is caused, this would mean all behavior has to be excused.” Morse cites the case of Charles Whitman, a man who, in 1966, killed his wife and his mother, then climbed up a tower at the University of Texas and shot and killed 13 more people before being shot by police officers. Whitman was discovered after an autopsy to have a tumor that was putting pressure on his amygdala. “Even if his amygdala made him more angry and volatile, since when are anger and volatility excusing conditions?” Morse asks. “Some people are angry because they had bad mommies and daddies and others because their amygdalas are mucked up. The question is: When should anger be an excusing condition?”

Still, Morse concedes that there are circumstances under which new discoveries from neuroscience could challenge the legal system at its core. “Suppose neuroscience could reveal that reason actually plays no role in determining human behavior,” he suggests tantalizingly. “Suppose I could show you that your intentions and your reasons for your actions are post hoc rationalizations that somehow your brain generates to explain to you what your brain has already done” without your conscious participation. If neuroscience could reveal us to be automatons in this respect, Morse is prepared to agree with Greene and Cohen that criminal law would have to abandon its current ideas about responsibility and seek other ways of protecting society.

Some scientists are already pushing in this direction. In a series of famous experiments in the 1970s and ’80s, Benjamin Libet measured people’s brain activity while telling them to move their fingers whenever they felt like it. Libet detected brain activity suggesting a readiness to move the finger half a second before the actual movement and about 400 milliseconds before people became aware of their conscious intention to move their finger. Libet argued that this leaves 100 milliseconds for the conscious self to veto the brain’s unconscious decision, or to give way to it — suggesting, in the words of the neuroscientist Vilayanur S. Ramachandran, that we have not free will but “free won’t.”

Morse is not convinced that the Libet experiments reveal us to be helpless automatons. But he does think that the study of our decision-making powers could bear some fruit for the law. “I’m interested,” he says, “in people who suffer from drug addictions, psychopaths and people who have intermittent explosive disorder — that’s people who have no general rationality problem other than they just go off.” In other words, Morse wants to identify the neural triggers that make people go postal. “Suppose we could show that the higher deliberative centers in the brain seem to be disabled in these cases,” he says. “If these are people who cannot control episodes of gross irrationality, we’ve learned something that might be relevant to the legal ascription of responsibility.” That doesn’t mean they would be let off the hook, he emphasizes: “You could give people a prison sentence and an opportunity to get fixed.”

IV. Putting the Unconscious on Trial

If debates over criminal responsibility long predate the f.M.R.I., so do debates over the use of lie-detection technology. What’s new is the prospect that lie detectors in the courtroom will become much more accurate, and correspondingly more intrusive. There are, at the moment, two lie-detection technologies that rely on neuroimaging, although the value and accuracy of both are sharply contested. The first, developed by Lawrence Farwell in the 1980s, is known as “brain fingerprinting.” Subjects put on an electrode-filled helmet that measures a brain wave called P300, which, according to Farwell, changes its frequency when people recognize images, pictures, sights and smells. After showing a suspect pictures of familiar places and measuring his P300 activation patterns, government officials could, at least in theory, show a suspect pictures of places he may or may not have seen before — a Qaeda training camp, for example, or a crime scene — and compare the activation patterns. (By detecting not only lies but also honest cases of forgetfulness, the technology could expand our very idea of lie detection.)

The second lie-detection technology uses f.M.R.I. machines to compare the brain activity of liars and truth tellers. It is based on a test called Guilty Knowledge, developed by Daniel Langleben at the University of Pennsylvania in 2001. Langleben gave subjects a playing card before they entered the magnet and told them to answer no to a series of questions, including whether they had the card in question. Langleben and his colleagues found that certain areas of the brain lighted up when people lied.

Two companies, No Lie MRI and Cephos, are now competing to refine f.M.R.I. lie-detection technology so that it can be admitted in court and commercially marketed. I talked to Steven Laken, the president of Cephos, which plans to begin selling its products this year. “We have two to three people who call every single week,” he told me. “They’re in legal proceedings throughout the world, and they’re looking to bolster their credibility.” Laken said the technology could have “tremendous applications” in civil and criminal cases. On the government side, he said, the technology could replace highly inaccurate polygraphs in screening for security clearances, as well as in trying to identify suspected terrorists’ native languages and close associates. “In lab studies, we’ve been in the 80- to 90-percent-accuracy range,” Laken says. This is similar to the accuracy rate for polygraphs, which are not considered sufficiently reliable to be allowed in most legal cases. Laken says he hopes to reach the 90-percent- to 95-percent-accuracy range — which should be high enough to satisfy the Supreme Court’s standards for the admission of scientific evidence. Judy Illes, director of Neuroethics at the Stanford Center for Biomedical Ethics, says, “I would predict that within five years, we will have technology that is sufficiently reliable at getting at the binary question of whether someone is lying that it may be utilized in certain legal settings.”

If and when lie-detection f.M.R.I.’s are admitted in court, they will raise vexing questions of self-incrimination and privacy. Hank Greely, a law professor and head of the Stanford Center for Law and the Biosciences, notes that prosecution and defense witnesses might have their credibility questioned if they refused to take a lie-detection f.M.R.I., as might parties and witnesses in civil cases. Unless courts found the tests to be shocking invasions of privacy, like stomach pumps, witnesses could even be compelled to have their brains scanned. And equally vexing legal questions might arise as neuroimaging technologies move beyond telling whether or not someone is lying and begin to identify the actual content of memories. Michael Gazzaniga, a professor of psychology at the University of California, Santa Barbara, and author of “The Ethical Brain,” notes that within 10 years, neuroscientists may be able to show that there are neurological differences when people testify about their own previous acts and when they testify to something they saw. “If you kill someone, you have a procedural memory of that, whereas if I’m standing and watch you kill somebody, that’s an episodic memory that uses a different part of the brain,” he told me. Even if witnesses don’t have their brains scanned, neuroscience may lead judges and jurors to conclude that certain kinds of memories are more reliable than others because of the area of the brain in which they are processed. Further into the future, and closer to science fiction, lies the possibility of memory downloading. “One could even, just barely, imagine a technology that might be able to ‘read out’ the witness’s memories, intercepted as neuronal firings, and translate it directly into voice, text or the equivalent of a movie,” Hank Greely writes.

Greely acknowledges that lie-detection and memory-retrieval technologies like this could pose a serious challenge to our freedom of thought, which is now defended largely by the First Amendment protections for freedom of expression. “Freedom of thought has always been buttressed by the reality that you could only tell what someone thought based on their behavior,” he told me. “This technology holds out the possibility of looking through the skull and seeing what’s really happening, seeing the thoughts themselves.” According to Greely, this may challenge the principle that we should be held accountable for what we do, not what we think. “It opens up for the first time the possibility of punishing people for their thoughts rather than their actions,” he says. “One reason thought has been free in the harshest dictatorships is that dictators haven’t been able to detect it.” He adds, “Now they may be able to, putting greater pressure on legal constraints against government interference with freedom of thought.”

In the future, neuroscience could also revolutionize the way jurors are selected. Steven Laken, the president of Cephos, says that jury consultants might seek to put prospective jurors in f.M.R.I.’s. “You could give videotapes of the lawyers and witnesses to people when they’re in the magnet and see what parts of their brains light up,” he says. A situation like this would raise vexing questions about jurors’ prejudices — and what makes for a fair trial. Recent experiments have suggested that people who believe themselves to be free of bias may harbor plenty of it all the same.

The experiments, conducted by Elizabeth Phelps, who teaches psychology at New York University, combine brain scans with a behavioral test known as the Implicit Association Test, or I.A.T., as well as physiological tests of the startle reflex. The I.A.T. flashes pictures of black and white faces at you and asks you to associate various adjectives with the faces. Repeated tests have shown that white subjects take longer to respond when they’re asked to associate black faces with positive adjectives and white faces with negative adjectives than vice versa, and this is said to be an implicit measure of unconscious racism. Phelps and her colleagues added neurological evidence to this insight by scanning the brains and testing the startle reflexes of white undergraduates at Yale before they took the I.A.T. She found that the subjects who showed the most unconscious bias on the I.A.T. also had the highest activation in their amygdalas — a center of threat perception — when unfamiliar black faces were flashed at them in the scanner. By contrast, when subjects were shown pictures of familiar black and white figures — like Denzel Washington, Martin Luther King Jr. and Conan O’Brien — there was no jump in amygdala activity.
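The reaction-time logic behind the I.A.T. is simple enough to sketch in code. The toy Python script below is only an illustration: the latencies are invented, the variable names are hypothetical, and real I.A.T. scoring (for example, the D-score procedure) involves additional steps, such as error penalties and trial filtering, that are omitted here.

from statistics import mean, stdev

# Hypothetical response times in milliseconds for one subject.
# "Compatible" blocks pair white faces with positive words and black faces
# with negative words; "incompatible" blocks reverse the pairings.
compatible = [612, 580, 655, 598, 630, 571, 644, 607]
incompatible = [731, 702, 768, 690, 745, 713, 758, 699]

# A subject who is systematically slower in the incompatible blocks is said
# to show an implicit association. One simple index divides the difference
# in mean latency by the pooled standard deviation of all trials.
difference = mean(incompatible) - mean(compatible)
pooled_sd = stdev(compatible + incompatible)
score = difference / pooled_sd

print(f"mean compatible latency: {mean(compatible):.0f} ms")
print(f"mean incompatible latency: {mean(incompatible):.0f} ms")
print(f"difference: {difference:.0f} ms, standardized score: {score:.2f}")

In this invented example the subject is roughly 110 milliseconds slower in the incompatible blocks, the sort of gap the test treats as evidence of an implicit association.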

The legal implications of the new experiments involving bias and neuroscience are hotly disputed. Mahzarin R. Banaji, a psychology professor at Harvard who helped to pioneer the I.A.T., has argued that there may be a big gap between the concept of intentional bias embedded in law and the reality of unconscious racism revealed by science. When the gap is “substantial,” she and the U.C.L.A. law professor Jerry Kang have argued, “the law should be changed to comport with science” — relaxing, for example, the current focus on intentional discrimination and trying to root out unconscious bias in the workplace with “structural interventions,” which critics say may be tantamount to racial quotas. One legal scholar has cited Phelps’s work to argue for the elimination of peremptory challenges to prospective jurors — if most whites are unconsciously racist, the argument goes, then any decision to strike a black juror must be infected with racism. Much to her displeasure, Phelps’s work has been cited by a journalist to suggest that a white cop who accidentally shot a black teenager on a Brooklyn rooftop in 2004 must have been responding to a hard-wired fear of unfamiliar black faces — a version of the amygdala made me do it.

Phelps herself says it’s “crazy” to link her work to cops who shoot on the job and insists that it is too early to use her research in the courtroom. “Part of my discomfort is that we haven’t linked what we see in the amygdala or any other region of the brain with an activity outside the magnet that we would call racism,” she told me. “We have no evidence whatsoever that activity in the brain is more predictive of things we care about in the courtroom than the behaviors themselves that we correlate with brain function.” In other words, just because you have a biased reaction to a photograph doesn’t mean you’ll act on those biases in the workplace. Phelps is also concerned that jurors might be unduly influenced by attention-grabbing pictures of brain scans. “Frank Keil, a psychologist at Yale, has done research suggesting that when you have a picture of a mechanism, you have a tendency to overestimate how much you understand the mechanism,” she told me. Defense lawyers confirm this phenomenon. “Here was this nice color image we could enlarge, that the medical expert could point to,” Christopher Plourd, a San Diego criminal defense lawyer, told The Los Angeles Times in the early 1990s. “It documented that this guy had a rotten spot in his brain. The jury glommed onto that.”

Other scholars are even sharper critics of efforts to use scientific experiments about unconscious bias to transform the law. “I regard that as an extraordinary claim that you could screen potential jurors or judges for bias; it’s mind-boggling,” I was told by Philip Tetlock, a professor at the Haas School of Business at the University of California at Berkeley. Tetlock has argued that split-second associations between images of African-Americans and negative adjectives may reflect “simple awareness of the social reality” that “some groups are more disadvantaged than others.” He has also written that, according to psychologists, “there is virtually no published research showing a systematic link between racist attitudes, overt or subconscious, and real-world discrimination.” (A few studies show, Tetlock acknowledges, that openly biased white people sometimes sit closer to whites than blacks in experiments that simulate job hiring and promotion.) “A light bulb going off in your brain means nothing unless it’s correlated with a particular output, and the brain-scan stuff, heaven help us, we have barely linked that with anything,” agrees Tetlock’s co-author, Amy Wax of the University of Pennsylvania Law School. “The claim that homeless people light up your amygdala more and your frontal cortex less and we can infer that you will systematically dehumanize homeless people — that’s piffle.”

V. Are You Responsible for What You Might Do?

The attempt to link unconscious bias to actual acts of discrimination may be dubious. But are there other ways to look inside the brain and make predictions about an individual’s future behavior? And if so, should those discoveries be employed to make us safer? Efforts to use science to predict criminal behavior have a disreputable history. In the 19th century, the Italian criminologist Cesare Lombroso championed a theory of “biological criminality,” which held that criminals could be identified by physical characteristics, like large jaws or bushy eyebrows. Nevertheless, neuroscientists are trying to find the factors in the brain associated with violence. PET scans of convicted murderers were first studied in the late 1980s by Adrian Raine, a professor of psychology at the University of Southern California; he found that their prefrontal cortexes, areas associated with inhibition, had reduced glucose metabolism and suggested that this might be responsible for their violent behavior. In a later study, Raine found that subjects who received a diagnosis of antisocial personality disorder, which correlates with violent behavior, had 11 percent less gray matter in their prefrontal cortexes than control groups of healthy subjects and substance abusers. His current research uses f.M.R.I.’s to study moral decision-making in psychopaths.

Neuroscience, it seems, points two ways: it can absolve individuals of responsibility for acts they’ve committed, but it can also place individuals in jeopardy for acts they haven’t committed — but might someday. “This opens up a Pandora’s box in civilized society that I’m willing to fight against,” says Helen S. Mayberg, a professor of psychiatry, behavioral sciences and neurology at Emory University School of Medicine, who has testified against the admission of neuroscience evidence in criminal trials. “If you believe at the time of trial that the picture informs us about what they were like at the time of the crime, then the picture moves forward. You need to be prepared for: ‘This spot is a sign of future dangerousness,’ when someone is up for parole. They have a scan, the spot is there, so they don’t get out. It’s carved in your brain.”

Other scholars see little wrong with using brain scans to predict violent tendencies and sexual predilections — as long as the scans are used within limits. “It’s not necessarily the case that if predictions work, you would say take that guy off the street and throw away the key,” says Hank Greely, the Stanford law professor. “You could require counseling, surveillance, G.P.S. transmitters or warning the neighbors. None of these are necessarily benign, but they beat the heck out of preventative detention.” Greely has little doubt that predictive technologies will be enlisted in the war on terror — perhaps in radical ways. “Even with today’s knowledge, I think we can tell whether someone has a strong emotional reaction to seeing things, and I can certainly imagine a friend-versus-foe scanner. If you put everyone who reacts badly to an American flag in a concentration camp or Guantánamo, that would be bad, but in an occupation situation, to mark someone down for further surveillance, that might be appropriate.”

Paul Root Wolpe, who teaches social psychiatry and psychiatric ethics at the University of Pennsylvania School of Medicine, says he anticipates that neuroscience predictions will move beyond the courtroom and will be used to make predictions about citizens in all walks of life.

“Will we use brain imaging to track kids in school because we’ve discovered that certain brain function or morphology suggests aptitude?” he asks. “I work for NASA, and imagine how helpful it might be for NASA if it could scan your brain to discover whether you have a good enough spatial sense to be a pilot.” Wolpe says that brain imaging might eventually be used to decide if someone is a worthy foster or adoptive parent — a history of major depression and cocaine abuse can leave telltale signs on the brain, for example, and future studies might find parts of the brain that correspond to nurturing and caring.

The idea of holding people accountable for their predispositions rather than their actions poses a challenge to one of the central principles of Anglo-American jurisprudence: namely, that people are responsible for their behavior, not their proclivities — for what they do, not what they think. “We’re going to have to make a decision about the skull as a privacy domain,” Wolpe says. Indeed, Wolpe serves on the board of an organization called the Center for Cognitive Liberty and Ethics, a group of neuroscientists, legal scholars and privacy advocates “dedicated to protecting and advancing freedom of thought in the modern world of accelerating neurotechnologies.”

There may be similar “cognitive liberty” battles over efforts to repair or enhance broken brains. A remarkable technique called transcranial magnetic stimulation, for example, has been used to stimulate or inhibit specific regions of the brain. It can temporarily alter how we think and feel. Using T.M.S., Ernst Fehr and Daria Knoch of the University of Zurich temporarily disrupted each side of the dorsolateral prefrontal cortex in test subjects. They asked their subjects to participate in an experiment that economists call the ultimatum game. One person is given $20 and told to divide it with a partner. If the partner rejects the proposed amount as too low, neither person gets any money. Subjects whose prefrontal cortexes were functioning properly tended to reject offers of $4 or less: they would rather get no money than accept an offer that struck them as insulting and unfair. But subjects whose right prefrontal cortexes were suppressed by T.M.S. tended to accept the $4 offer. Although the offer still struck them as insulting, they were able to suppress their indignation and to pursue the selfishly rational conclusion that a low offer is better than nothing.
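To make the payoffs concrete, here is a minimal Python sketch of the responder’s side of the ultimatum game as described above. The fairness_threshold parameter is a hypothetical stand-in for the indignation that the Zurich experiment manipulated with T.M.S.; it is not a quantity the researchers measured, and the numbers simply mirror the $20 pot and $4 offers in the article.

def responder_decision(offer, pot=20, fairness_threshold=5):
    """Accept the offer unless it falls below the responder's fairness threshold.
    Returns (responder payoff, proposer payoff); a rejection leaves both with nothing."""
    if offer >= fairness_threshold:
        return offer, pot - offer
    return 0, 0

# With the right prefrontal cortex functioning normally, offers of $4 or less are rejected:
print(responder_decision(4, fairness_threshold=5))   # -> (0, 0)

# With that region suppressed, subjects tended to accept the same low offer,
# modeled here (purely illustratively) as a lower threshold:
print(responder_decision(4, fairness_threshold=1))   # -> (4, 16)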

Some neuroscientists believe that T.M.S. may be used in the future to enforce a vision of therapeutic justice, based on the idea that defective brains can be cured. “Maybe somewhere down the line, a badly damaged brain would be viewed as something that can heal, like a broken leg that needs to be repaired,” the neurobiologist Robert Sapolsky says, although he acknowledges that defining what counts as a normal brain is politically and scientifically fraught. Indeed, efforts to identify normal and abnormal brains have been responsible for some of the darkest movements in the history of science and technology, from phrenology to eugenics. “How far are we willing to go to use neurotechnology to change people’s brains we consider disordered?” Wolpe asks. “We might find a part of the brain that seems to be malfunctioning, like a discrete part of the brain operative in violent or sexually predatory behavior, and then turn off or inhibit that behavior using transcranial magnetic stimulation.” Even behaviors in the normal range might be fine-tuned by T.M.S.: jurors, for example, could be made more emotional or more deliberative with magnetic interventions. Mark George, an adviser to the Cephos company and also director of the Medical University of South Carolina Center for Advanced Imaging Research, has submitted a patent application for a T.M.S. procedure that supposedly suppresses the area of the brain involved in lying and makes a person less capable of not telling the truth.

As the new technologies proliferate, even the neurolaw experts themselves have only begun to think about the questions that lie ahead. Can the police get a search warrant for someone’s brain? Should the Fourth Amendment protect our minds in the same way that it protects our houses? Can courts order tests of suspects’ memories to determine whether they are gang members or police informers, or would this violate the Fifth Amendment’s ban on compulsory self-incrimination? Would punishing people for their thoughts rather than for their actions violate the Eighth Amendment’s ban on cruel and unusual punishment? However astonishing our machines may become, they cannot tell us how to answer these perplexing questions. We must instead look to our own powers of reasoning and intuition, relatively primitive as they may be. As Stephen Morse puts it, neuroscience itself can never identify the mysterious point at which people should be excused from responsibility for their actions because they are not able, in some sense, to control themselves. That question, he suggests, is “moral and ultimately legal,” and it must be answered not in laboratories but in courtrooms and legislatures. In other words, we must answer it ourselves.

Jeffrey Rosen, a frequent contributor, is the author most recently of “The Supreme Court: The Personalities and Rivalries That Defined America.”
