At first glance, it seems ridiculous to ask whether neurological evidence has a place in our legal system. How could evidence from technologies that seek to advance scientific objectivity be denied a place at the table of reason? Unfortunately, the answer cannot be reduced to a simple yes or no, for there are multiple ways in which neuroscience can supplement courtroom discussion. Three questions will be asked of neurotechnological evidence: is the evidence objective, is it subject to bias from judges and jurors alike, and does its implementation violate a suspect’s individual freedom?
Despite their flaws, I remain grateful to the Founding Fathers for ultimately producing the Bill of Rights, which grants rights to the accused. However, it is to be expected that a document nearing 230 years of age has struggled to cope with an increasingly automated world. Legal scholars on opposing sides of the political spectrum have quarreled over how to interpret the Constitution in relation to technological advancement. Moreover, one can appreciate that discussions involving technologies which threaten the emergence of an Orwellian dystopia are seldom brief. Unfortunately, scientists are usually confronted with a dilemma in which novel technologies are subject to misappropriation. Of course, who can blame scientists for wanting to modernize their research? That is their job. But the repurposing of neurotechnologies that we are witnessing highlights the fact that the social implications of these advancements have not been appropriately considered.
It should be no surprise that the FBI and CIA collaborated with researchers to develop a reliable equivalent to the polygraph. One neuroscientist, Dr. Larry Farwell, created a technology that adapts electroencephalography (EEG) data to confirm or reject an alibi. In fact, he has claimed an accuracy of 99.9% for his “brain fingerprinting” technology. We should take that number with a grain of salt, though, since independent researchers are unable to verify it: his technology is both patented and private (Pallarés-Dominguez, 2015).
There are additional caveats to Dr. Farwell’s program. The jury is still out on whether the science behind the technology—detecting a brain wave signal consistent with memory retrieval after the presentation of a visual stimulus—is sufficient to incriminate a suspect, given that memory distortions and imperfections due to drugs, stress, and time are common (Lacy, 2013). Furthermore, the images presented to the suspect must be sensitive enough that they are not only specific to the crime but unique to the suspect’s presumed experience. For instance, the suspect cannot be shown images used in media coverage of the crime (Stoller, 2007). Nevertheless, these cautionary tales have not prevented the use of this technology in the courtroom. In two separate US court cases, “brain fingerprints” were the final pieces of evidence used in exonerating and implicating a suspect, respectively.
Another type of evidence that has debuted in court is the structural MRI scan, which yields high-resolution images of the brain. In use for decades in both research and hospital settings, the technique produces highly reliable data. So, how could the application of an accepted technology be met with criticism? The answer lies in the potential socioscientific biases of judges and jurors, as well as the partiality of expert witnesses.
Because expert witnesses are called upon by either the plaintiff’s or the defendant’s legal team, the statements they impart are vetted and, in effect, biased. Even though the opposing team can cross-examine the witness or call upon its own expert, this often results in the evidence being disregarded entirely. Thus, it is easy to see how different interpretations of an MRI scan can lead to its dismissal, a phenomenon which has already been observed in US courts (Pallarés-Dominguez, 2015). A potential solution to this problem has been demonstrated in Germany, where expert witnesses are neutral and are called upon by the judge if the evidence presented could confuse the jury (Müller, 2014).
This isn’t a perfect solution, however, since we’ve hinted at the existence of socioscientific biases. As one paper points out, sentence length was differentially affected across gender in scenarios where psychiatric and/or neurobiological evidence was shown: male judges demonstrated a disregard for psychiatric evidence, while female judges were indifferent to neurobiological evidence. It would be problematic indeed for a suspect to have significant psychiatric evidence at his disposal coupled with the misfortune of a male judge. The author’s conclusion, with which I agree, is that the observed “genderization of neuroscience” is determined by sociocultural norms (Holtzman, 2016). Similarly, gender-based juror biases have been proposed. To test this, one study subjected participants to mock trials and supplied psychiatric evidence and/or neurological explanations, with or without expert explanations. The severity of the sentences participants handed down to their suspects did not significantly vary across groups (Klaming, 2011). Given the lack of consensus, more research is required.
Given the political climate, the American people are increasingly skeptical of governmental power. As a result, there has been growing discussion of the constitutional implications of neurotechnologies such as Dr. Farwell’s “brain fingerprinting”. One concern involves a potential violation of the Fifth Amendment, which ensures freedom from self-incrimination. The crux of the question lies in how the Fifth Amendment is interpreted: is it meant to protect mental privacy, or to prevent coerced testimony? How this question is resolved largely determines whether neurotechnologies can be used to obtain testimony without violating the freedom from self-incrimination.
If the amendment concerns itself with preventing coercion, one needs to ask whether a “brain fingerprint” constitutes an act of communication. If we believe that it doesn’t, then using this technology is no different from obtaining a blood sample, since physical evidence (e.g., a neural event) is not privileged by the self-incrimination clause. This scenario therefore permits the coexistence of neurotechnology with the U.S. federal court system (Fox, 2009). If, on the other hand, we believe that the amendment protects mental privacy, then the testimony provided by the “brain fingerprint” would be, as one Supreme Court justice suggested, the “contents of one’s mind”, and obtaining it from a suspect against his will would be coerced testimony. By defining the self as a mental state that emerges from some physiological or neural event, some scholars have interpreted “brain fingerprinting” as a violation of the Fifth and Fourth Amendments (Stoller, 2007; Kilbride, 2015).
Until the discussion regarding the role of neurotechnology in legal practice is formally resolved, such evidence should be restricted to a supplemental role (e.g., MRI scans). Over time, these technologies will likely become more efficient, reliable, and accessible to law enforcement. Before then, the constitutional questions must be settled, lest these technologies also find themselves presented in a court of law.