Poor psychological science is rarely challenged in courtrooms, study finds

The study found that 60 percent of the psychological tests and assessment tools used in courtrooms are not favorably rated by experts and thus scientifically suspect.

The quality of the psychological scientific evidence used to win over juries and judges in U.S. courtrooms varies widely and is often unreliable, according to a study published in the journal Psychological Science in the Public Interest.

Specifically, the study found that 60 percent of the psychological tests and assessment tools used in courtrooms are not favorably rated by experts and thus scientifically suspect.

Yet, legal challenges to psychological evidence occur in only one in 20 court cases, and such challenges succeed only about a third of the time, the study also found.

These findings are troubling. Psychological tests, tools and assessments are used in a wide range of legal cases, from judging parental fitness in custody disputes and weighing the validity of eyewitness accounts to deciding eligibility for disability benefits and determining whether criminal defendants are competent to stand trial.


“Most dramatically, intelligence tests have become all but dispositive in determining whether a person should be sentenced to death under the Supreme Court’s case law exempting people with intellectual disability from the death penalty,” the study’s authors write.

“One might think that, given the stakes involved, the validity of such tests would always be carefully examined,” they add. “That is not, however, always what happens.”

“Although some psychological assessments used in court have strong scientific validity, many do not,” says Tess M.S. Neal, the study’s lead author and a professor of social and behavioral sciences at Arizona State University, in a released statement. “Unfortunately, the courts do not appear to be calibrated to the strength of the psychological assessment evidence.”

A study in two parts

For the study, Neal and her colleagues examined 364 psychological assessment tools that psychologists serving as forensic experts have reported using in legal cases. They then analyzed those tools from both a scientific and a legal perspective.

The researchers found that although 90 percent (326) of the psychological tools have undergone scientific testing, only 67 percent had been reviewed in the field’s most prominent journals and manuals. And of those, only 40 percent had received generally favorable reviews.

Almost a quarter of the tools were determined by the reviewers to be unreliable.

Neal and her colleagues then looked closely at 372 state and federal court cases (from the years 2016-2018) in which 30 different psychological assessment tools were used as evidence. Although 60 percent of those tools lacked the support of the scientific community, their admissibility in court was challenged only 19 times.

That meant that in almost 95 percent of the cases, questionable scientific evidence went unchallenged.


And only a third of those challenges were successful.

“Attorneys rarely challenge psychological expert assessment evidence, and when they do, judges often fail to exercise the scrutiny required by law,” Neal and her colleagues point out.

Possible explanations 

Several reasons may explain why this situation occurs in courtrooms. One has to do with lack of expertise among lawyers. As the researchers note, “Lawyers are generally not trained in how to analyze the validity of a psychological tool. Rather, they are likely to defer to what experts tell them.”

The issue of legal precedent is another impediment. “If lawyers and experts have always used a particular tool without challenge, then a new challenge is not likely forthcoming,” the researchers explain.

Psychological testing is also big business, “and some test publishers — some of them million- and billion-dollar companies publicly traded on the stock exchange — look to maximize profit,” write Neal and her colleagues.

The publishers “sell thousands of psychological assessment tools, many of which are revised and republished with updated versions over time,” they add. “Many of these tools are sold for hundreds of dollars, and usually there are recurring per-use costs for items such as answer sheets, record forms, or online administration and scoring programs.”

Closer scrutiny needed

In their paper, Neal and her colleagues provide specific recommendations for how professionals and the public can make sure that psychological evidence presented in court is reliable.

“We suggest that before using a psychological test in a legal setting, psychologists ensure its psychometric and context-relevant validation studies have survived scientific peer review through an academic journal, ideally before publication in a manual,” said Neal in an interview with Rick Nauert, an editor for the online news site Psych Central.

“For lawyers and judges, the methods of psychologist expert witnesses can and should be scrutinized, and we give specific suggestions for how to do so,” she added.

FMI: The study can be read in full on the Psychological Science in the Public Interest website.