
New questions about the integrity of psychological research

The integrity of psychological research (like medical research) has come increasingly under fire.

Earlier this year, a scandal erupted about the work of a prominent and extensively published psychologist, Diederik Stapel, most recently of Tilburg University in the Netherlands. He was found to have committed widespread academic fraud, which called into question his well-publicized findings on a variety of psychological topics, including racial stereotyping and advertising and identity.

Just this year he received a lot of press for his findings that claimed a messy environment led white people to discriminate more against black people and a diet high in red meat made people more selfish. (In the wake of the fraud charges against Stapel, Science magazine issued “an editorial expression of concern” last month regarding the racial stereotyping article.)

Stapel’s fraud apparently goes back at least a decade. How was he able to get away with it for so long? After all, his research was published in some of the world’s leading journals. An interim report on the scandal by Tilburg University officials offers a not-too-pleasant explanation of how he did it. Here’s a summary of the report’s findings (the original appears to have been published only in Dutch) from a news item posted earlier this week on the British Psychological Society’s (BPS) website:


According to the [Tilburg University report], Stapel’s “cunning, simple system” at Tilburg and earlier at Groningen University was to form intense one-on-one relationships with students and other researchers, to discuss hypotheses and methodologies with them at length, to prepare together the necessary materials, but to do all the apparent research collection himself at local schools. In many instances, the research never took place and the data was entirely fabricated. Other times it was massaged. Only then was it passed to students or colleagues for inspection, analysis and write-up. “This conduct is deplorable,” the report says. …
Central to the longevity of Stapel’s fraud was that he was able to keep his fabricated raw data from so many people for many years without raising undue alarm. The report suggests this was possible because of “a lamentable … culture in social psychology and psychology research for everyone to keep their own data and not make them available to a public archive.”

As the BPS article also points out, these concerns about psychology research were being raised long before the Stapel scandal broke.

[A] 2006 paper by Jelte Wicherts and colleagues in American Psychologist found that just 27 per cent of psychology study authors they contacted were willing to share their data for re-analysis. … In another paper published this November, Wicherts and her team found that psychologists were less likely to share their data if the likelihood of errors being found was high or the strength of evidence was weak.

Worryingly widespread
Now, another study, currently in press in the journal Psychological Science, has found evidence that questionable research practices “are worryingly widespread among U.S. psychologists,” reports BPS’ Christian Jarrett.

The new study, led by Leslie John, an assistant professor of business administration at Harvard University, surveyed 6,000 U.S. academic psychologists about various research practices. To encourage truthful answers, the survey was anonymous. (It also incorporated an incentive that rewards honesty.)

Here’s Jarrett’s description of the findings:

Averaging across the psychologists’ reports of their own and others’ behavior, the alarming results suggest that one in ten psychologists has falsified research data, while the majority has: selectively reported studies that “worked” (67 per cent), not reported all dependent measures (74 per cent), continued collecting data to reach a significant result (71 per cent), reported unexpected findings as expected (54 per cent), and excluded data post-hoc (58 per cent).
Participants who admitted to more questionable practices tended to claim that they were more defensible. Thirty-five per cent of respondents said they had doubts about the integrity of their own research. Breaking the results down by sub-discipline, relatively higher rates of questionable practice were found among cognitive, neuroscience and social psychologists, with fewer transgressions among clinical psychologists.

As Jarrett also notes, these findings may explain the decline effect in psychological (and medical) research — the tendency of a particular effect to wane upon subsequent investigation.

Needless to say, the study’s findings offer a sobering assessment of today’s research practices.

“[Questionable research practices] … threaten research integrity and produce unrealistically elegant results that may be difficult to match without engaging in such practices oneself,” John and her Harvard colleagues conclude in their study. “This can lead to a ‘race to the bottom,’ with questionable research begetting even more questionable research.”

It also leaves the public wondering which research to take seriously.