Questionable research practices “are worryingly widespread among U.S. psychologists.”
CC/Flickr/krisnelson

The integrity of psychological research (like medical research) has come increasingly under fire.

Earlier this year, a scandal erupted over the work of a prominent and extensively published psychologist, Diederik Stapel, most recently of Tilburg University in the Netherlands. He was found to have committed widespread academic fraud, which called into question his well-publicized findings on a variety of psychological topics, including racial stereotyping, advertising, and identity.

Just this year he received a lot of press for findings claiming that a messy environment led white people to discriminate more against black people and that a diet high in red meat made people more selfish. (In the wake of the fraud charges against Stapel, Science magazine issued “an editorial expression of concern” last month regarding the racial stereotyping article.)

Stapel’s fraud apparently goes back at least a decade. How was he able to get away with it for so long? After all, his research was published in some of the world’s leading journals. An interim report on the scandal by Tilburg University officials offers a not-too-pleasant explanation of how he did it. Here’s a summary of the report’s findings (the original appears to have been published only in Dutch) from a news item posted earlier this week on the British Psychological Society’s (BPS) website:

According to the [Tilburg University report], Stapel’s “cunning, simple system” at Tilburg and earlier at Groningen University was to form intense one-on-one relationships with students and other researchers, to discuss hypotheses and methodologies with them at length, to prepare together the necessary materials, but to do all the apparent research collection himself at local schools. In many instances, the research never took place and the data was entirely fabricated. Other times it was massaged. Only then was it passed to students or colleagues for inspection, analysis and write-up. “This conduct is deplorable,” the report says. …

Central to the longevity of Stapel’s fraud was that he was able to keep his fabricated raw data from so many people for many years without raising undue alarm. The report suggests this was possible because of “a lamentable … culture in social psychology and psychology research for everyone to keep their own data and not make them available to a public archive.”

As the BPS article also points out, these concerns about psychology research were being raised long before the Stapel scandal broke.

[A] 2006 paper by Jelte Wicherts and colleagues in American Psychologist found that just 27 per cent of psychology study authors they contacted were willing to share their data for re-analysis. … In another paper published this November, Wicherts and his team found that psychologists were less likely to share their data if the likelihood of errors being found was high or the strength of evidence was weak.

Worryingly widespread
Now, another study, currently in press in the journal Psychological Science, has found evidence that questionable research practices “are worryingly widespread among U.S. psychologists,” reports BPS’ Christian Jarrett.

The new study, led by Leslie John, an assistant professor of business administration at Harvard University, surveyed 6,000 U.S. academic psychologists about various research practices. To ensure that the answers were truthful, the survey was anonymous. (It also incorporated an incentive that encouraged honesty.)

Here’s Jarrett’s description of the findings:

Averaging across the psychologists’ reports of their own and others’ behavior, the alarming results suggest that one in ten psychologists has falsified research data, while the majority has: selectively reported studies that “worked” (67 per cent), not reported all dependent measures (74 per cent), continued collecting data to reach a significant result (71 per cent), reported unexpected findings as expected (54 per cent), and excluded data post-hoc (58 per cent).

Participants who admitted to more questionable practices tended to claim that they were more defensible. Thirty-five per cent of respondents said they had doubts about the integrity of their own research. Breaking the results down by sub-discipline, relatively higher rates of questionable practice were found among cognitive, neuroscience and social psychologists, with fewer transgressions among clinical psychologists.

As Jarrett also notes, these findings may explain the decline effect in psychological (and medical) research — the tendency of a particular effect to wane upon subsequent investigation.

Needless to say, the study’s findings offer a sobering assessment of today’s research practices.

“[Questionable research practices] … threaten research integrity and produce unrealistically elegant results that may be difficult to match without engaging in such practices oneself,” John and her co-authors conclude in their study. “This can lead to a ‘race to the bottom,’ with questionable research begetting even more questionable research.”

It also leaves the public wondering which research to take seriously.

Comments

  1. This calls to mind research I read about indicating that young kids who were anxious and indecisive grew up to be conservatives while those who were self-reliant and energetic grew up to be liberals. (Findings that, I admit, confirmed my biases.)

    When I searched for more info on this research, I found critiques of it that indicated the researcher would not share his dataset and similar practices. I’m not fully remembering the details but it was enough to shake any weight I might have given to the research.

  2. It’s amazing to me that ANY “no-peeky” research (researcher fails to make full data and methodology available) – 73% of cases in the 2006 study cited – is published AT ALL!!

    To say that such practices “threaten research integrity” is quite the understatement of the problem.

    Simply calling “research” like this CRAP would be more to the point.

  3. I suspect research is falsified because researchers lack adequate models of the human psyche upon which to base their research. Even if they find something, they have no rational basis by which to judge its meaning nor generalize its value to the general population.

    I also can’t help but wonder to what extent these questionable research practices are used in the evaluation of the latest (and, of course, most expensive) psychotherapeutic wonder drugs.

    I suspect for those well qualified to prescribe such drugs and who go to the trouble of evaluating their effectiveness in actual patients, the results they see in their patients often bear precious little connection to those reported by research studies involving those drugs.

    Of course the pharmaceutical companies have sidestepped that whole issue by having huge quantities of their psychotherapeutic drugs prescribed by GPs with precious little knowledge of other modes of therapy and precious little follow up (if that particular GP even possesses sufficient knowledge to be able to DO reasonably adequate follow up).

    Since the entire psychological community lacks even the most rudimentary working model of how the human psyche actually functions,…

    let alone a model for how the systems buried within each of us, some of them very primitive and ancient and still attuned to circumstances that have long since ceased to exist for most of us,…

    routinely misfire and create maladapted, dysfunctional patterns of thinking and behavior that are largely invisible to and as incomprehensible to those who exhibit them as they are to those around them (and often quite self destructive).

    Lacking an appreciation for the way the psyche actually works, most “psychotherapy” is little better than searching through your ad hoc bag of shot-in-the-dark tricks hoping to find one that might have some positive effect…

    (while failing to understand why it does or does not work, nor why something that works for one patient fails miserably for another, seemingly similar patient).

    But hey! A LOT of money is being made so I guess nobody should complain.
