
Here are nine psychological phenomena that pass the replication test

The field of psychology has been experiencing a “replication crisis” in recent years.

The problem was highlighted in a report that appeared in the journal Science in 2015. For that report, an international team of 270 researchers attempted to replicate the findings from 100 studies published in three leading psychology journals in 2008.

They succeeded with fewer than half of the studies.

That result doesn’t necessarily mean that the original studies’ findings were wrong, or that psychological research is pointless, but it does underscore why we need to retain skepticism when such studies (or any studies, for that matter) are published — including ones that appear to be well designed.

We need to wait for the studies to be replicated.

As I’ve noted here before, some of psychology’s most famous findings, ones that are widely accepted by the general public, have failed such tests.  These include the idea that adopting a superhero-like pose will make you feel more powerful, that smiling will make you happier and that washing your hands can help wash away feelings of guilt.

Passing the test

But some major psychological phenomena have been successfully replicated, including in circumstances in which replication is notoriously tricky — as when the participants have already been tested on the same effect.

Indeed, Dutch researchers recently replicated nine of cognitive psychology’s most important findings — findings that “present good news for the field of psychology,” they write.

In an article for BPS Research Digest (published by the British Psychological Society), psychologist and journalist Christian Jarrett summarizes how these researchers went about their replication task. “[They] tested hundreds of participants on Amazon’s Mechanical Turk survey website,” he writes (with British spellings). “Whichever cognitive effect they were tested on, each participant completed the test twice (either with the exact same stimuli or new versions), to see whether it made any difference to their behaviour or responses if they already had experience of the experiment.”

“In fact, all the effects in question were replicated on all occasions, whether on the first or second testing, and regardless of whether the specific stimuli — such as the words or pictures involved — were familiar or completely new,” he adds. 

Some examples

Here are Jarrett’s descriptions of three of the findings successfully replicated in the study, along with a brief explanation of their importance:

  • False Memories: Participants were shown sequences of words of related meaning. Tested on their memory of the words later, participants were more likely to mistakenly say that a new word of similar meaning had been present in the earlier sequence than a new word with a meaning unrelated to the earlier list. This is a basic demonstration of the fallibility of memory and how easy it is to feel like we’ve experienced something before when we haven’t.
  • The Flanker Task: Participants had to press the correct keyboard key as fast as possible to indicate whether a target stimulus was a vowel or consonant. Participants were faster to respond if the target was surrounded by distracting letters associated with the same response (e.g. a target vowel surrounded by irrelevant, distracting vowels), as opposed to being surrounded by distractors associated with a different response (e.g. a vowel surrounded by consonants). The task shows how we can’t help but process irrelevant information to a certain degree.
  • Motor Priming: Participants had to press the appropriate keyboard key as fast as possible in response to left- or right-facing arrows flashed on-screen. Preceding arrows (known as a prime) gave advance warning of which way the target arrows would point: sometimes these primes were accurate, which led to faster performance, as you’d expect; if the primes pointed the wrong way, they slowed performance. Crucially, some of the primes were “masked” to make them subliminal (i.e. not consciously visible), in which case the effects were reversed, with primes pointing the wrong way leading to faster responses. The finding shows how information that’s not consciously perceived can affect our behaviour, and that it can have an opposite effect when subliminal than when consciously perceived.

The other successfully replicated phenomena were the Simon effect, the spacing effect, the serial position effect, associative priming, repetition priming and shape simulation.

FMI: You can read Jarrett’s descriptions of all nine phenomena on the BPS Research Digest website. The Dutch study is currently available as a preprint at PsyArXiv, where it can be read in full.


Comments (4)

  1. Submitted by Ron Gotzman on 06/09/2017 - 09:30 am.

    “settled science”

    Is a skeptic equal to a denier?

    • Submitted by Paul Brandon on 06/09/2017 - 02:00 pm.

      I’m not sure what you mean by

      They denote different types of conclusion.
      A skeptic withholds final judgement until sufficient evidence has been provided.
      A ‘denier’ is someone who denies the existence of some phenomenon -despite- the evidence.

  2. Submitted by Paul Brandon on 06/09/2017 - 02:05 pm.

    Statistical analysis

    Conventional statistical analysis is dichotomous — observations are assigned to one of two populations (significant or not significant) based on the result of a specified mathematical calculation. This is affected both by the size of the difference itself and by the size of the sample analyzed (the larger the sample, the smaller the difference needed for significance).
    Most of the studies in question had results that were just over the ‘significance’ line; the replications were just under it. This does not prove that the original studies were wrong in the sense of drawing an opposite conclusion, just that a reanalysis fell just short of making the case for a significant effect. It indicates that further and more rigorous study is needed.

  3. Submitted by Paul Udstrand on 06/14/2017 - 09:11 am.


    Unfortunately, there has always been a lot of garbage science in the field of psychology. Part of the problem is the subject matter itself: unlike many other phenomena, thoughts and emotions can’t be directly observed; they can only be inferred from behavioral responses. Psychology isn’t impossible to study, but it’s more difficult in many ways to design reliable studies, and many researchers fail when they try.

    In a lot of ways psychology has always struggled to establish scientific legitimacy. Note that in the last decade or so the study of “psychology” has largely been replaced by the field of “Neuro-Psychology”, as if the “neuro” makes it more scientific or legitimate in some way.

    Part of what might be happening is that decent scientists simply recognize the junk nature of many studies and don’t bother to even try to replicate them. Other times, as in the case of the studies described here, researchers will take on low-hanging fruit, as it were, simply because it builds their portfolio without requiring huge grant applications. All of these phenomena were clearly documented back in the ’90s, and all of these studies are cheap and easy to replicate. These are all “studies” that look like contemporary versions of the coursework I did in undergraduate psych labs at the U. back in the ’80s.

    The other problem is science reporting in the media. Replication is simply not covered or required. Science reporters simply scan journal announcements and secondary articles for interesting “results.” Those “results” are then reported with no methodological critique or examination. How many times have you seen a single study reported as if a breakthrough of some kind has been achieved? This way, a lot of junk science gets covered simply because it’s “flashy” or interesting for consumers. I’m always seeing “studies” with clear methodological problems being “reported” with no regard for study design — just a headline that will attract eyeballs, whether or not the results are valid.

Leave a Reply