Psychology’s Replication Crisis

Is psychology experiencing a silent crisis?

Experts have been debating this very question for years, but a 2015 study offered striking evidence: researchers were unable to replicate more than half of the studies they attempted to reproduce.

Media coverage framed the study as evidence of a credibility crisis in psychology, but the researchers told the American Psychological Association that reproducibility is a concern across all branches of science, and that they chose psychology simply because they are psychologists.

A Study of the “Replication Crisis”

The study, which was released by the Open Science Collaboration in November 2015, found that just 39 out of 100 experimental and correlational psychology studies showed the same results as the original study when repeated.

“Scientific claims should not gain credence because of the status or authority of their originator but by the replicability of their supporting evidence,” the study stated. “Even research of exemplary quality may have irreproducible empirical findings because of random or systematic error.”

The findings led the researchers to conclude that most of the replication attempts "produced weaker evidence for the original findings despite using materials provided by the original authors."

What Contributes to This?

One contributing factor the researchers noted is that novelty and innovation are encouraged in scientific studies, whereas a test of an already published idea might be considered unoriginal.

“Innovation points out paths that are possible; replication points out paths that are likely; progress relies on both,” the study stated.

Another factor is that some studies are published in journals with less rigorous review policies, which suggests the field-wide replication rate may be even lower than the study found. Journals are also far more likely to publish studies with positive results than negative ones, producing publication bias.

Additionally, there is evidence of questionable research practices and statistical techniques being used, like “p-hacking,” which make results look more significant than they really are. A Vox article gives an example of this: “Researchers can stop collecting data when their results reach statistical significance. That would be like flipping a coin, getting three heads in a row, and then concluding that coin flips always end on heads.”

This results in confirmation bias. Researchers aren't necessarily trying to be deceptive; it's simply human nature to stop once our efforts yield the result we want, rather than continuing to test for a different outcome.
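The coin-flip example from Vox can be made concrete with a small simulation (illustrative only, not from the article): if a researcher "peeks" at a significance test repeatedly and stops as soon as p < 0.05, the false-positive rate climbs well above the nominal 5%, even when there is no real effect.

```python
import random
import math

def z_pvalue(heads, n):
    # Two-sided p-value for H0 "the coin is fair", normal approximation.
    z = (heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(abs(z) / math.sqrt(2))

def run_experiment(max_flips, peek_every, rng):
    """Flip a fair coin; test for 'bias' after every `peek_every` flips.

    Returns True if the experiment ends with a (spurious) p < 0.05."""
    heads = 0
    for i in range(1, max_flips + 1):
        heads += rng.random() < 0.5
        # Optional stopping: peek mid-experiment and quit once "significant".
        if i >= 20 and i % peek_every == 0 and z_pvalue(heads, i) < 0.05:
            return True
    return z_pvalue(heads, max_flips) < 0.05

rng = random.Random(42)
trials = 2000
# Honest procedure: a single look at the pre-planned sample size.
single_look = sum(run_experiment(200, 10**9, rng) for _ in range(trials)) / trials
# p-hacked procedure: peek after every 10 flips, stop when p < 0.05.
peeking = sum(run_experiment(200, 10, rng) for _ in range(trials)) / trials
print(f"false-positive rate, one look: {single_look:.3f}")
print(f"false-positive rate, peeking:  {peeking:.3f}")
```

With one pre-planned look the false-positive rate stays near the nominal 5%, while repeated peeking inflates it severalfold, which is exactly why pre-registering a sample size and analysis plan matters.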

Should Psychologists Worry About Credibility?

Despite the uproar surrounding this study, its revelation is nothing new. More than 100 years ago, William James, author of The Principles of Psychology, wrote that he was afraid psychology would not escape its "confused and imperfect state." All areas of science have issues with replication; the problem is not limited to psychology.

An article for Scientific American makes the point that “psychology is arguably healthier than many other fields precisely because psychologists are energetically exposing its weaknesses and seeking ways to overcome them.”

This particular study wasn't aiming to disprove any individual findings, but rather to demonstrate the importance of replication and to explore why some studies can be replicated while others cannot. The study's conclusion noted that "humans desire certainty, and science infrequently provides it. As much as we might wish it to be otherwise, a single study almost never provides definitive resolution for or against an effect and its explanation."

How Can Psychologists Interpret These Results?

So, what are psychologists doing to overcome these weaknesses? The study's co-authors point out that a failure to replicate doesn't necessarily mean the original study was flawed; instead, researchers need to recognize that "science builds upon itself."

The study's outcomes point to a need for reform of research, review, and publication practices, and for greater transparency of data.

In a post for Oxford University Press, Brian Schiff, chair of the department of psychology at the American University of Paris, addresses the study and how psychologists should proceed with their own research, arguing that checks and balances such as greater visibility and pre-registered analysis plans should become standard parts of study procedure.

Schiff goes on, however, to say that he believes there is a deeper issue needing to be addressed.

“Even if psychologists manage to produce more replicable research, we will continue to misinterpret our data and to avoid the most fundamental and pressing problems of psychology,” Schiff said. “Indeed, psychology’s conceptual problem is much more profound and entrenched than the current controversy imagines.”

Psychology is treated as a science because it studies variables and how those variables relate to one another, Schiff claims. But, he says, variable-centered research is not the best method for examining how psychological processes actually operate.

“In order to get beyond the replication crisis, psychology needs a deep reflection on how the discipline operates to produce knowledge,” Schiff said. “Psychology needs to recognize that, even if they are replicable, variable-centered methods are ineffective tools for getting to those problems most fundamental to our understanding of psychology.”
