By: Adam Weseloh
The Research
According to the Open Science Collaboration (2015), research results should not gain notoriety based on the status or authority of their originator, but on the replicability of the findings. They argued that replicability is key to establishing the credibility of a research finding. To test this, they conducted replications of 100 studies with both correlational and experimental designs. Of the original studies, 97 percent had reported significant results, yet the researchers were able to replicate only 36 percent of the original results, even though they report having used the same materials the original authors provided. Overall, the replications also produced weaker effects than the original studies.
In a rebuttal to the Open Science Collaboration (2015), Gilbert et al. (2016) argue that the replication project made three statistical errors. First, the Open Science researchers did not address the issue of error: Gilbert et al. note that sampling error alone can contribute 5 percent false positives, which is problematic because the replication teams used different samples than the originals. Gilbert et al. (2016) cite an example from the Open Science project in which the original authors measured Americans' attitudes toward African Americans, while the replication was conducted with Italians; because different stereotypes are associated with each group, the change in sample could have altered the result. Second, Gilbert et al. address the issue of power: whereas the Open Science Collaboration attempted each replication only once, other projects that replicated studies multiple times saw higher rates of successful replication. Third, the authors tackle the issue of potential bias. The Open Science researchers asked the original authors whether they endorsed the replication methodology for their study, and only 69 percent did so (Gilbert et al., 2016). Replication rates for studies whose methodology was endorsed were significantly higher than for those whose methodology was not, which Gilbert et al. (2016) conclude was likely the result of bias.
Conclusion
Nosek and Lakens (2014) proposed that direct replication can help establish the generalizability of effects, while also noting that there is no such thing as a truly direct replication. I tend to agree with the authors: it would be nearly, if not entirely, impossible to test the same sample while also administering the experiment in exactly the same manner. The focus of replication should therefore be on reproducing the results and extending the previous study. While there are likely several instances in which the results of studies have been disproved through a series of replications, the Open Science Collaboration (2015) likely got it wrong in concluding that only 36 percent of the studies could be replicated.
References
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251).
- Gilbert, D. T., King, G., Pettigrew, S., & Wilson, T. D. (2016). Comment on "Estimating the reproducibility of psychological science". Science. Retrieved from www.science.sciencemag.org
- Lynch, J. G., Bradlow, E. T., Huber, J. C., & Lehmann, D. R. (2015). Reflections on the replication corner: In praise of conceptual replications. International Journal of Research in Marketing, 333-342.
- Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results. Social Psychology, 45(3), 137-141.