By: Ian McKay
Due to increasing cases of fraudulent scientific practices, efforts to replicate scientific findings have received considerable attention. Consequently, major social scientific findings that continue to be cited frequently in popular blogs, the press, and academic journals have lost credibility. For example, Sinclair, Hood, and Wright (2014) attempted to replicate the commonly referenced Romeo and Juliet effect (i.e., the idea that parental interference in one’s romantic relationship can lead to increases in love). Interestingly, the researchers were unable to replicate this effect, which was first identified and named in Driscoll, Davis, and Lipetz’s (1972) longitudinal study.
Many research teams, including the Open Science Collaboration (OSC) and the Many Labs Replication Project, have attempted to estimate the reproducibility of psychological science by replicating numerous experimental and correlational studies. Findings from the OSC project appeared to confirm fears about the disheartening predicament our field currently faces: despite using comprehensive methodologies designed to replicate each study’s findings accurately (i.e., achieving appropriate statistical power, using original study materials, and obtaining advice or guidance from the original authors), a high proportion of studies (64%) failed to replicate. By contrast, the Many Labs Replication Project produced dissimilar and far more optimistic results, with 76% of its studies successfully replicated.
Opinions among consumers of science, and not only about these two replication projects, tend to be twofold: some researchers consider the inability to replicate a major concern, while others see it as a failure to carry out replications competently. Regardless, a failure to replicate should not be perceived as a personal failure, nor be received with shame in the scientific community, but rather as scientific critique in the search for truth. Given how inconsistent the results of replication efforts have been, the potential contributing factors deserve mention.
Because no “gold-standard” practice for assessing the success of replication efforts exists, variability in metrics and methodologies has been reported to be a major source of unreliable findings. For example, as Gilbert, King, Pettigrew, and Wilson (2016) note, replication scientists may each be concerned with different metrics of replication success (e.g., statistical significance, effect sizes, statistical power, sample sizes), which are then likely to produce dissimilar results. Additionally, because social psychology is rooted in “evidence-based” scientific practices, the reliance on published studies often contributes to misleading impressions of scientific objectivity. Researchers at public universities are also often pressured to publish significant findings in order to obtain tenure or earn promotion. As a result, researchers may publish results that are biased or grounded in weak statistics, which sets the stage for failed replication attempts.
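To make this concrete, the sketch below (in Python, with entirely hypothetical numbers) shows how two plausible success criteria, statistical significance of the replication versus whether the replication effect falls inside the original study’s 95% confidence interval, can reach opposite verdicts on the same replication attempt. The Fisher-z approximation and the example correlations are assumptions chosen for illustration only, not figures from any of the studies discussed here.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def significant(r_rep, n_rep, alpha=0.05):
    """Criterion (a): the replication correlation is statistically significant,
    using the Fisher z approximation for a correlation's sampling distribution."""
    z = math.atanh(r_rep) * math.sqrt(n_rep - 3)
    p = 2 * (1 - normal_cdf(abs(z)))
    return p < alpha

def inside_original_ci(r_orig, n_orig, r_rep):
    """Criterion (b): the replication effect size falls inside the
    original study's 95% confidence interval."""
    z = math.atanh(r_orig)
    se = 1 / math.sqrt(n_orig - 3)
    lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)
    return lo <= r_rep <= hi

# Hypothetical numbers: a small original study (r = .60, n = 20) and a large
# replication with a much smaller effect (r = .12, n = 400).
print(significant(r_rep=0.12, n_rep=400))                      # True  -> "replicated"
print(inside_original_ci(r_orig=0.60, n_orig=20, r_rep=0.12))  # False -> "failed"
```

The same replication attempt counts as a success under one metric and a failure under the other, which is exactly the kind of disagreement that makes replication verdicts hard to compare across projects.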
Moreover, many researchers have debated whether replications should be “direct” or “conceptual” in nature. A direct replication occurs when a new set of authors attempts to reproduce the original experimental procedure as exactly as possible. A conceptual replication, on the other hand, tests the same hypothesis or experimental result with different authors, different methods, and different measures. Each strategy, particularly direct replication, has made it difficult to reproduce results identical to the original study. Further, critics of direct replication have argued against its utility, noting not only that it is “impossible” to replicate a study exactly, but also that doing so limits the ability to determine the generalizability of findings, which, they argue, should be an important aim of any scientific researcher (Lynch, Bradlow, Huber, & Lehmann, 2015).
Though replication failures may be seen as threatening to the science of social psychology, solutions are certainly within reach. For example, replication efforts might improve through the development and use of a universal metric for assessing the success of scientific replications. Policies could also be instituted requiring authors to complete replications prior to publication, which may strengthen the findings in the published literature. Likewise, continued efforts should be made to encourage open communication between original researchers and replication researchers, so that the methods and procedures of the original study can be reproduced more faithfully. Furthermore, before results are disseminated to the general public, researchers should be required to conduct, or at least report results from, relevant meta-analyses before proclaiming an effect to be reliable.
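As a rough illustration of the meta-analysis suggestion, the sketch below computes a basic inverse-variance (fixed-effect) summary of several hypothetical effect sizes. The numbers and the Cohen’s d scale are assumptions chosen only to show the arithmetic, not results drawn from the literature.

```python
import math

def fixed_effect_meta(effects, standard_errors):
    """Inverse-variance (fixed-effect) meta-analytic summary: a weighted mean
    of study effect sizes, weighting each study by the precision of its estimate."""
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Hypothetical effect sizes (Cohen's d) from an original study plus three replications.
d_values = [0.45, 0.10, 0.18, 0.05]
se_values = [0.20, 0.09, 0.12, 0.08]
print(fixed_effect_meta(d_values, se_values))  # pooled estimate and its 95% CI
```

A pooled estimate of this kind, weighted toward the larger and more precise studies, gives a more defensible basis for calling an effect reliable than any single significant result.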
Driscoll, R., Davis, K. E., & Lipetz, M. E. (1972). Parental interference and romantic love: The Romeo and Juliet effect. Journal of Personality and Social Psychology, 24(1), 1-10. http://dx.doi.org/10.1037/h0033373
Gilbert, D. T., King, G., Pettigrew, S., & Wilson, T. D. (2016). Comment on “Estimating the reproducibility of psychological science.” Science, 351(6277), 1037. https://doi.org/10.1126/science.aad7243
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Jr., Bahník, Š., Bernstein, M. J., . . . Nosek, B. A. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45(3), 142-152. http://dx.doi.org/10.1027/1864-9335/a000178
Lynch, J., Bradlow, E. T., Huber, J. C., & Lehmann, D. R. (2015). Reflections on the replication corner: In praise of conceptual replications. International Journal of Research in Marketing, 32. https://doi.org/10.1016/j.ijresmar.2015.09.006
Nosek, B. A., Cohoon, J., Kidwell, M., & Spies, J. R. (2016). Estimating the reproducibility of psychological science. https://doi.org/10.31219/osf.io/447b3
Sinclair, H. C., Hood, K., & Wright, B. (2014). Revisiting the Romeo and Juliet effect (Driscoll, Davis, & Lipetz, 1972): Reexamining the links between social network opinions and romantic relationship outcomes. Social Psychology, 45. doi:10.1027/1864-9335/a0
Stanley, T. D., Carter, E. C., & Doucouliagos, H. (2018). What meta-analyses reveal about the replicability of psychological research. Psychological Bulletin, 144(12), 1325-1346. http://dx.doi.org/10.1037/bul0000169
Tenure. (n.d.). Retrieved January 14, 2019, from https://www.aaup.org/issues/tenure