
Open Science Practices – Should I Implement Them?

November 6, 2023

About the Author

Mohammad is a recent graduate from the Research Master Psychology at the UvA. He is interested in social cognitive neuroscience, and specifically in political neuroscience, asking questions about how political and ideological phenomena are represented in the brain.


This essay is an edited and web-adapted version of an original essay for the University of Amsterdam’s RMS Psychology programme’s Good Research Practices course in December 2021, directed by Prof. Dr. E. J. Wagenmakers and Dr. D. Matzke.

Saturday December 11, 2021 

Dear Dr. Addens, 

I hope that you are doing well. I am writing to you because of a paper I have just read: a meta-analysis of over 12,000 effects in psychology revealed that 92% of these effects have low statistical power (Stanley et al., 2018). This is bad news because such results are unlikely to replicate, making psychology look unreliable as a science, with results less robust than those of other fields (Chambers, 2019, p. 173). In other words, psychology is in a replication crisis (Maxwell, Lau, & Howard, 2015), caused by the exploitation of ‘researcher degrees of freedom’, where arbitrary decisions in methodology, statistics, or reporting can lead to false discoveries (Berman, Pekelis, Scott, & Van den Bulte, 2018; Wicherts et al., 2016). 

There are different ways to exploit ‘researcher degrees of freedom’. Firstly, many researchers collect data and test for significance, only to find no effect. They then continue to collect data and test repeatedly, sometimes after every single new participant, until significance is achieved (John, Loewenstein, & Prelec, 2012; Simmons, Nelson, & Simonsohn, 2011). Secondly, outlying data points are excluded to achieve positive results, and this is not done uniformly. For example, a researcher might remove outliers from one analysis but include them in another, because this way both tests yield positive results (John et al., 2012). Thirdly, there is HARKing – hypothesizing after the results are known. Many researchers find that their data do not support their hypothesis and ‘torture’ the data by removing or including conditions, trials, participants or variables. Eventually they find a positive effect and, even if it was not their original hypothesis, they present it as if it had been the hypothesis all along (Hollenbeck & Wright, 2016; Wicherts et al., 2016). HARKing is related to publication bias, where non-significant results are not reported (Stanley et al., 2018). Interestingly, some researchers consider HARKing a questionable research practice [QRP; e.g., John et al. (2012)], while others deem it essential for science as long as the analysis is explicitly labeled as exploratory (as opposed to confirmatory) and replicated in another study (Hollenbeck & Wright, 2016; Simmons, Nelson, & Simonsohn, 2013). 
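
To make the first of these concrete, here is a toy simulation I wrote myself (only an illustrative sketch, not an analysis taken from any of the papers I cite). It shows what happens when you keep testing after every new participant and stop as soon as p < .05, even though there is no real effect at all:

```r
# Toy sketch of 'optional stopping' under a true null effect:
# start with 20 participants, test, then add one participant at a time,
# stopping as soon as p < .05 or once 100 participants are reached.
set.seed(1)

optional_stopping <- function(n_start = 20, n_max = 100) {
  x <- rnorm(n_start)                                  # scores with a true mean of 0
  while (length(x) < n_max) {
    if (t.test(x, mu = 0)$p.value < .05) return(TRUE)  # a peek: declared 'significant'
    x <- c(x, rnorm(1))                                # otherwise, add one more participant
  }
  t.test(x, mu = 0)$p.value < .05                      # final test at the maximum sample size
}

mean(replicate(2000, optional_stopping()))             # proportion of false positives
# This proportion lands well above the nominal 5%, even though no effect exists.
```

Reporting only that final test hides all the peeks that came before it.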

What is the consequence of all this? p-hacking. In Null Hypothesis Significance Testing we calculate the p-value, the probability of observing a result at least as extreme as the one obtained, assuming the null hypothesis is true (Chambers, 2019, p. 24). In psychology we aim for a p-value lower than .05 in order to reject the null hypothesis (but you probably already knew that). In practice, however, researchers can push the p-value to just below the cutoff using the methods above (Head, Holman, Lanfear, Kahn, & Jennions, 2015). The trouble is that although the reported p-value has crossed the threshold, the result is not truly significant: with ‘multiple comparisons’, where the data are examined many times, the overall false-positive rate inflates well beyond .05. Researchers should therefore apply a stricter, smaller threshold for each test, but they usually do not (Drachman, 2012). 
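
The same logic applies to multiple comparisons. As another rough sketch of my own (again, not taken from the cited papers), running twenty independent tests on pure noise almost guarantees at least one ‘significant’ p-value, which is exactly why the per-test threshold should shrink:

```r
# Toy sketch: family-wise false positives when 20 tests are run on pure noise.
set.seed(2)
n_tests <- 20

at_least_one_hit <- replicate(5000, {
  p_values <- replicate(n_tests, t.test(rnorm(30), rnorm(30))$p.value)  # 20 true null effects
  any(p_values < .05)
})

mean(at_least_one_hit)  # close to the theoretical 1 - 0.95^20, roughly .64
.05 / n_tests           # a Bonferroni-style per-test threshold: .0025
```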

However, I found that there are solutions to these problems, such as pre-registration (Simmons, Nelson, & Simonsohn, 2020; Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012), which requires researchers to state their research question, methods, analyses and predicted results prior to data collection. If they want feedback on their plan, they can submit a registered report and receive in-principle acceptance, where the journal agrees to publish the paper regardless of its results (Chambers, 2013). These plans also require specifying the sample size (Simmons et al., 2013) and the exclusion criteria for data points (Nosek et al., 2019). You can conduct an a priori power analysis to calculate your sample size (Wicherts et al., 2016). Tools like G*Power (Faul, Erdfelder, Buchner, & Lang, 2009) determine the minimum sample size needed to detect a true effect once you enter the details of your experimental design. Practically speaking, this helps you adjust your design to your budget while maintaining high power: for example, you could collect more data per participant and test fewer participants, and still keep your statistical power high. 
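
If you would rather stay inside R than install a separate program, base R can give you the same kind of answer as G*Power for simple designs. Here is a small example of my own, using placeholder numbers (a medium effect of d = 0.5, an alpha of .05 and 80% power) rather than values from any particular study:

```r
# A priori power analysis for a two-sample t-test in base R.
# Placeholder inputs: effect size d = 0.5 (in units of the SD), alpha = .05, power = .80.
power.t.test(delta = 0.5, sd = 1, sig.level = .05, power = .80,
             type = "two.sample", alternative = "two.sided")
# The output reports n, the required sample size per group (about 64 here).
```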

This is part of the broader open science reforms, which strive to make science accessible and transparent (Flake, 2021). What does this mean for psychology? By submitting a registered report or a pre-registration, researchers make their aims and methods clear. The new guidelines prioritize good research practices and robustness over novelty, so the false discovery rate should decrease and findings should replicate more often. Additionally, when data and code are published, researchers can scrutinize each other’s work. To maximize reproducibility, authors should follow the “21 Word Solution”, a short disclosure statement declaring how the sample size was determined and that all data exclusions, manipulations and measures have been reported: spare no detail of your experimental design and statistical analyses (Simmons, Nelson, & Simonsohn, 2012; Stanley et al., 2018). While this is extra work for already busy researchers, it is a step in the right direction and is ultimately better for science. 

Please let me know what you think ASAP! How will you implement these new practices into your own research? 

Best, 

Dr. Phoebe Enns

Sunday December 12, 2021 

Hello Dr. Enns, 

I am devastated to read your views on this topic! This so-called “replication crisis” has caused so much pain. In my view it is more of a reputation crisis! The “open access” (OA) and “replication” culture has led some researchers to doubt their own work and to be treated like “criminals” because others failed to replicate their results (Schnall, 2015). This immediately leads to accusations of fraud or scientific misconduct, while ignoring two important things: firstly, the possibility that it is the replications, rather than the original experiments, that have low statistical power (Maxwell et al., 2015); and secondly, that regression to the mean predicts that the effect sizes of successful replications will be smaller than those of the original studies (Lehrer, 2010). This open science movement is terrible and has led to the creation of websites such as replicationindex.com, which is a smear campaign against highly reputable researchers. Already on the home page I read negative words like “untrustworthy” and “outlandish claims”, while the rest of the site is dedicated to targeting figures such as John Ioannidis through articles like “Ioannidis is Wrong Most of the Time” (Schimmack, 2020). How is this acceptable? 

Lack of professionalism aside, even if I intended to follow OA guidelines, I would not know where to begin. For example, the “Transparency and Openness Promotion” guidelines (TOP; Nosek et al., 2016) encourage researchers to disclose all data and code used to conduct the experiment and analyses. Further, the “Workflow for Open Reproducible Code in Science” (WORCS; Van Lissa et al., 2021) suggests that scientists work with reproducible manuscripts from the start: i.e., that they use something called R Markdown, which allows them to write their papers and code in the same document, thus reducing human error and allowing for easy reproducibility. This is intriguing, but how does it even work? I am far too busy to learn new skills! Likewise, pre-registering will take a long time: I will have to prepare the required information (such as the sample size), which is time consuming. If I choose to submit a registered report, I will have to wait at least eight weeks for feedback before I can start collecting data (Chambers, Dienes, McIntosh, Rotshtein, & Willmes, 2015). If I preregister instead, should I do it on the Open Science Framework (OSF; Foster & Deardorff, 2017) or on AsPredicted.org? How do I know which is the right one? The lack of a clear path makes me doubt how valid the open science reforms are. 

In contrast to these unclear guidelines, scientific journals, which have been in use since the 17th century (Kronick, 1988), have clear publication guidelines and do not require transparency as a prerequisite for peer review. Although some of them encourage potential authors to follow these open science guidelines, the guidelines are not always enforced (Chambers, 2019, p. 80), and many journals do not encourage them at all! For example, 35 of the 50 high-impact journals sampled by Nosek et al. (2021) did not require data transparency, and 41 did not require analysis transparency. Therefore, I suspect that these rules were put in place merely to appease some researchers. OA journals, such as PLOS ONE, have lower prestige (Chambers, 2019, p. 128). I personally prefer to publish in Nature or Psychological Science, which are not fully OA and give me more freedom to analyze my data according to my own judgment (Chambers, 2019, p. 130). In addition, they are more prestigious, and you get your money’s worth by publishing there (Corbyn, 2013). 

I hope that I have convinced you to drop these radical new ideas! 

Sincerely, 

Dr. Reed Addens

Tuesday December 14, 2021 

Dear Dr. Addens, 

I read your concerns, and they are valid! After all, open science is a new concept in psychology, and it has evidently caused a lot of public pain and doubt. However, I do recommend that you read Simmons et al. (2020); they address many of your worries! For example, they explain how pre-registration saves you time, because all your analyses and parts of your future paper are already written out. It does not take more time: it shifts the effort to the earlier stages of the work. 

Regarding your concerns about reputation – thankfully I have not heard of anyone who has lost their job due to a replication failure. Only those who committed fraud, such as Diederik Stapel, lost their jobs. Stapel was a professor at Tilburg University who faked data for 55 peer-reviewed studies (“Committee Levelt | Tilburg University,” 2012; Markowitz & Hancock, 2014). Replications are not personal vendettas but are meant to improve research. 

Furthermore, I agree that there is not one clear path for pre-registration. However, this is expected because the idea is still in its early stages (Chambers, 2019, p. 198). It will develop over the next few years, and we will find the best fit. 

Unfortunately, I disagree with your stance on the authority of journals, because they have become corrupt. They do not want what is best for science, only what is best for themselves and their publishers’ pockets. For example, the peer-review system exists to provide criticism of others’ unpublished work in order to ensure its quality (Alberts, Hanson, & Kelner, 2008), but journals have used this system to exploit researchers, who are expected to peer-review papers without being paid for it. It is estimated that “reviewers globally worked on peer reviews [for] over 100 million hours in 2020”, labor valued at over 1.5 billion USD (Aczel, Szaszi, & Holcombe, 2021). Meanwhile, journals and their publishers profit from this free labor. Take, for example, Dr. Arthur C. Evans Jr., the CEO of the American Psychological Association (APA). The APA’s journals rely on peer review and include Psychology & Neuroscience, Experimental and Clinical Psychopharmacology and the Journal of Threat Assessment and Management. Dr. Evans made over 900,000 USD in 2019, while the APA itself, thanks to its exemption, pays no income tax (“Financial Information,” 2019, p. 7). Interesting how his salary is so high while peer reviewers do not get paid. 

On the topic of openness and transparency, as you said: many journals do not encourage OA behaviors, and they suffer from publication bias, meaning that they accept mostly papers with novel, interesting or surprising findings (Hardwicke et al., 2020). This could nudge researchers towards QRPs so that they get published. Furthermore, journals almost punish researchers who want to make their work OA. For example, they charge authors an extra fee, known as the Article Processing Charge (APC), for making their work OA, in which case the work becomes accessible to all immediately upon publication. The alternative is to wait 12 months, after which it becomes OA for free (Chambers, 2019, pp. 128–129), which might seem appealing considering how hefty the APC is. For example, it is 1,800 USD before taxes for Methods in Psychology (“Methods in Psychology – Journal – Elsevier,” n.d.), 6,700 USD for Trends in Cognitive Sciences (“Cell Press,” 2021) and 5,560 USD for Nature Communications (“Article Processing Charges | Nature Communications,” 2021). Luckily, journal fees are being contested. Universities in the Netherlands have reached a deal with Elsevier under which Dutch researchers do not have to pay an APC (“Open Access Agreement for VSNU (Nl) | Elsevier,” 2020). In the US, Harvard’s Faculty of Arts and Sciences introduced an OA mandate in 2008, which requires researchers to make their work accessible to all and makes it cheaper for the university to publish (Priest, 2012). This allows the public (e.g., journalists, doctors and teachers) who financed the research to access the articles for free! Chambers (2019) estimates that without a subscription the average access fee is 30 USD (p. 127). In short: journals profit by not paying for peer review; they then charge authors for publishing their work and for making it OA, and later charge readers for accessing it (if it is not OA). The taxpayer, who funds this research, has to pay a journal in order to see the outcome. 

The fight for OA is just getting started! Over the past few years, scholars have developed new ways of communicating information. For example, the online platform arXiv allows researchers to openly publish pre-prints of their articles (McKiernan, 2000). This lets them share their ideas regardless of whether, or when, a scientific journal accepts their manuscript. There is also Sci-Hub, a repository founded in 2011 by Alexandra Elbakyan, which allows researchers to bypass journals’ paywalls and access most published articles (Himmelstein et al., 2018). Clearly, journals are threatened by researchers’ wish to make their work accessible. For example, Nature has expressed its discontent with Sci-Hub (Alexandra Elbakyan, 2021b; Else, 2021), and Sci-Hub’s Twitter account was suspended (Alexandra Elbakyan, 2021a). This just means that the revolution is working! Additionally, scholars are starting to use less formal, non-academic platforms to communicate. Many researchers use social media platforms such as Twitter to share their ideas, research and data, as well as to interact with other researchers (Álvarez-Bornstein, 2019; Letierce, Passant, Decker, & Breslin, 2010). Eventually, these new communication channels could replace the current ‘traditional’ modes of communication such as conferences or emails. Maybe in 2030 it will be acceptable to have your Twitter handle on your CV or lab website rather than your email address – who can tell? 

Let me know if you make an account, so we can be Twitter buddies! 

Dr. Phoebe Enns

Monday December 20, 2021 

Hello Dr. Enns, 

I have to say, I had no idea that journals were so corrupt. I relied on them because I felt that they provide uniformity and order, and that their guidelines make sure that all scientific papers look similar and are therefore easier to navigate. But reading your letters has made me doubt this completely. I think that science is at a stage where researchers can come together to form models that can replace journals. It seems that we are already coming up with ways to improve research by making everything transparent and accessible, despite the discontent of the journals. I think the next step would be to create a system that takes peer review into our own hands and makes it more transparent and uniform. Combine that with the alternative methods of publication you mentioned above, and (non-OA) journals become redundant in the scientific ecosystem. Science is much better without them. 

Your arguments are very compelling, and I have decided to give these open science practices a try. If everybody else is doing it, then what have I got to lose? After all, I want what is best for science. I will therefore submit a registered report for my next experiment (feedback is always good!) and learn how to use G*Power and everything else necessary to improve the quality of my discoveries. It will probably take me a few tries to get it right. I have already learned the basics of R Markdown by following the instructions written by Alzahawi (2021), and I was able to compile this letter to send to you! 
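
In case you are curious, the source of this letter looks roughly like the sketch below. This is a stripped-down, made-up example rather than my actual file, but it shows the idea: the text and the analysis live in one document, and the numbers are recomputed every time it is knitted.

````markdown
---
title: "A Letter to Dr. Enns"
author: "Reed Addens"
output: html_document
---

Dear Dr. Enns, the analysis below is re-run every time I knit this document,
so the numbers in my text can never drift away from my code.

```{r toy-analysis}
scores <- c(4, 5, 7, 6, 5)   # toy data, purely for illustration
mean(scores)
```

The mean score was `r mean(scores)`.
````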

Hope to hear from you again soon! 

Dr. Reed Addens

Student Initiative for Open Science

This article has been written as part of an ongoing collaborative project with the Student Initiative for Open Science (SIOS). The Amsterdam-based initiative is focused on educating undergraduate- and graduate-level students about good research practices.

References

I have used R (Version 4.2.2; R Core Team, 2022) and the R packages papaja (Version 0.1.1.9001; Aust & Barth, 2022) and tinylabels (Version 0.2.3; Barth, 2022), together with Zotero (Version 5.0.96.3), for writing my paper and citing my sources. 

  • Aczel, B., Szaszi, B., & Holcombe, A. O. (2021). A billion-dollar donation: Estimating the cost of researchers’ time spent on peer review. Research Integrity and Peer Review, 6 (1), 14. https://doi.org/10.1186/s41073-021-00118-2
  • Alberts, B., Hanson, B., & Kelner, K. L. (2008). Reviewing Peer Review. Science, 321 (5885), 15–15. https://doi.org/10.1126/science.1162115
  • Alexandra Elbakyan. (2021a, December 12). On January, 6 2021 Twitter has banned Sci-Hub account @Sci_Hub with 185K followers. People are currently trying to contact Twitter to unblock it. Until then you will be able to get Sci-Hub news here: @Sci_Hub_tweets [Tweet]. Retrieved December 16, 2021, from https://twitter.com/ringo_ring/status/1469844242998693888
  • Alexandra Elbakyan. (2021b, December 14). Nature has actually contacted me for comment about accusations that Sci-Hub is a threat, here is my full response / it is clear that academic publishers care about their money, not about security of other people https://t.co/f3LvOK46lf [Tweet]. Retrieved December 16, 2021, from https://twitter.com/ringo_ring/status/1470815566160179201
  • Álvarez-Bornstein, B. (2019). Who is interacting with researchers on Twitter? A survey in the field of Information Science. JLIS.it, 87–106. https://doi.org/10.4403/jlis.it-12530
  • Alzahawi, S. (2021, July 11). Writing reproducible manuscripts in R. Retrieved December 20, 2021, from https://shilaan.rbind.io/post/writing-reproducible-manuscripts-in-r/
  • Article processing charges | Nature Communications. (2021). Retrieved December 16, 2021, from https://www.nature.com/ncomms/article-processing-charges
  • Aust, F., & Barth, M. (2022). papaja: Prepare reproducible APA journal articles with R Markdown. Retrieved from https://github.com/crsh/papaja
  • Barth, M. (2022). tinylabels: Lightweight variable labels. Retrieved from https://cran.r-project.org/package=tinylabels
  • Berman, R., Pekelis, L., Scott, A., & Van den Bulte, C. (2018). P-Hacking and False Discovery in A/B Testing (SSRN Scholarly Paper No. ID 3204791). Rochester, NY: Social Science Research Network. https://doi.org/10.2139/ssrn.3204791
  • Cell Press. (2021). Retrieved December 16, 2021, from https://www.cell.com/rights-sharing-embargoes
  • Chambers, C. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49 (3), 609–610. https://doi.org/10.1016/j.cortex.2012.12.016
  • Chambers, C. (2019). The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice. In The Seven Deadly Sins of Psychology. Princeton University Press. https://doi.org/10.1515/9780691192031
  • Chambers, C., Dienes, Z., McIntosh, R. D., Rotshtein, P., & Willmes, K. (2015). Registered Reports: Realigning incentives in scientific publishing. Cortex, 66, A1–A2. https://doi.org/10.1016/j.cortex.2015.03.022
  • Committee Levelt | Tilburg University. (2012). Retrieved December 20, 2021, from https://www.tilburguniversity.edu/nl/over/gedrag-integriteit/commissie-levelt
  • Corbyn, Z. (2013). Price doesn’t always buy prestige in open access. Nature. https://doi.org/10.1038/nature.2013.12259
  • Drachman, D. (2012). Adjusting for Multiple Comparisons. Journal of Clinical Research Best Practices, 8 (7), 1–3.
  • Else, H. (2021). What Sci-Hub’s latest court battle means for research. Nature, 600 (7889), 370–371. https://doi.org/10.1038/d41586-021-03659-0
  • Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41 (4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149
  • Financial Information. (2019). Retrieved December 17, 2021, from https://www.apa.org/about/finance
  • Flake, J. K. (2021). Strengthening the foundation of educational psychology by integrating construct validation into open science reform. Educational Psychologist, 56 (2), 132–141. https://doi.org/10.1080/00461520.2021.1898962
  • Foster, E. D., & Deardorff, A. (2017). Open Science Framework (OSF). Journal of the Medical Library Association : JMLA, 105 (2), 203–206. https://doi.org/10.5195/jmla.2017.88
  • Hardwicke, T. E., Serghiou, S., Janiaud, P., Danchev, V., Crüwell, S., Goodman, S. N., & Ioannidis, J. P. A. (2020). Calibrating the Scientific Ecosystem Through Meta-Research. Annual Review of Statistics and Its Application, 7 (1), 11–37. https://doi.org/10.1146/annurev-statistics-031219-041104
  • Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The Extent and Consequences of P-Hacking in Science. PLOS Biology, 13 (3), e1002106. https://doi.org/10.1371/journal.pbio.1002106
  • Himmelstein, D. S., Romero, A. R., Levernier, J. G., Munro, T. A., McLaughlin, S. R., Greshake Tzovaras, B., & Greene, C. S. (2018). Sci-Hub provides access to nearly all scholarly literature. eLife, 7, e32822. https://doi.org/10.7554/eLife.32822
  • Hollenbeck, J. R., & Wright, P. M. (2016). Harking, Sharking, and Tharking: Making the Case for Post Hoc Analysis of Scientific Data. Journal of Management, 43 (1), 5–18. https://doi.org/10.1177/0149206316679487
  • John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23 (5), 524–532. https://doi.org/10.1177/0956797611430953
  • Kronick, D. A. (1988). Review of A Historical Catalogue of Scientific Periodicals, 1665-1900, with a Survey of Their Development. Libraries & Culture, 23 (2), 243–245. Retrieved from https://www.jstor.org/stable/25542063
  • Lehrer, J. (2010, December 6). The Truth Wears Off. Retrieved December 12, 2021, from https://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off
  • Letierce, J., Passant, A., Decker, S., & Breslin, J. (2010). Understanding how Twitter is used to spread scientific messages.
  • Markowitz, D. M., & Hancock, J. T. (2014). Linguistic Traces of a Scientific Fraud: The Case of Diederik Stapel. PLOS ONE, 9 (8), e105937. https://doi.org/10.1371/journal.pone.0105937
  • Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist, 70 (6), 487–498. https://doi.org/10.1037/a0039400
  • McKiernan, G. (2000). arXiv.org: The Los Alamos National Laboratory e-print server. International Journal on Grey Literature, 1 (3), 127–138. https://doi.org/10.1108/14666180010345564
  • Methods in Psychology – Journal – Elsevier. (n.d.). Retrieved December 16, 2021, from https://journals.elsevier.com/methods-in-psychology
  • Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S., Breckler, S., . . . DeHaven, A. C. (2016). Transparency and Openness Promotion (TOP) Guidelines. https://doi.org/10.31219/osf.io/vj54c
  • Nosek, B. A., Beck, E. D., Campbell, L., Flake, J. K., Hardwicke, T. E., Mellor, D. T., . . . Vazire, S. (2019). Preregistration Is Hard, And Worthwhile. Trends in Cognitive Sciences, 23 (10), 815–818. https://doi.org/10.1016/j.tics.2019.07.009
  • Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Almenberg, A. D., . . . Vazire, S. (2021). Replicability, Robustness, and Reproducibility in Psychological Science. https://doi.org/10.31234/osf.io/ksfvq
  • Open Access Agreement for VSNU (Nl) | Elsevier. (2020). Retrieved December 16, 2021, from https://www.elsevier.com/open-access/agreements/VSNU-NL
  • Priest, E. (2012). Copyright and the Harvard Open Access Mandate. Northwestern Journal of Technology and Intellectual Property, 10 (7), 377–440. Retrieved from https://heinonline.org/HOL/P?h=hein.journals/nwteintp10&i=408
  • R Core Team. (2022). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.R-project.org/
  • Schimmack, U. (2020, December 24). Ioannidis is Wrong Most of the Time. Retrieved December 12, 2021, from https://replicationindex.com/2020/12/24/ioannidis-is-wrong/
  • Schnall, S. (2015, June 23). Simone Schnall on her Experience with a Registered Replication Project | SPSP. Retrieved December 12, 2021, from https://www.spsp.org/news-center/blog/simone-schnall-on-her-experience-with-a-registered-replication-project
  • Simmons, J., Nelson, L. D., & Simonsohn, U. (2012). A 21 Word Solution (SSRN Scholarly Paper No. ID 2160588). Rochester, NY: Social Science Research Network. https://doi.org/10.2139/ssrn.2160588
  • Simmons, J., Nelson, L., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22 (11), 1359–1366. https://doi.org/10.1177/0956797611417632
  • Simmons, J., Nelson, L., & Simonsohn, U. (2013). Life after P-Hacking. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2205186
  • Simmons, J., Nelson, L., & Simonsohn, U. (2020). Pre-registration: Why and How. Journal of Consumer Psychology, 31 (1), 151–162. https://doi.org/10.1002/jcpy.1208
  • Stanley, T., Carter, E. C., & Doucouliagos, H. (2018). What Meta-Analyses Reveal About the Replicability of Psychological Research. Psychological Bulletin. https://doi.org/10.1037/bul0000169
  • Van Lissa, C. J., Brandmaier, A. M., Brinkman, L., Lamprecht, A.-L., Peikert, A., Struiksma, M. E., & Vreede, B. M. I. (2021). WORCS: A workflow for open reproducible code in science. Data Science, 4 (1), 29–49. https://doi.org/10.3233/DS-210031
  • Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An Agenda for Purely Confirmatory Research. Perspectives on Psychological Science, 7 (6), 632–638. https://doi.org/10.1177/1745691612463078
  • Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking. Frontiers in Psychology, 7, 1832. https://doi.org/10.3389/fpsyg.2016.01832
