Open access publication in the spotlight - 'Replication studies in the Netherlands: Lessons learned and recommendations for funders, publishers and editors, and universities'
Date: 18 November 2024
Author: Open Access Team
Each month, the open access team of the University of Groningen Library (UB) puts a recent open access article by UG authors in the spotlight. This publication is highlighted via social media and the library’s newsletter and website.
The article in the spotlight for the month of November 2024 is titled 'Replication studies in the Netherlands: Lessons learned and recommendations for funders, publishers and editors, and universities', written by a team of researchers led by Maarten Derksen (Faculty of Behavioural and Social Sciences).
Introduction
This paper was a collaborative effort of researchers involved in replication studies and four researchers doing an ethnographic study of those replication studies. It reflects the insights and opinions of all those people together. There is a lot of discussion about replication in science these days and most people think it is very important, but actually doing a replication study gives you a unique perspective on the complexity of the process. In practice, replication is difficult and hard work. As ethnographers, we see what connects the various studies and where there are differences. By combining that view from outside with the experiences of the replicators themselves we were able to write a paper that we think adds to the existing discussion.
We asked first and corresponding author Maarten Derksen a few questions about the article.
One of the ‘lessons learned’ is that replication studies highlight the importance of and need for more transparency of the research process. How can transparency be improved?
What many replicators experienced is that the method section of the original paper does not give enough information to set up a replication study. Particularly with older studies, published at a time when journals had stricter word limits, replicators had trouble figuring out how exactly the experiment worked. They would then contact the original researchers, but often those researchers didn’t remember the details either. Nowadays many journals allow appendices with a detailed description of the experimental procedure, and word limits are less of an issue with online publishing. That helps, but it would be naive to think that this issue can ever be solved completely. Describing a procedure in enough detail to answer all possible questions about it is very difficult, if not impossible. At some point, the theory will tell you what matters and what doesn’t, but as long as the theory hasn’t crystallized yet, anything might be relevant. Being transparent is not like opening the curtains so that everything becomes visible: you have to know what to tell people. It was fascinating to see that conducting a replication study made researchers very aware of this problem. There was a lot of discussion about it during the workshop that led to this paper.
How can replication studies be recognized and rewarded?
Replication studies are already getting more recognition than they used to, but a lot can still be improved. One of the reviewers of the paper objected to our recommendation that universities should value replications when assessing researchers. Their point was that replication is simply not considered innovative, and that this is never going to change. But what we tried to make clear in the paper is that replication is not just duplicating someone else’s work. Replication studies give insight into a methodology and how it can be developed and improved. Replicators often introduced small changes in the procedure, a new version of an instrument for example, and when that led to different results it raised interesting questions about the methodology and the phenomenon being studied. More generally, the emphasis on innovative research should be balanced with equal attention to corroboration. Replication studies are important for working out the procedural details of an experiment and for testing the robustness of findings. It is essential that a PhD student who spent four years meticulously trying to replicate a landmark study in the field receives the same recognition for that work as a colleague who conducted five studies building on that same landmark study. Both are important.
You write that “Replication in the humanities is an idea worth exploring”. Can you explain why?
Replication is not a concept that is associated with fields like history or literary studies. These disciplines focus on interpreting the meaning of events and texts, not on measuring, explaining, and predicting their properties. Moreover, those interpretations can become a factor in later events and texts; they become part of the culture that is being studied. These are two reasons (there are more) why research in the humanities is generally quite different from that in other fields. And yet there were several scholars from the humanities at our workshop who were conducting what you could call a replication [Ed., see this example]. They were ‘redoing’ earlier scholars’ work, and although that came with all sorts of complications, they found it a rewarding experience. After all, the work of historians, for example, is often quite factual rather than purely interpretative: they compile tables with numbers based on archival work, and others can replicate that work. And of course scholars in the humanities often revisit the studies of earlier scholars, rereading and reinterpreting the same sources. That is not called replication, but it is interesting to compare that sort of work to replication studies in other disciplines.
Could you reflect on your experiences with open access and open science in general?
We’re all in favour of open access, but I think I speak for all of us when I say that the current situation is a mess that allows the publishers to extract even more profit from the universities and from the services of the academics who produce and review the content. As to open science: it has many facets, and I think it is important to keep in mind that the various open science practices may not fit every discipline equally well. I’m a big fan of the buffet metaphor introduced by Christina Bergmann: take from open science whatever can help improve the practices and processes in your field, but do not impose it indiscriminately as one big package on all fields of research. Science is diverse; we don’t all work the same way. Preregistration is not always useful, sharing qualitative data can be problematic, and so on. It is important that we keep the diversity of science in mind when we discuss open science.
Useful links:
The blog of the ‘Replication in Action’ project
Citation:
Derksen, M., Meirmans, S., Brenninkmeijer, J., Pols, J., de Boer, A., van Eyghen, H., … de Winter, J. (2024). Replication studies in the Netherlands: Lessons learned and recommendations for funders, publishers and editors, and universities. Accountability in Research, 1–19. https://doi.org/10.1080/08989621.2024.2383349
If you would like us to highlight your open access publication here, please get in touch with us.