Welcome to the interesting not-quite-debate on scientific publishing
Debates are a tricky thing. At the beginning of today's debate about scientific publishing at the Heidelberg Laureate Forum, it looked very much as if the organisers had forgotten to invite anyone from the other side of an important issue. As the participants laid out their theses, Gerard Meijer (Director of the Max Planck Society's Fritz Haber Institute in Berlin) began by opening up the topic of open access, still a key problem in scientific publishing: given that most scientific research is paid for by the taxpayer, shouldn't it be accessible to all, and not hidden behind the paywalls of (commercial) publishers? But in discussing that topic, why hadn't the organisers invited a participant representing the publishers?
A matter of framing
But that, in a way, was misleading framing. The organisers, I learned later, did not mean to center the discussion on the open access question. Instead, they intended a focus on what science needs from the publishing practices of tomorrow. With that additional information, my view of the debate has shifted in retrospect. So let me focus on those parts of the debate that did not concern open access. Naturally, this will be a subjective selection. Here goes.
What should we publish in the first place? Let's not get bogged down merely thinking about documents. Gabriele von Voigt (with an impressive career in both academia and industry; now a professor for Computational Health Informatics at the University of Hannover) opened the field up wide by talking about best practices for publishing scientific data (and the associated metadata). Check out the FAIR website for details. Mentally checking how those principles apply to astronomy, I am all in favour: yes, I would like the scientific data I am interested in to be findable (the "F" in "FAIR"), with suitable metadata, included in searchable resources. I would like the data to be accessible ("A") via a simple protocol. I would like the data to be interoperable ("I"). As an astronomer, I have become used to image data being in the standard FITS format, for instance, but such standard formats are apparently not yet available in all fields of science. And of course the data should be reusable ("R"), for me and others, which requires proper documentation.
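To make the four FAIR letters concrete, here is a toy sketch (my own illustration, not from the debate, and the field names are hypothetical rather than taken from any specific metadata standard) of what a FAIR-style metadata record for an astronomical dataset might look like:

```python
import json

# Hypothetical FAIR-style metadata record for an astronomical image.
# The field names and URLs are illustrative placeholders only.
record = {
    "identifier": "doi:10.0000/example-dataset",          # F: findable via a persistent identifier
    "title": "Example wide-field survey image",
    "keywords": ["astronomy", "survey", "imaging"],        # F: rich metadata for search
    "access_url": "https://example.org/data/image.fits",   # A: retrievable via a simple protocol (HTTP)
    "format": "FITS",                                      # I: a community standard format
    "license": "CC-BY-4.0",                                # R: clear reuse terms
    "documentation": "https://example.org/data/README",    # R: documentation enabling reuse
}

print(json.dumps(record, indent=2))
```

The point of the sketch is simply that each FAIR letter maps onto a concrete, machine-readable field; real implementations use community schemas rather than an ad-hoc dictionary like this.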
Staying with the "what," Julie Williamson, of the University of Glasgow, took matters one step further: we need to preserve artifacts as well. That makes sense to me: when it comes to hardware development, for instance, we need to preserve more than just the publications and descriptions. We should preserve prototypes.
The submission-reviewer gap
Williamson brought up another important point. In the open access part of the debate, Klaus Hulek, Editor-in-Chief of "Zentralblatt der Mathematik" (an annotated database of mathematics papers that is the envy of scientists in other fields), had described the changing role of publishers: less about dissemination (now that the internet has made dissemination so much easier) and more about quality control. But, as Williamson pointed out, article submissions are growing by 10% per year, while the reviewer pool is growing by a mere 2%. We have a problem.
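A quick back-of-the-envelope calculation (mine, not from the debate) shows how fast that gap widens if the quoted growth rates hold:

```python
# Compound the quoted annual growth rates over a decade:
# submissions grow 10%/year, the reviewer pool 2%/year,
# both normalized to 1.0 today.
submissions, reviewers = 1.0, 1.0
for year in range(10):
    submissions *= 1.10
    reviewers *= 1.02

print(f"after 10 years: submissions x{submissions:.2f}, reviewers x{reviewers:.2f}")
print(f"per-reviewer load grows by a factor of {submissions / reviewers:.2f}")
```

At those rates, submissions grow about 2.6-fold in ten years while the reviewer pool grows only about 1.2-fold, so the load per reviewer roughly doubles (a factor of about 2.13) in a single decade.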
Williamson’s proposed solution: we should publish less, but better. Of course, that means finding alternative criteria for evaluating academic achievement, other than quantitative measures under which publishing more leads to a higher ranking. (Fields Medalist Efim Zelmanov pessimistically predicted that, for any evaluation algorithm, large universities would find a way of gaming the system; Joseph Konstan of the University of Minnesota, Co-Chair of the Publications Board of the Association for Computing Machinery, pointed out that instead of relying on algorithms or indices, institutes should rely on their own resident experts.)
As Meijer pointed out, Germany’s national research funding organisation, the Deutsche Forschungsgemeinschaft (DFG), changed its rules a while ago to discourage excessive publishing. Since then, in certain parts of grant applications, scientists may only list their five best articles of the past years, a process that rewards scientists who publish fewer but better papers over those who go for quantity at the expense of quality. If you are interested: when I tweeted this, the DFG tweeted back this link:
The first moves were made in 2010 ➡️ https://t.co/qsPoTiuwI0. On the modifications of the #DFG's Senate in 2014 see ➡️ https://t.co/XuGUe341Jz. And here are the Guidelines for Publication Lists ➡️ https://t.co/U4aCiP9fEF
— DFG public (@dfg_public) September 23, 2019
…apparently, the practice goes back to 2010!
Was this event, whose content as well as form I have tried to capture in this blog post, a debate? Probably not in the strict sense of the word. It was definitely interesting, but more in the manner of a brainstorming session. Real panel debates are probably harder to pull off than one might think.