The Pressure to Publish Is Challenging the Foundations of Academic Integrity
BLOG: Heidelberg Laureate Forum
According to an analysis in Science, around 1.92 million papers were indexed by the Scopus and Web of Science publication databases in 2016. In 2022, that number rose to 2.82 million. The authors of that analysis asked themselves whether this is eroding trust in science; can all this research be genuine and trustworthy?
At the 12th Heidelberg Laureate Forum, which took place in September 2025, a panel of mathematicians, publishers, and research integrity experts tried to address this thorny situation. Their discussion revealed a complex ecosystem for academics caught between the need for openness and the pressure to perform in the publishing arena.
Publish or Perish
The modern academic machine runs fast. In the “good old days,” researchers could take their time and immerse themselves in their work. They would sometimes discuss directly with reviewers and engage in a complex back-and-forth, says Yukari Ito, a mathematician and Professor at Nagoya University.
Nowadays, this is no longer the case. The “publish or perish” mantra has firmly gripped the academic world. Papers are expected to appear in rapid succession, each feeding the next grant application or promotion review.
“There are many papers published every day, which also puts pressure on us to publish many papers,” Ito says. “We have to be careful with this evaluation. Many people look at the number of papers and citations and so on,” she added.
Eunsang Lee, who works in the research integrity group at Springer Nature, also acknowledges this issue and says the pressure is coming from the research institutions.
“We understand that there is pressure to publish many papers, especially from the institution or funding bodies,” he said. “We are trying to work together closely with institutions and funding bodies to relieve this kind of pressure, but it’s a very long way to go. It also depends on the culture and country. As a publisher there’s not much we can do, but we try to provide a safer platform for researchers.”
This pressure is not without consequences, and the consequences are felt most keenly by young researchers. Data consistently shows that PhD students suffer severe mental strain and are at high risk of depression and anxiety. For many young researchers, mental health problems have become a “normal” part of research life.
At the same time, this approach can end up prioritizing quantity over quality, resulting in a flood of papers. Some of these papers are good or excellent; some are repetitive or inconsequential; and some are outright fraudulent.
How Bad Is Academic Fraud?
There is no way to tell just how widespread academic fraud is. Lee, whose main task is to analyze data from potentially problematic papers or authors, says he was not originally aware of this problem. “After joining the publishing industry, I realized that academic integrity is a serious problem.”
There are many types of breaches, Lee says. “Data fabrication is one example. For publishers, this is difficult because identifying data fabrication requires both expertise and in-depth analysis. The problem is that we are getting this at a massive scale, so it’s really hard to tackle.”
But aside from the classic issues, newer risks have also emerged, in particular due to the rise of AI. AI can mask plagiarism under clever paraphrasing, or it can generate or edit images (sometimes, with hilarious results). It can also facilitate chains of citations, connecting unrelated papers in an artificial web of self-reference.
“AI can generate text and images, which can be a big problem and result in obviously fake science. We also see many cases of irrelevant references because some people include self-citations of unrelated work or some other irrelevant references to inflate their citation profile. It’s a big problem and also hard to tackle as a publisher.”
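As a toy illustration of the kind of signal an integrity team might screen for, the fraction of a paper’s references that share an author with the paper itself can be computed and flagged when unusually high. The names, data, and any flagging threshold here are invented; real publisher tooling is not described in the source.

```python
def self_citation_rate(paper_authors, references):
    """Fraction of references sharing at least one author with the paper.

    paper_authors: set of author names on the submitted paper
    references: list of author-name sets, one per cited work
    """
    if not references:
        return 0.0
    self_cites = sum(1 for ref_authors in references
                     if paper_authors & ref_authors)  # any shared author
    return self_cites / len(references)

# Hypothetical submission: 3 of 4 references point back to the authors' own work.
paper = {"A. Author", "B. Coauthor"}
refs = [{"A. Author"}, {"B. Coauthor", "C. Other"},
        {"A. Author"}, {"D. Unrelated"}]
rate = self_citation_rate(paper, refs)
print(rate)  # 0.75
```

In practice a screen like this is only a first-pass heuristic: legitimate research programs cite their own prior work, so a high rate warrants human review rather than automatic rejection.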
There is no simple way to weed out these problems, but it is not impossible, either.
The Rise of the Scientific Sleuth
For Lonni Besançon, an Assistant Professor of Visualization at Linköping University in Sweden and an alumnus of the HLF, questions of integrity became personal during the pandemic.
“I got into misconduct finding and looking at it because some of the papers that I was reading during COVID were a bit problematic,” he says. “I have a methodological background also, so I started looking into this more, and I’ve now discovered quite a few problematic papers. Around a few thousand, in different fields, not in mine. I’ve been reporting on them for a while and talking about this.”
Besançon belongs to a growing group of volunteer “scientific sleuths” – researchers who spend their free time identifying falsified data, image duplications, and plagiarized manuscripts. Many of them have exposed widespread misconduct, including fake journals and so-called “paper mills” that mass-produce fraudulent articles for paying clients. Science greatly benefits from this process, but this is largely unpaid, underappreciated, volunteer work.
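One classic technique in the sleuths’ toolbox for spotting plagiarized or mass-produced manuscripts is near-duplicate text detection via word n-gram “shingles” and Jaccard similarity. This is a minimal sketch of that general idea, not any particular sleuth’s pipeline, and the sample sentences are invented.

```python
def shingles(text, n=5):
    """Set of overlapping word n-grams ('shingles') from a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets (0 = disjoint, 1 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

doc1 = "the pressure to publish is eroding trust in modern science"
doc2 = "the pressure to publish is eroding trust in academic research"

sim = jaccard(shingles(doc1), shingles(doc2))
print(sim)  # 0.5 -- half the 5-word shingles are shared
```

Genuine plagiarism detectors layer many refinements on top (stemming, hashing for scale, alignment of matching passages), but the shingle-overlap idea is the common core.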
Assessing scientific discovery has never been easy, and the need for better metrics is clear. Yet, for Besançon, the core issue is the idea of metrics itself.
“The day something becomes a metric is the day people start gaming that. Personally, I’m against all kinds of metrics. I see the point of citations …, but I think if we make a metric, eventually people will game it. It’s always been the case.”
The French researcher also emphasizes that when a problem is spotted, this is not always because of bad intent. Honest mistakes obviously happen. However, retractions or corrections have come to be seen as damning for scientists, when in fact, they are a normal part of the scientific process.
“We shouldn’t see the correction of papers or the retraction of papers as a problem. If there’s a mistake in one of our papers, we should correct it because this is the body of knowledge that we’ve created for the world,” says Besançon.
Lee echoed that sentiment: “Don’t be afraid of retractions and corrections caused by honest errors. Everyone can make a mistake. Publishers always try to distinguish between corrections caused by honest errors and genuine integrity breaches. So don’t worry about this. Post-publication actions by authors are the healthiest way to achieve a high standard of research today. That’s something I want to mention especially for young researchers.”
So What Do We Do?
Maintaining trust in a system that produces nearly three million papers a year is an enormous challenge. Transparency is a good way to start. If research integrity is the foundation of science, then transparency is its scaffolding. It supports trust not only between researchers and their peers, but between science and the public.
“Transparency is key. When in doubt, share. That’s one thing I can say confidently. If you can share, share. If you can’t, for instance if you have anonymization problems, just explain why you’re not sharing. Be transparent with everything as much as you can.”
This ethos of openness, honesty, and accountability was echoed across the panel. It is the best defense scientists have at their disposal to showcase the value of their work.
“It’s obvious that you have to ensure fully transparent authorship, including authorship changes, and you also have to declare any conflicts of interest and cite your sources. The next thing I would say, which is especially true in the case of mathematics and computer science, is to share your code and data as much as you can,” says Lee.
Yet, even as researchers and publishers confront misconduct, the structure of academic incentives remains unchanged, pushing for more papers. Ito suggests that perhaps the entire system should rediscover the patience that once defined scholarship: to slow down, to prioritize originality over output, and to treat retractions and corrections as acts of integrity and not as failure.
“For young researchers, publishing your first paper is very important,” she said. “You have to find one big problem. But then, you should continue your research by being original and interesting.”
Ultimately, transparency and integrity alone will not solve a systemic problem rooted in incentives. Until universities, funders, and publishers work together to devise a system that truly rewards rigor and value over volume, the pressure cooker of modern academia will keep boiling.

You are living in an inflation bubble about to pop. Its main characteristic is over-specialization, with all kinds of things escalating into sheer idiocy. Everyone has learned to do exactly one thing and do it perfectly, but has sacrificed the ability to do anything else – even the slightest potential for change has been rationalized away as a flaw in a perfect cogwheel, meant to run within a perfect machine. Well, the machine is overheating and breaking apart. All the wheels are free to rotate as they please, so they just turn faster and faster, doing the only thing they can do, freed from any context that used to give them their function, meaning, and purpose, flooding and inflating the balloon with the pointless bullshit they produce.
Well. Pull the plug or let it explode. If it doesn’t want to stop, it wants to die. Let it die, go through the debris, see what you can use, learn what you can learn and start from scratch.
Science is one of many small bubble universes within the big bubble multiverse, finding its own way to do what every bubble does – a distorted mirror image, a variation of the general principle, a local adaptation of the global DNA, coz that’s how fractals work. You can bet your ass your ass is doing a variation of the same right now, I just wonder if the Solar System or the Milky Way are affected or if it’s just a regional phenomenon on Earth. As such, scienceverse is one of many Petri dishes in a global lab, where you can find solutions that may or may not be adapted to solve similar problems within other bubbles – some will survive, most won’t.
I wonder if scientists can solve problems differently than the rest of the planet, which solves them by all kinds of mass suicides. You need to slow down and reconsider. You need to withstand the explosions of all the popcorn around you. You need to be prepared to start anew asap, both in a Mad Max world and in a Hangover world, where nothing really bad has happened because we all just collapsed from exhaustion and let our bubbles evaporate, and in all the possible shades of the Apocalypse in between.
Sticking to safe routines is the safest way to make things worse. You can’t stop a bomb from exploding by repairing or improving it, because it’s a bomb and not a bomb-explosion-preventing machine. If the system weren’t working, it wouldn’t be prospering and booming. But the success that gives you the cookies, calls you a good doggy, and trains you to stick with it is just barely enough to keep you going; the machine as a whole is doing too much of what you don’t want it to do and diverting resources from what you do want it to do. Success is feeding failure until failure becomes its doom. First question before all questions: what exactly do you want it to do?
One possible response to research fraud could be an AI “foundation” model for the research literature. Foundation models integrate broad capabilities in a subject area – here, the scholarly literature – and can be adapted to various “downstream” tasks through fine-tuning. A foundation model trained on the literature of a given field would know the research papers of that area and could classify and compare new submissions against them. Such a model could carry out extensive reviews and deliver a substantive report on a paper.
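One downstream task such a model might automate is reference-relevance screening: scoring how related each cited work is to the manuscript citing it, so that reviewers can prioritize suspiciously unrelated references. This is a stdlib-only, bag-of-words sketch of that idea; a real foundation model would use learned embeddings instead of raw word counts, and all texts below are invented.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words count vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

manuscript = "minimal model of resolution of quotient singularities"
cited = "crepant resolution of quotient singularities in dimension three"
unrelated = "deep learning for protein folding prediction"

# A relevance screen might flag references whose score is near zero.
print(cosine_similarity(manuscript, cited)
      > cosine_similarity(manuscript, unrelated))  # True
```

The design point is not the toy metric but the workflow: scores rank references for human attention rather than making accept/reject decisions, since low lexical overlap alone does not prove a citation is irrelevant.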