The pandemic has been an education, for many non-scientists, in how imperfect science can be. Some treatments for COVID-19 rode one wave of promising research only to be thrown down by the next. Those reversals have occasionally, for the lay public, created distrust in researchers and institutions. But for Nicholas Holmes, a neuroscientist at the University of Nottingham in the United Kingdom, the answer is to be more public about failures in science, not less.

Holmes has practiced what he preaches. On Good Friday, a day traditionally given over to reflecting on sins, he took to Twitter and critiqued each one of his own 57 published papers. He noted errors and biases in who was credited as an author. In some cases, he shared qualms about the process or result. While reactions to these admissions were mixed, Holmes hopes, as he stated in a recent essay in Nature, that poking holes in the illusion of scientific infallibility will have a beneficial effect on the field and its practitioners.

Q: Around the time you published your letter, a team of Dutch researchers released data from a study about research integrity. More than half of the scientists surveyed said that they had engaged in questionable research practices, with almost one in 10 committing acts that might be considered fraudulent. What do you think drives researchers to act this way?

A: You can’t overstate the pressure of publication on scientific careers. I’ve worked in places where it was just the institutional folklore that, if you want a faculty position, you’ve got to have a paper in Nature or Science. That was said in the coffee rooms and it was implied in meetings. So when a junior researcher hears this, no matter how passionate she is about the principles of good science, she knows that if she doesn’t get the right results, if she doesn’t get the papers, she can’t progress in a scientific career.

But the idea that a career should depend on certain work being published in particular journals is a counterproductive goalpost for all of us. It says nothing about what kind of science we should be doing or how we should report our data. And it steers us towards well-established biases. We know that research is more likely to be published if it finds something new and exciting. But it can be just as valuable, sometimes more valuable, to find out you were wrong, or that a leading hypothesis in the field is wrong.

Q: How did you come up with the idea for your Good Friday thread that critiqued your own papers?

A: In November there had been a rather dramatic episode in the open science community. These are people working towards more transparency in how science is funded, who gets credit for it and how research data gets shared. One project, Curate Science, involved a platform that would evaluate a paper across a number of these metrics. It was a very good idea, funded by the European Union. Then the architects of this platform went a step further. They created a leaderboard of scientists who were ranked based on how transparent their work was, how much they shared data, disclosed conflicts of interest and so on.

That became an effort, essentially, to shame people into doing better science. As you can imagine, this didn’t go down well. Even the top names on the leaderboard felt they were being pressured into proving they were “open” enough. For some reason it reminded me of the stoning scene in the Monty Python film Life of Brian. The mob turns on a dime, stoning whoever commits a tiny new infraction of the rules. Most of us would say that anyone who is going to start throwing stones at other researchers should be without sin themselves. And I don’t think most of us, as that Dutch survey shows, are completely without sin with regard to our own research.

So I happened to find myself alone on a Friday evening and thought, “What if I preempt the Spanish Inquisition? If I dig into my research history, what is the worst I would find?”

Q: You’ve referenced something called the Loss of Confidence project, where psychology researchers can go to share significant doubts about their past published research. Most of your Twitter thread about your work cites minor objections, but are there any of your papers you would declare a total “loss of confidence” in?

A: My only fMRI [functional magnetic resonance imaging] paper so far: I wouldn’t mind if that was stricken from the record. It was all quite open and I’ve been happy to share the data on it. In my series of tweets, I said that I doubted it would replicate and that the effects are quite weak. And I hadn’t pre-specified the analysis I ended up doing, which is not good science. I don’t think I need to retract it yet, although maybe someone will ask me to do so. But I don’t place great confidence in that result.

Q: Can institutions—universities, private industry or publishers—do more to promote an environment of self-critique?

A: Our first goal should be to get rid of some of the perverse incentives that cause people to publish science that isn’t sound. The metrics we use to assess people for hiring decisions, the bean-counting approach of how many papers and how many citations you’ve got, have got to go. It’s very easy just to take a number from a third-party supplier that ranks how important a journal is, match that number to your candidate’s CV and let that number shape your shortlist. But there are many excellent scientists whose work is just much less popular than others’, who won’t perform well under that rubric, and it has nothing to do with how brilliant they are.

We would be better off with some assessment of the quality of a scientist’s work. I admit that’s harder. But suppose the hiring committee or the admissions board for PhD candidates did, say, a deep read of one piece of work from each applicant. We could look at its complete data and do a real, critical investigation of that person’s process. At the very least that would signal the change that we want. It does create more work, but I think it’s useful work.

Q: You’ve also suggested something like a “negative CV”—that scientists create a list of their failures. How would that be useful?

A: We’ve all seen successful academics give talks. We introduce the professor from Harvard with an endless string of successful grants, an expert with 40 years of good work on a topic. And graduate students in the audience get the impression that, to be a good scientist, you just have to be extremely successful from start to finish. What if that person presented an honest list of disappointments? You know, “Here are all the times I’ve failed.” It would be nice to normalize the fact that only one in 10 grants gets funded, that each paper may go through two or three or four journals before publication, and that maybe half of your experiments don’t produce meaningful results. If some successful people presented a negative CV to young scientists, that could be a good start. And I’ve seen people do that. It helps to model the change in attitudes that we want to see.

Q: The pandemic revealed a major shift in public confidence in the sciences. People have voiced doubts about vaccines, masking and various treatments, often citing the fact that not all formally conducted studies agree. If scientists admit errors more publicly, doesn’t that make the problem of public confidence worse?

A: It seems not. A recent paper found that self-criticism and self-correction actually increase the public’s trust in science. That seems intuitive to me and fits the old saying: believe those who say they seek the truth, but doubt those who say they’ve found it. The skeptics we hear about in the media, those who doubt climate change, the effectiveness of vaccines or even the existence of COVID-19, I don’t worry about too much. There have always been conspiracy theorists and cranks, but we should remember they’re a small minority. Scientists just need to keep putting out their unvarnished findings along with any uncertainty. We’ll win through in the end.