Sunday, April 21, 2024

Scientific Fraud Is Slippery to Catch—but Easier to Combat

Like much of the internet, PubPeer is the sort of place where you might want to be anonymous. There, under randomly assigned taxonomic names like Actinopolyspora biskrensis (a bacterium) and Hoya camphorifolia (a flowering plant), “sleuths” meticulously document mistakes in the scientific literature. Though they write about all sorts of errors, from bungled statistics to nonsensical methodology, their collective expertise is in manipulated images: clouds of protein that show suspiciously crisp edges, or identical arrangements of cells in two supposedly distinct experiments. Sometimes, these irregularities mean nothing more than that a researcher tried to beautify a figure before submitting it to a journal. But they nevertheless raise red flags. 

PubPeer’s rarefied community of scientific detectives has produced an unlikely celebrity: Elisabeth Bik, who uses her uncanny acuity to spot image duplications that would be invisible to practically any other observer. Such duplications can allow scientists to conjure results out of thin air by Frankensteining parts of many images together or to claim that one image represents two separate experiments that produced similar results. But even Bik’s preternatural eye has limitations: It’s possible to fake experiments without actually using the same image twice. “If there’s a little overlap between the two photos, I can nail you,” she says. “But if you move the sample a little farther, there’s no overlap for me to find.” When the world’s most visible expert can’t always identify fraud, combating it—or even studying it—might seem an impossibility. 

Nevertheless, good scientific practices can effectively reduce the impact of fraud—that is, outright fakery—on science, whether or not it is ever discovered. Fraud “cannot be excluded from science, just like we cannot exclude murder in our society,” says Marcel van Assen, a principal investigator in the Meta-Research Center at the Tilburg School of Social and Behavioral Sciences. But as researchers and advocates continue to push science to be more open and impartial, he says, fraud “will be less prevalent in the future.”

Alongside sleuths like Bik, “metascientists” like van Assen are the world’s fraud experts. These researchers systematically track the scientific literature in an effort to ensure it is as accurate and robust as possible. Metascience has existed in its current incarnation since 2005, when John Ioannidis—a once-lauded Stanford University professor who has recently fallen into disrepute for his views on the Covid-19 pandemic, such as a fierce opposition to lockdowns—published a paper with the provocative title “Why Most Published Research Findings Are False.” Small sample sizes and bias, Ioannidis argued, mean that incorrect conclusions often end up in the literature, and those errors are too rarely discovered, because scientists would much rather further their own research agendas than try to replicate the work of colleagues. Since that paper, metascientists have honed their techniques for studying bias, a term that covers everything from so-called “questionable research practices”—failing to publish negative results or applying statistical tests over and over again until you find something interesting, for example—to outright data fabrication or falsification.

They take the pulse of this bias by looking not at individual studies but at overall patterns in the literature. When smaller studies on a particular topic tend to show more dramatic results than larger studies, for example, that can be an indicator of bias. Smaller studies are more variable, so some of them will end up being dramatic by chance—and in a world where dramatic results are favored, those studies will get published more often. Other approaches involve looking at p-values, numbers that indicate whether a given result is statistically significant or not. If, across the literature on a given research question, too many p-values seem significant, and too few are not, then scientists may be using questionable approaches to try to make their results seem more meaningful.
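The small-study pattern described above can be made concrete with a toy simulation. In this sketch (an illustration of the general idea, not any particular metascientist's method), the true effect is zero, but only statistically significant results get published. Because small studies are noisier, the small studies that clear the significance bar report far more dramatic effects than the large ones—exactly the asymmetry that flags bias.

```python
import random
import statistics

# Toy model: the true effect is zero, but journals only publish
# "significant" results (p < .05, i.e. |z| > 1.96). Among published
# studies, the small ones will look the most dramatic.
random.seed(42)

def run_study(n):
    """Simulate one study of n subjects when the true effect is zero."""
    data = [random.gauss(0, 1) for _ in range(n)]
    mean = statistics.fmean(data)
    se = statistics.stdev(data) / n ** 0.5
    return abs(mean), abs(mean / se) > 1.96  # (effect size, significant?)

published = {20: [], 200: []}
for n in published:
    while len(published[n]) < 100:
        effect, significant = run_study(n)
        if significant:  # publication bias: only significant results appear
            published[n].append(effect)

small = statistics.fmean(published[20])
large = statistics.fmean(published[200])
print(f"mean published effect, n=20:  {small:.2f}")
print(f"mean published effect, n=200: {large:.2f}")
```

Even though no individual researcher here has done anything dishonest, the published record shows small studies with inflated effects—a pattern visible only when you look across the literature rather than at any single paper.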

But those patterns don’t indicate how much of that bias is attributable to fraud rather than dishonest data analysis or innocent errors. There’s a sense in which fraud is intrinsically unmeasurable, says Jennifer Byrne, a professor of molecular oncology at the University of Sydney who has worked to identify potentially fraudulent papers in cancer literature. “Fraud is about intent. It’s a psychological state of mind,” she says. “How do you infer a state of mind and intent from a published paper?” 


To make matters more complicated, fraud means different things to different people; common scientific practices like omitting outliers from data could, technically speaking, be considered fraud. All of this makes fraud devilishly difficult to measure, so experts often end up disagreeing about how common it actually is—and fraud researchers are an opinionated bunch. Bik speculates that 5 to 10 percent of scientific papers are fraudulent, whereas Daniele Fanelli, a metascientist at the London School of Economics, thinks the true rate could be under 1 percent. To try to get a handle on this frequency, researchers can track retractions, cases in which journals remove a paper because it is irremediably flawed. But very few papers actually meet this fate—as of January 3, the blog Retraction Watch has reported only 3,276 retractions out of the millions of papers published in 2021. Around 40 percent of retractions are due to honest errors or to forms of scientific misconduct that fall short of fraud, like plagiarism.

Because retractions are such an indirect measure of fraud, some researchers go straight to the source and poll scientists. Based on several published surveys, Fanelli has estimated that about 2 percent of scientists have committed fraud during their careers. But in a more recent anonymous survey of scientists in the Netherlands, 8 percent of respondents admitted to committing at least some fraud in the past three years. Even that figure may be low: Perhaps some people didn’t want to admit to scientific misdeeds, even in the safety of an anonymous survey.

But the results aren’t as dire as they might seem. Just because someone has committed fraud once doesn’t mean they always do so. In fact, scientists who admit to questionable research practices report that they engage in them in only a small minority of their research. And because the definition of fraud can be so unclear, some of the researchers who said they committed fraud might have been following common practices—like removing outliers according to accepted metrics.
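The "accepted metrics" for removing outliers are typically simple, pre-specified statistical rules. One common convention (used here as an illustrative assumption; the article doesn't name a specific rule) is to drop points that fall more than 1.5 interquartile ranges outside the quartiles:

```python
import statistics

def remove_outliers_iqr(values, k=1.5):
    """Drop points more than k interquartile ranges beyond the quartiles.

    A standard, defensible cleaning step -- the line between routine
    practice and misconduct lies largely in whether the rule is chosen
    and reported before looking at the results, not after.
    """
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

data = [9.8, 10.1, 10.3, 9.9, 10.0, 27.4]  # one wild measurement
print(remove_outliers_iqr(data))  # the 27.4 reading is dropped
```

Applied transparently and decided in advance, this is ordinary data hygiene; applied selectively after seeing which points hurt the desired result, the same operation shades into the misconduct the surveys ask about.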

In the face of this frustrating ambiguity, in 2016 Bik decided to try to figure out the extent of the fraud problem by being as systematic as possible. She and her colleagues combed through a corpus of more than 20,000 papers looking for image duplications. They identified problems in about 4 percent of them. In more than half of those cases, they determined that fraud was likely. But those results only account for image duplication; if Bik had looked for numerical data irregularities, the number of problematic papers she caught would probably have been higher.

The rate of fraud, though, is less consequential than how much of an effect it has on science—and there, experts can’t agree either. Fanelli, who used to focus much of his research on fraud but now spends most of his time on other metascientific questions, thinks there’s not much to worry about. In one study, he found that retracted papers made only a small difference to the conclusions of meta-analyses, studies that try to ascertain the scientific consensus about a particular topic by analyzing large numbers of articles. As long as there’s a substantial body of work on a particular subject, a single paper typically won’t shift that scientific consensus much.


Van Assen agrees that fraud is not the most important threat to scientific research. “Questionable research practices”—like repeating an experiment until you get a significant result—“are also horrible. And they are much more common. So we shouldn’t focus too much on fraud,” he says. In the Dutch survey, about half of researchers admitted to engaging in questionable research practices—six times as many as admitted to fraud.

Others, though, are more worried—Byrne is particularly concerned about paper mills, organizations that generate fake papers en masse and then sell authorships to scientists looking for a career boost. In some small subdisciplines, she says, fraudulent papers outnumber genuine ones. “People will lose faith in the whole process if they know that there’s a lot of potentially fabricated research, and they also know that no one’s doing anything about it,” she says.

As hard as she and her PubPeer compatriots try, Bik is never going to be able to rid the world of scientific fraud. But, to keep science working, she doesn’t necessarily need to. After all, there are countless papers that are totally honest and also totally incorrect: Sometimes researchers make errors, and sometimes what looks like a genuine pattern is just random noise. That’s why replication—redoing a study as accurately as possible to see if you get the same results—is such an essential part of science. Conducting replication studies can mitigate the effects of fraud, even if that fraud is never explicitly identified. “It’s not foolproof or super efficient,” says Adam Marcus, who, together with Ivan Oransky, founded Retraction Watch. But, he continues, “it’s the most effective mechanism we have.”

There are ways to make replication an even more effective tool, Marcus says: Universities could stop rewarding scientists only for publishing lots of high-profile papers and start rewarding them for conducting replication studies. Journals could respond more quickly when evidence indicates the possibility of fraud. And requiring scientists to share their raw data or accepting papers on the basis of their methods rather than their results would make fraud more difficult and less rewarding. As those practices get more popular, Marcus says, science gets more resilient. “Science is supposed to be self-correcting,” Marcus says. “And we’re watching it correct itself in real time.”
