Million-dollar project aims to expose flawed medical research

A new effort by the nonprofit behind Retraction Watch is targeting flawed or fraudulent medical research with nearly $1 million in funding.
The Center for Scientific Integrity has just launched the Medical Evidence Project, a two-year effort to identify published medical research that distorts health guidelines and to make sure people actually hear about it.
Backed by $900,000 from Open Philanthropy and staffed by a core team of five investigators, the project will use forensic metascience tools to identify problems in scientific papers and report its findings through Retraction Watch, the leading watchdog site for scientific misconduct.
“We originally created the Center for Scientific Integrity as a home for Retraction Watch, but we always hoped we could do more in the field of research accountability,” Ivan Oransky, executive director of the center and co-founder of Retraction Watch, said in a post announcing the grant. “The Medical Evidence Project allows us to support critical analyses and disseminate the findings.”
According to Nature, such flawed or fraudulent papers are troubling because they feed into meta-analyses, which combine findings from multiple studies to draw statistically stronger conclusions. If even one or two bad studies are included in a meta-analysis, they can skew health policy at scale.
In 2009, for example, European guidelines recommended the use of beta-blockers during non-cardiac surgery, based largely on research that was later called into question. Years later, an independent review estimated that the guideline may have contributed to 10,000 deaths per year in the UK.
Led by James Heathers, a scientific integrity consultant, the team plans to build software tools, chase leads from anonymous whistleblowers, and pay peer reviewers to check its work. The goal is to identify at least 10 flawed meta-analyses each year.
The team is choosing its moment wisely. As Gizmodo previously reported, AI-generated junk science is flooding the academic ecosystem, turning up in everything from conference proceedings to peer-reviewed journals. A study published in the Harvard Kennedy School’s Misinformation Review found that two-thirds of sampled papers retrieved via Google Scholar contained signs of GPT-generated text, even in mainstream scientific venues. About 14.5% of the suspect papers focused on health.
This is especially alarming because Google Scholar does not distinguish between peer-reviewed research and preprints, student papers, or other less rigorously vetted work. And once such papers are folded into meta-analyses or cited by clinicians, the consequences are difficult to untangle. “If we can’t trust that the research we read is genuine, we risk making decisions based on misinformation,” one researcher told Gizmodo.
We’ve already seen how this kind of nonsense slips through. In 2021, Springer Nature retracted more than 40 papers from its Arabian Journal of Geosciences, studies so incoherent they read like AI-generated Mad Libs. Just last year, the publisher Frontiers had to pull a paper featuring anatomically impossible AI-generated images.
We have entered the age of the digital fossil, in which AI models trained on scraped web data begin to preserve and propagate nonsense phrases as if they were real scientific terms. Earlier this year, for example, a group of researchers found a garbled phrase traced back to a 1959 biology paper embedded in the output of large language models, including OpenAI’s GPT-4o.
In this climate, the Medical Evidence Project’s mission feels less like cleanup and more like triage. The team is up against a mass of flawed information hiding in plain sight, much of which could have very real health consequences if taken at face value.