Brief preface: I had some misgivings about writing this article. It seems to come from a place of indignation, and I would much rather cultivate more positive thoughts and emotions. But the implications regarding the state of modern science, and yes, my indignation, were too much to dissuade me from writing. In the end, it’s a story worth sharing, because the implications are worth our consideration, and it’s worth thinking about what we might do to improve situations like this. I have done my best to present it in an impartial manner.
I don’t write much about current events, but two stories have come up in the news in the past year that both touch me personally in some small way. The more recent of the two involves the resignation of Stanford University president Marc Tessier-Lavigne. This drew my attention partly because the children of two close friends of mine have expressed interest in going to college at Stanford. The reason for his resignation was the discovery of manipulated and falsified data in multiple scientific papers that he co-authored, all in the area of Alzheimer’s research. Theo Baker at the Stanford Daily has done an excellent job investigating and reporting on this story in a series of articles for that paper.
The manipulated and falsified data was discovered because of the fishy figures. Commonly, bands in western blots were duplicated, even though they purportedly represented different experimental results. If this happened once, it could be the result of a simple mistake. Research scientists, however, typically prepare their submissions for publication to major journals very carefully, making accidental mixups in figures less likely. Given that there is a pattern of these anomalies across multiple papers, a much more reasonable explanation is that the data was falsified. If so, it’s a rather sloppy form of misdirection, since no two bands on western blots ever look alike, and both copies of the copy-pasted bands are right there in the same paper for anyone looking carefully enough to see.
Here’s a list of some of the papers in question. The PubPeer links give a good sense of just what has been exposed as fabricated or potentially fabricated.
Stein, Elke, and Marc Tessier-Lavigne. "Hierarchical organization of guidance receptors: silencing of netrin attraction by slit through a Robo/DCC receptor complex." Science 291.5510 (2001): 1928-1938.
734 citations
Stein, Elke, Yimin Zou, Mu-ming Poo, and Marc Tessier-Lavigne. "Binding of DCC by netrin-1 to mediate axon guidance independent of adenosine A2B receptor activation." Science 291.5510 (2001): 1976-1982.
312 citations
Serini, Guido, Donatella Valdembri, Sara Zanivan, Giulia Morterra, Constanze Burkhardt, Francesca Caccavari, Luca Zammataro, Luca Primo, Luca Tamagnone, Malcolm Logan, Marc Tessier-Lavigne, Masahiko Taniguchi, Andreas W. Püschel and Federico Bussolino. "Class 3 semaphorins control vascular morphogenesis by inhibiting integrin function." Nature 424.6947 (2003): 391-397.
647 citations
Bechara, Ahmad, Homaira Nawabi, Frédéric Moret, Avraham Yaron, Eli Weaver, Muriel Bozon, Karima Abouzid, Jun-Lin Guan, Marc Tessier-Lavigne, Vance Lemmon, and Valérie Castellani. "FAK–MAPK‐dependent adhesion disassembly downstream of L1 contributes to semaphorin3A‐induced collapse." The EMBO journal 27.11 (2008): 1549-1562.
136 citations
Nikolaev, Anatoly, Todd McLaughlin, Dennis DM O’Leary, and Marc Tessier-Lavigne. "APP binds DR6 to trigger axon pruning and neuron death via distinct caspases." Nature 457.7232 (2009): 981-989.
1271 citations
The citation counts come from Google Scholar, as of July 2023. The number of citations a paper receives is a common metric used as an indicator of how influential it is. The average number of citations for a scientific paper is roughly 10, so the papers in question are highly influential: they have a strong influence on the direction of future research. Millions of research dollars, and many person-years of effort, are misdirected away from other avenues of research and into research following up on questionable leads. Of the five papers I listed above, two are from Nature and two from Science, probably the two most prestigious scientific journals in print.
For a small taste of the influence these dubious studies have had, check out this Nature article, Alzheimer's theory makes a splash. It gushes about the 1271-citation paper listed above:
The Alzheimer's research community is buzzing about a theory suggesting that a close relative of the beta-amyloid protein, and not necessarily beta-amyloid itself — the long-standing suspect — may be a major culprit in the disease.
Theo Baker also reports on the excitement generated from this research:
Two days after the 2009 paper came out, Genentech’s1 letter to shareholders called it “groundbreaking basic research about an entirely new way of looking at the cause of Alzheimer’s disease.” At the time, Paul Greengard, a Nobel Laureate, called it “a very exciting paper,” saying “it’s going to have a major impact on the Alzheimer’s field.” And the news wing of Nature published an article entitled “Alzheimer’s theory makes a splash.”
People within the company were similarly excited, at first. One of the senior scientists recalled that Tessier-Lavigne’s initial presentation of the research left the room stunned. “This came out of nowhere,” said the scientist. Another senior executive in the room said, “We all thought: holy shit. This is Nobel Prize stuff…It was the miracle result.”
Apparently, the older theories they had on the causes of Alzheimer’s failed to turn up any effective medications for the disease:
“We have yet to get a disease-modifying drug that works. So we're missing something, and maybe this is one of the missing pieces," says Donna Wilcock, a neurologist at Duke University in Durham, North Carolina.
Of course, people started getting suspicious after the follow-up research based on the results of this work not only failed to produce any effective new drugs, but failed to corroborate the original results.
This does not reflect well on Marc Tessier-Lavigne, nor on any of the authors of these papers, nor on Science or Nature, nor on Stanford University. Indeed, it reflects poorly on modern-day science itself. This is at a time when, by some estimates, more than half of scientific studies are not reproducible. There are many reasons for this, but it does make you wonder just how widespread fraudulent scientific studies are. And this is not an isolated case. Theo Baker mentions a few other examples in passing in one of his articles:
Silvia Bulfone-Paus, a prominent German researcher, was forced to step down as the director of the Borstel Institute in 2011 after image manipulation was found in several of her papers (Bulfone-Paus blamed two of her post-doc researchers). Carlo Croce, an Ohio State University professor, was beset with similar allegations in 2017 — an official review conducted by the university found earlier this year that he had not manipulated imagery himself, but the professor was disciplined over “management problems,” and two of his researchers, who were determined to have made the falsifications, were dismissed. And Gregg Semenza, a Nobel-prize-winning scientist, retracted 17 papers after allegations were made on PubPeer.
When I read about Tessier-Lavigne’s resignation as president of Stanford University, it brought to mind a story I had heard a few months back, regarding Sylvain Lesné, a professor at the University of Minnesota. Living in the suburbs of Minneapolis, I know a few people who work at the University, and many people who graduated from there. The similarities between these two stories are remarkable. Everything in the following paragraph applies to both:
Our main character is a scientific researcher at a distinguished university, studying Alzheimer’s disease2. They publish a paper in the journal Nature — perhaps the most prestigious scientific journal there is — purporting to demonstrate a ground-breaking discovery in the biological processes of the disease. The paper generates excitement in the field, and produces an abundance of citations and follow-up research. But the excitement wears off as the follow-up research fails to produce results. This leads people to step back and take a closer look at the paper, and they discover that some of the data presented therein looks suspicious. Bands from western blots that purportedly come from samples under different conditions turn out to be identical. An impossible result. Even two blots run on the same sample will not produce identical bands. Other works from the authors of the paper are further scrutinized, and more questionable figures are found.
Here’s the breakthrough article from UMN, where Lesné is first author:
Lesné, Sylvain, Ming Teng Koh, Linda Kotilinek, Rakez Kayed, Charles G. Glabe, Austin Yang, Michela Gallagher, and Karen H. Ashe. "A specific amyloid-β protein assembly in the brain impairs memory." Nature 440.7082 (2006): 352-357.
3383 citations
This paper made a big splash, as Charles Piller3, an investigative reporter for Science magazine, reports:
Ashe touted Aβ*56 on her website as “the first substance ever identified in brain tissue in Alzheimer’s research that has been shown to cause memory impairment.” An accompanying editorial in Nature called Aβ*56 “a star suspect” in Alzheimer’s. Alzforum, a widely read online hub for the field, titled its coverage, “Aβ Star is Born?” Less than 2 weeks after the paper was published, Ashe won the prestigious Potamkin Prize for neuroscience, partly for work leading to Aβ*56.
Fabricated data was discovered and reported in Lesné’s papers by Matthew Schrag, himself a research scientist studying Alzheimer’s disease, Elisabeth Bik, who seems to be commenting on PubPeer multiple times a day, and other members of the scientific community who mostly remain anonymous. Along with the Nature paper referenced above, they found questionable figures in the following three papers produced by the same lab:
Chiang, Angie CA, Stephanie W. Fowler, Rohit Reddy, Olga Pletnikova, Juan C. Troncoso, Mathew A. Sherman, Sylvain E. Lesne, and Joanna L. Jankowsky. "Discrete pools of oligomeric amyloid-β track with spatial learning deficits in a mouse model of Alzheimer amyloidosis." The American Journal of Pathology 188.3 (2018): 739-756.
17 citations
Fowler, Stephanie W., Angie CA Chiang, Ricky R. Savjani, Megan E. Larson, Mathew A. Sherman, Dorothy R. Schuler, John R. Cirrito, Sylvain E. Lesné, and Joanna L. Jankowsky. "Genetic modulation of soluble Aβ rescues cognitive and synaptic impairment in a mouse model of Alzheimer's disease." Journal of Neuroscience 34.23 (2014): 7871-7885.
83 citations
Larson, Megan, Mathew A. Sherman, Fatou Amar, Mario Nuvolone, Julie A. Schneider, David A. Bennett, Adriano Aguzzi, and Sylvain E. Lesné. "The complex PrPc-Fyn couples human oligomeric Aβ with pathological tau changes in Alzheimer's disease." Journal of Neuroscience 32.47 (2012): 16857-16871.
313 citations
As in the case of Tessier-Lavigne, these fabricated results led to a massive waste in resources. Charles Piller reports:
“The immediate, obvious damage is wasted NIH funding and wasted thinking in the field because people are using these results as a starting point for their own experiments,” says Stanford University neuroscientist Thomas Südhof, a Nobel laureate and expert on Alzheimer’s and related conditions.
Of course, the research didn’t pan out:
Hundreds of clinical trials of amyloid-targeted therapies have yielded few glimmers of promise, however; only the underwhelming Aduhelm has gained FDA approval. Yet Aβ still dominates research and drug development. NIH spent about $1.6 billion on projects that mention amyloids in this fiscal year, about half its overall Alzheimer’s funding. Scientists who advance other potential Alzheimer’s causes, such as immune dysfunction or inflammation, complain they have been sidelined by the “amyloid mafia.” Forsayeth says the amyloid hypothesis became “the scientific equivalent of the Ptolemaic model of the Solar System,” in which the Sun and planets rotate around Earth.
In a strange tit-for-tat reaction, Schrag’s own research came under scrutiny when he went public with his concerns. Schrag anticipated this, but was confident that the research he had done was sound and that he would come up clean. To his dismay and surprise, three papers that he co-authored while working as an undergraduate in the lab of Othman Ghribi turned up with similarly problematic figures, including, once again, copy-pasted bands from western blots. These are the three papers called into question:
Ghribi, Othman, Mikhail Y. Golovko, Brian Larsen, Matthew Schrag, and Eric J. Murphy. "Retracted: Deposition of iron and β‐amyloid plaques is associated with cortical cellular damage in rabbits fed with long‐term cholesterol‐enriched diets." Journal of Neurochemistry 99.2 (2006): 438-449.
149 citations
Schrag, Matthew, Sunita Sharma, Holly Brown‐Borg, and Othman Ghribi. "Hippocampus of Ames dwarf mice is resistant to β‐amyloid‐induced tau hyperphosphorylation and changes in apoptosis‐regulatory protein levels." Hippocampus 18.3 (2008): 239-244.
43 citations
Ghribi, Othman, Brian Larsen, Matthew Schrag, and Mary M. Herman. "High cholesterol content in neurons increases BACE, β-amyloid, and phosphorylated tau levels in rabbit hippocampus." Experimental neurology 200.2 (2006): 460-467.
198 citations
Schrag reported that he was able to get an admission of responsibility from Ghribi:
According to Schrag, in a phone conversation the senior scientist emotionally acknowledged “problems” in many of his papers, including those two co-authored with Schrag, and accepted responsibility. During their discussion, which Schrag recounted to Science, Ghribi maintained to his former student that the underlying findings were correct but admitted to exaggerating data. “I’m nauseated talking about it,” Schrag says.
And just for fun, here are a couple of papers from yet another Alzheimer’s research lab with figures that have been called into question:
Sripetchwandee, Jirapas, Juthamas Khamseekaew, Saovaros Svasti, Somdet Srichairatanakool, Suthat Fucharoen, Nipon Chattipakorn, and Siriporn C. Chattipakorn. "Deferiprone and efonidipine mitigated iron-overload induced neurotoxicity in wild-type and thalassemic mice." Life sciences 239 (2019): 116878.
11 citations
Ongnok, Benjamin, Thawatchai Khuanjing, Titikorn Chunchai, Patcharapong Pantiya, Sasiwan Kerdphoo, Busarin Arunsak, Wichwara Nawara, Thidarat Jaiwongkam, Nattayaporn Apaijai, Nipon Chattipakorn and Siriporn C. Chattipakorn. "Donepezil protects against doxorubicin-induced chemobrain in rats via attenuation of inflammation and oxidative stress without interfering with doxorubicin efficacy." Neurotherapeutics 18 (2021): 2107-2125.
23 citations
So what’s going on here? Do these papers contain manipulated data? Were the manipulations intentional? Do they invalidate the conclusions reached by these papers? I would like to say that you should come to your own conclusions, but unfortunately, reading through papers like these, and commentaries on them, is probably pretty difficult and time-consuming for someone who is not a biologist. It’s not impossible, but pretty difficult.4 Because most of us don’t have the time or energy to do this, we tend to fall back on a “trust the experts” mentality. But the experts may or may not be reliable and trustworthy. Many people set the question of the trustworthiness of the experts to the side, because not being able to trust them often leaves people with feelings of helplessness. But it’s really not that hopeless. It’s just another area where we don’t have complete and total knowledge. Situations like this are part of our regular experience in life, and can easily be approached from a position of joy, hope, and curiosity.
I’m fortunate to have been able to work for over a decade in a supporting role in research labs like the ones that produced these papers. I was very interested in the science involved, and taught myself a lot about it. I paid attention to what was going on around me in the lab, and read a lot of papers like the ones I have cited above. I have carefully reviewed all the comments and concerns about these papers expressed on PubPeer in the links above. I’ll provide my opinions here.
Every one of the 14 papers I’ve cited above contains image manipulations that are, from a purist point of view, antithetical to the honest sharing of unadulterated data in support of one’s conclusions. In a handful of cases, some image manipulations seem to exist simply to make the paper look pretty. For instance, some background noise might be added along the horizontal edges of a figure to make it the same size as adjacent figures, so that they all line up in a nice grid. I don’t personally like this practice, because I would rather look at unadulterated data and forsake the neat and tidy layout. But all told, these kinds of image manipulations are relatively benign.
Almost all of these papers contain images, or crops of images, that were copied from other images presented in the same paper, yet purport to represent different experimental conditions.5 In a small number of cases, these copies might be explained away as a mistake somewhere in the process of getting the paper to print. But in most cases, there are signs that the copies were not mistakes. For instance, some images are not just copied, but spliced into a separate image. In other words, somebody spent a lot of time in an image editing tool such as Photoshop constructing the figure from various parts. Oftentimes, there were attempts to disguise these edits, such as by adjusting the contrast so that the spliced parts look more contiguous. In many cases, it appears that attempts were made to disguise the fact that the images were copied, for example by inverting the image or adjusting the saturation.
In some cases, edits are made to add or remove elements to or from an image. For instance, some bands are entirely removed from western blots — hence fabricating the absence of a certain protein — using an image editor. This kind of image editing often leaves behind an evidence trail, which commenters on PubPeer are able to discover and expose.
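For readers curious about what “spotting a duplicated band” actually involves, here is a minimal, hypothetical sketch of one way to check whether a crop from one figure reappears, nearly pixel for pixel, in another figure, using normalized cross-correlation from scikit-image. The file names are invented for the example, and this is only an illustration of the general idea; it is not the actual workflow used by PubPeer commenters or forensic image analysts, who rely on a range of tools and on trained eyes.

```python
# Hypothetical illustration: flag a crop (e.g., a western blot band) that
# reappears nearly pixel-for-pixel inside another figure from the same paper.
# The file names are invented for the example.
from skimage import io
from skimage.feature import match_template

def peak_similarity(figure_path: str, crop_path: str) -> float:
    """Peak normalized cross-correlation of a small crop against a figure.

    A value very close to 1.0 means the crop appears essentially unchanged
    somewhere in the figure, which is a red flag if the two images are
    supposed to show different experimental conditions.
    """
    figure = io.imread(figure_path, as_gray=True)
    crop = io.imread(crop_path, as_gray=True)
    correlation = match_template(figure, crop)  # slide the crop over the figure
    return float(correlation.max())

if __name__ == "__main__":
    score = peak_similarity("figure_2a.png", "band_from_figure_3c.png")
    print(f"peak correlation: {score:.4f}")
    if score > 0.99:
        print("Nearly identical region found; worth a closer look.")
```

Needless to say, a high correlation score is only a prompt for human scrutiny, not proof of misconduct on its own.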
In some cases, commenters on PubPeer claim that manipulations were made to validate, or strengthen, the conclusions of the paper. This seems reasonable enough. After all, what reasons would a scientific researcher have for altering their data before sharing it? Making the presentation prettier is one, more benign, reason. Making the evidence more strongly support their claims is another. It’s hard to imagine other reasons. Perhaps they lost some of their data, can’t afford to rerun the experiment, and are trying to reproduce it from memory using Photoshop?
In my opinion, the data has been manipulated in all of these papers. In most cases, there is clear evidence that the manipulations were intentional. And in most cases, these data manipulations were made to strengthen the claims and conclusions made by the paper.
These 14 papers are not isolated island chains peeking out of an ocean of integrity. One meta-analysis concluded that roughly 2% of scientists self-report as having falsified data. Retraction Watch reports multiple times a week on scientific papers that come under scrutiny. Even the badly biased Wikipedia maintains a list of scientific misconduct incidents that includes over 50 entries for biology and the biomedical sciences.
Of course, we only learn about these fabrications when they are exposed. Looking over these figures, I was struck by a certain carelessness on the part of the fabricators. Many papers were exposed due to evidence of image manipulations, such as splicing multiple images together into a single image. I would think it is common knowledge that these kinds of manipulations are fairly easy to spot if someone is looking for them. Someone a bit more careful might avoid the risk of exposure by not using image splicing techniques at all. Many papers were exposed because they used two copies of the same image to represent different experimental conditions. But many images produced during the course of research never make it into the publication. One could easily avoid exposure by only misusing images that do not appear elsewhere in the paper.
Reading through those PubPeer reports, there were a few moments — when looking at some particularly egregious examples of fabricated data — where feelings of outrage and disgust welled up within me. Of course, it is beneficial to act with integrity in whatever you do. But integrity in science is especially important. If a scientist cannot read other scientists’ research, and have some degree of confidence that they are reading an honest account, scientific progress as a whole is slowed. It’s also important that the general population have a strong degree of confidence in the scientific establishment. Otherwise, they will seek the truth from other sources. Some reliable, some not. But without question, the overall depth of understanding suffers.
And given that many scientific endeavors are largely funded by the general population through taxes, mistrust in the scientific establishment leads to disaffection with, and mistrust of, the government.
Obviously, we would prefer that this kind of misdirection did not occur. How did this happen? What can we do about it?
Research scientists are under a lot of pressure to publish, and to publish in the most prestigious journals they can. If they don’t do a good job of this, they have a harder time getting tenure, and they have a harder time getting better grant funding. Less funding leads to less capacity to produce publishable results, in a positive feedback loop. (Not so positive for them, of course.)
What we may be witnessing is a sort of race to the bottom6: there is intense pressure and competition to publish in the best journals; scientists can increase their publication prospects by embellishing their results; scientists who retain strict integrity in turn have lower prospects of getting published; those scientists get less funding, and have lower capacity to continue to perform scientific research; a larger proportion of the research gets done by scientists willing to embellish; because a larger proportion of the scientists are willing to embellish, the competitive advantage gained through their fabrications is diminished; and yet, the whole endeavor is now worse off than before, because the research journals are polluted by more and more dishonest results.
One obvious way to try to prevent this kind of misdirection is by disincentivizing it with the threat and imposition of negative consequences. Yet to my knowledge, not one of the authors of these 14 papers has been subjected to any kind of censure. Tessier-Lavigne was pressured to step down as president of Stanford University, but that seems to be due to negative publicity, largely thanks to the reporting of Theo Baker in the Stanford Daily. It does not seem to be driven by the scientific community. Importantly, Tessier-Lavigne is staying on as a professor at Stanford, and continuing to run his Alzheimer’s research lab.
Another kind of discipline also corrects the scientific record: retracting the problematic papers. Out of the 14 papers discussed here, as of August 2023, only one has been retracted. Four have had corrections issued. Two appear with content warnings, or “editorial expressions of concern.” Three have ongoing investigations by the publishing journal, one of which has been ongoing for well over a year. Four of the papers have had no action taken.
We seem to have lost a sense of outrage over this kind of violation of scientific integrity. I remember a similar case involving MIT professor Luk Van Parijs, back in 2005. He lost his job, had multiple papers retracted, and had to pay back grant funding. He also faced a six-month jail sentence in a federal court case, which was reduced to six months of home detention. At the time, this was a huge deal in the news, and there was outrage. For instance, an article in the New Scientist7 from 2005 starts off:
Of all the issues that New Scientist covers, the most disturbing is bad science. At a time when scientists are fighting as never before for public support against political and religious manipulation, it is demoralising to discover that science is being undermined from within.
They also editorialize, “The toughest issue raised by this case is how to stop similar happenings in future.” I guess we didn’t really make much progress there. Perhaps we have even stopped asking such uncomfortable questions.8
Nowadays, the response of the scientific establishment seems to be to close ranks. That is, they are valuing the appearance of scientific integrity over actual scientific integrity. Firing a fraudulent professor will bring a lot of attention, which might cause a story on data falsification to gain more attention, rather than fading away. Retracting an image from a paper will get a lot less notice than retracting an entire paper. If a scientific journal is going to investigate claims of impropriety, it is better if the investigation drags on as long as possible, so that the final results come out after interest in the story has waned. In each of these examples, actual scientific integrity suffers.
In the case of Lesné, part of the strategy for minimizing the impact on the image of science involves downplaying the importance of his questionable research. Given that so many people bring up all the wasted research funding, this is smart. And of course, it’s best to try to downplay the significance of scientific misconduct as a general phenomenon as well.
For example, Stuart Layt editorializes in the Sydney Morning Herald regarding the Lesné case, saying, “The initial uneasy consensus, however, appears to be that while unfortunate, the apparent falsification of data in the earlier paper does not invalidate the research that has followed.” He does not make clear why this “apparent falsification of data” is “unfortunate”. If he is talking about the breach of scientific integrity, perhaps a word like “scandalous” would be more appropriate. But perhaps he feels it is unfortunate precisely because it became a scandal?
Or consider this editorial from the Alzheimer’s Society, which claims that “Allegations of this kind in research are taken extremely seriously in the research community but are thankfully very rare,” playing down the overall impact of scientific dishonesty. (They make no effort to justify their claim that these kinds of allegations are “very rare”.) They also play down the importance of Lesné’s research:
This study represents a small area of amyloid research and an even smaller area of dementia research. Although these allegations are a concern, there is a huge amount of credible research evidence behind the amyloid protein’s role in the diseases which cause dementia.
Consider also this statement from the National Institute on Aging (NIA), which makes specific mention of the peptide Aβ*56, the subject of Lesné’s dubious papers. It basically says that although follow-up research into Aβ*56 did not bear any fruit, there are still valuable avenues of exploration within the more general class of amyloid beta (Aβ) peptides. Because this statement specifically addresses Aβ*56, and because it was published within a year of Schrag going public with his findings (government bureaucracies are slow movers), it is clearly a response to the scandal involving Lesné’s research. As in the previous examples, it attempts to minimize the importance of the scandal.
But the statement seems to bend over backwards to avoid specific mention of Lesné, the scandal, or scientific improprieties in general. Someone who came across this statement without proper context would be left puzzling over why the statement was even issued. The NIA has avoided drawing any extra attention to the scandal in a very awkward way. They might also be seeking to avoid embarrassment from the fact that the National Institutes of Health (NIH) — the parent agency of the NIA — provided funding for all four of the Lesné papers we’ve looked at. The NIA itself provided funding for one of these papers as well.
Theo Baker reports that during the Stanford University investigation into Tessier-Lavigne, the anonymity of witnesses was not guaranteed:
One person who spoke to the investigators said they asked for their identity not to be disclosed publicly or revealed to Tessier-Lavigne, telling investigators “the consequences of [recounting these events] could be enormous.” The witness said they were told investigators “couldn’t guarantee that.” As a result, they told The Daily that while they did participate in an interview, they withheld details from the investigators out of fear of retribution.
Another witness refused to participate in the investigation at all without anonymity.
There are two telling aspects to this story. First, the very fact that witnesses feared retribution simply for telling the truth as they see it. Without question, there should be mechanisms in place within the scientific community to prevent retribution of any kind.
The other tell is the simple fact that these witnesses were denied guarantees of anonymity. An investigation like this — if it is indeed seeking the truth — should do everything it can to prevent any kind of retribution against the witnesses. Providing anonymity to witnesses is a standard approach to handling these kinds of concerns. But if the investigation was more interested in whitewashing the whole affair, then the approach they took makes perfect sense.
Consider the fact that Schrag expected — correctly — that his own research would come under scrutiny after exposing data manipulation in Lesné’s research. Why would this be? It feels like a form of retribution, but retribution for what? For making efforts in support of scientific integrity? Or for making the scientific community, and specifically the Alzheimer’s research community, look bad?
Just look at what a pretty picture gets presented in this glossy progress report from the NIH on dementia research. In every photo, people either have their biggest smiles on, or they have an expression of engaged curiosity that one might expect from some simulacrum of an ideal scientist.
The entire text of the report trumpets just how much we’ve already accomplished in battling this disease — and how much more we could do with just a little more money. From the introduction, we learn how important the NIH is to the cause:
NIH drives the nation’s research to better understand the complex causes of Alzheimer’s and related dementias, identify early signs of disease, develop effective interventions to prevent or delay disease progression, and improve care and support for those living with dementia as well as their caregivers.
Two paragraphs later, we learn that it is not just generous funding that makes this possible, but generous funding increases:
Thanks to generous federal funding increases, Alzheimer’s and related dementias research has advanced at a remarkable pace.
Given that the budget for the previous fiscal year was $3.9 billion, and they are asking for $321 million more than that for the next funding cycle, it makes sense to portray things in such a Barbie-like state of perfection. Not one of the directors or chiefs at the NIH or NIA wants any black eyes from scandals like those of Lesné or Tessier-Lavigne. They want continued funding, and they want increased funding. If that means burying a scandal on data manipulation at the expense of actual scientific integrity, then if at all possible, they will bury the scandal.
This case study showcases one example of the level of corruption and decay to which modern science has fallen. It is no longer possible to cite any scientific research with any degree of authority. Every scientific paper has to be examined individually, on the basis of both methodology and conclusions, and carefully scrutinized for signs of dishonesty. It is beyond the reach of most people — even if just from the perspective of time commitment — to apply this kind of scrutiny, leaving them a choice between trusting the experts and taking up a position of doubt and agnosticism regarding scientific results. And given the proclivity of the scientific establishment to sacrifice scientific integrity for the sake of appearances, trusting the experts becomes the less attractive option.
Thanks for bearing with me! I’m sure it’s been a long and difficult read. Kudos to you for finishing it!
Posts like this take a great deal of thought and care, and many, many hours of research. If you want to support my work, the best things you can do are share and subscribe. Thank you!
1. At the time the paper was published, Tessier-Lavigne was the senior vice president of Research Drug Discovery at Genentech.
2. It’s worth mentioning that “studying Alzheimer’s” here means doing early-stage research into potential drug targets. As an illustrative example, a scientist might find a correlation between the presence of a certain protein and the development of a certain condition or disease. Researchers would then embark on the process of looking for chemical compounds that interfere with the bioactivity of this protein and are suitable for production as an allopathic medicine.
3. Science will allow you to view three articles before blocking you with a paywall. If this happens to you, and you are unable to pay, you might try opening the articles in private browsing mode to get around it.
4. My approach to understanding scientific papers in an unfamiliar field is roughly as follows: Read the paper once, slowly, looking up any terms or concepts that you don’t understand. Then, read the paper again, slowly, continuing to do background research on unfamiliar terms and concepts. Pay attention to the citations, select two or three that seem important, and read those papers in a similar manner. Then, read the original paper one more time.
Yes, it is time-consuming! But you will be surprised at how much better your level of comprehension is. Reading further papers in the field will become easier, and you can adopt a more streamlined approach. But I recommend you continue to follow up on a couple of the more important citations.
5. In one or two cases, the images were copied from an earlier published paper from the same lab.
6. For a brilliant and extensive analysis of races to the bottom, I highly recommend the essay Meditations on Moloch in the Slate Star Codex blog. You can also find it in audio format.
7. If you need to circumvent a paywall, it’s always worth checking at the Wayback Machine.
8. Another expression of outrage over the Luk Van Parijs case comes from two thoughtful 2005 articles by philosopher Janet D. Stemwedel, who opines, “It goes without saying that a scientist ought not to fabricate or falsify data. Fabrication and falsification suck. These deeds are varieties of deception. Deceiving the people reading your papers in the journals, or the people reading your grant applications and deciding whether to fund your research, is crappy. And, fabrication and falsification suck even more when done in papers on which you have coauthors. You're dragging good scientists down with you. Even when you've been taken out of the game, they still have to worry about corrections, retractions, and the lasting impact on their reputations.”
After I had finalized this essay, but before publishing, this story came out:
https://retractionwatch.com/2023/08/31/stanford-president-retracts-two-science-papers-following-investigation/
Marc Tessier-Lavigne just retracted two papers from Science. We looked at both of these papers in this essay.