Senator Bill Cassidy of Louisiana, a prominent member of the Senate committee that oversees NIH, recently issued a long and thoughtful request for ideas on NIH reform. I just submitted the following:
***
The Good Science Project is pleased to submit these comments on NIH reform, as requested by Sen. Cassidy, Ranking Member of the Senate Health, Education, Labor, and Pensions Committee.
The NIH holds a preeminent place as the top biomedical funder in the world, with many lifesaving discoveries and Nobel Prizes to its credit. But the agency has suffered some blows over its performance during Covid, and has been without a permanent director for nearly two years. Moreover, NIH has not been the subject of major reauthorization legislation since the NIH Reform Act of 2006.
Now is therefore the perfect time to think about NIH reform, so as to set the stage for future legislation dealing with systemic problems plaguing not just NIH but the overall biomedical workforce and ecosystem. We applaud Sen. Cassidy’s efforts in this regard.
Peer Review at NIH
Numerous scholars and observers have attested that NIH-style peer review tends to reward conformist, incremental, and marginal research. Indeed, the Good Science Project has catalogued nearly 20 instances in which NIH refused to fund groundbreaking or even Nobel-winning research. See “Why Science Funders Should Try To Learn From Past Experience,” Good Science Project (Mar. 27, 2023).
The Good Science Project has a few suggestions here.
First, if we want to fix the problems with peer review, NIH should be required to fund a team of independent scholars (such as the Meta-Research Innovation Center at Stanford, or the National Academies) to do a systematic review of what Michael Nielsen calls an “anti-portfolio,” that is, a comprehensive list of notable scientific projects/people whose grant proposals were rejected by NIH.
Then, as much as is feasible, the research team should construct a comparison group of grant proposals that were considered around the same time. If possible, the team should acquire the original proposal materials and peer review comments/ratings both for the missed opportunities and for the comparison group of similar proposals (whether or not they were ultimately funded).
Next, the research team should assess whether there are any patterns in the missed opportunities vs. the comparison group—i.e., can we explain, at a level beyond random noise, why opportunities were missed? Were there any key predictors available at the time that could have been recognized by funders? Or was it just the luck of the draw? Either finding would be useful.
Finally, the research team should draw policy conclusions from where and when science funders missed out on funding the early stages of great scientific work. For example, what experiments with peer review, solicitations, new types of grants or programs, etc., should a funding agency try out so as to increase the likelihood of funding groundbreaking work as part of a broad portfolio? How can a funding agency identify the programs, program managers, and peer reviewers that are performing well, and scale up what works?
NSF should be encouraged to participate in the study as well. Some have argued that NSF is better able to fund “high risk” research—with its merit review that doesn’t look at the investigator’s pedigree, the flexibility given to program officers, the use of rotators from the field, and the organizational culture. It would be enormously valuable to have more empirical evidence on those questions.
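To make the proposed analysis concrete, here is a minimal sketch, in Python, of the kind of test the research team could run once the anti-portfolio and comparison group are assembled. Everything in it is hypothetical: the column names, the synthetic data, and the simple logistic-regression setup are placeholders for whatever features and methods the independent scholars would actually choose.

```python
# Hypothetical sketch: can review-time information predict which rejections
# later turned out to be missed opportunities? (Synthetic data throughout.)
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # hypothetical number of proposals (anti-portfolio + comparison group)

df = pd.DataFrame({
    "priority_score": rng.normal(30, 10, n),           # panel score at the time
    "reviewer_disagreement": rng.normal(1.0, 0.5, n),  # spread across reviewers
    "pi_years_since_phd": rng.integers(2, 40, n),
    "missed_opportunity": rng.integers(0, 2, n),       # 1 = later-vindicated rejection
})

X = df[["priority_score", "reviewer_disagreement", "pi_years_since_phd"]]
y = df["missed_opportunity"]

# If cross-validated accuracy is no better than guessing the majority class,
# the misses look like the luck of the draw; if it is clearly better, there
# were recognizable warning signs that reviewers could have acted on.
model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
print(f"majority-class baseline:  {max(y.mean(), 1 - y.mean()):.2f}")
```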
Second, NIH should join NSF in actively experimenting with different peer review methods. For example, NSF has said that it wants to pilot a funding model in which a grant can be funded if even one reviewer says yes. This has been called the “golden ticket” model, in that each reviewer is given a metaphorical “golden ticket” that they can play at the moment of their choosing. The idea is that groundbreaking ideas often have naysayers at the outset, and we should give more research ideas a chance if there is at least one person who says, “This might fail, but it would be amazing if it worked!”
To be sure, NIH might need to be given more statutory flexibility as to peer review, since 42 U.S.C. § 289a-1(a)(2) says that grants can’t be funded by NIH without a recommendation for approval by a “majority of the members” of the peer review panel, plus a majority of the IC’s advisory council. But even under existing law, there might be ways to handle this issue—perhaps the peer review panel and the advisory council could all agree in advance to give support to a research proposal if one initial reviewer was enthusiastic enough to play their one golden ticket. Or perhaps NIH could fund such grants via Other Transactions Authority.
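To illustrate the difference between the two decision rules (and only to illustrate; the scoring scale, threshold, and data structures below are hypothetical, not NIH’s or NSF’s actual procedures), a golden ticket simply adds one extra way for a proposal to get funded:

```python
# Hypothetical sketch of a "golden ticket" decision rule layered on top of
# a conventional majority vote. Scores and thresholds are placeholders.
from dataclasses import dataclass

@dataclass
class Review:
    score: int                   # e.g., 1 (poor) to 9 (outstanding)
    golden_ticket: bool = False  # the reviewer's one "fund this anyway" flag

def majority_vote(reviews: list[Review], approve_at: int = 6) -> bool:
    approvals = sum(r.score >= approve_at for r in reviews)
    return approvals > len(reviews) / 2

def golden_ticket_rule(reviews: list[Review], approve_at: int = 6) -> bool:
    # Fund if the majority approves OR any single reviewer plays a golden ticket.
    return majority_vote(reviews, approve_at) or any(r.golden_ticket for r in reviews)

# A polarizing proposal: most reviewers are lukewarm, one is enthusiastic.
reviews = [Review(4), Review(5), Review(3), Review(9, golden_ticket=True)]
print(majority_vote(reviews))       # False -- dies under majority rule
print(golden_ticket_rule(reviews))  # True  -- survives on the strength of one champion
```

As noted above, under current law such a rule could not simply replace the panel and advisory council votes; it would have to be implemented through advance agreement or through Other Transactions Authority.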
Third, Congress could consider giving NIH more authority (akin to DARPA or NSF) to fund research proposals based on an expert program officer’s judgment, rather than hinging everything on majority approval. True scientific breakthroughs almost by definition do not come from ideas that are already seen as common wisdom amongst a majority of a given field.
NIH Workforce
Sen. Cassidy asks whether Congress and NIH could “incentivize investigators and key staff to routinely move between the intramural research program, academia, and industry, rather than remaining at NIH for the majority of their careers.”
There is ample precedent for this approach at other science funding agencies. At the National Science Foundation, for example, around half of the program officers are called “rotators,” because they rotate in from academia and then back out after a few years. One potential downside of that approach is that some program officers could be tempted to make funding recommendations with an eye towards their future employment, rather than based on an independent judgment of merit. The upside, though, is a steady flow of new people and new ideas. DARPA accomplishes the same goal (diversity of people and ideas) by imposing a 5-year term limit on program managers; no one can run a program at DARPA forever.
This question therefore overlaps with another of Sen. Cassidy’s questions: should we impose term limits on NIH Institute/Center Directors? Probably so, and we shouldn’t stop there. We should think about imposing term limits on program officers, on scientific review officers (SROs), on study section members at the Center for Scientific Review, and even on entire study sections themselves (an effectively permanent study section can put a stranglehold on any new ideas in that area of research).
All of that said, term limits at NIH would be a new development. We could impose limits on directors today without much upheaval, but for other personnel, it might be wise to start slowly with pilot experiments and the like. For example, we could run a randomized study that imposes term limits on some study sections and SROs but not others, and then examine the results after a few years.
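For what it’s worth, the randomization itself would be trivial to implement. Here is a minimal sketch, with made-up study section names, an arbitrary even split, and a fixed seed so that the assignment is reproducible and auditable:

```python
# Hypothetical sketch of random assignment for a term-limit pilot.
# Section names, the 50/50 split, and the seed are all placeholders.
import random

study_sections = [f"Section-{i:03d}" for i in range(1, 41)]  # made-up list of 40 sections

random.seed(2024)  # fixed seed so the assignment can be reproduced and audited
random.shuffle(study_sections)

midpoint = len(study_sections) // 2
treatment = sorted(study_sections[:midpoint])  # term limits imposed
control = sorted(study_sections[midpoint:])    # status quo

print(f"{len(treatment)} sections with term limits, {len(control)} without")
# After a few years, compare outcomes between the two groups (e.g., reviewer
# turnover, share of new investigators funded, later citation impact).
```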
NIH Structure
Several of Sen. Cassidy’s questions are appropriately aimed at macro issues, such as NIH’s organ- and disease-based structure, its overall strategic goals, and how to allocate dollars across the overall portfolio.
These are long overdue questions for Congress to ask. Rather than try to get into the weeds of any particular recommendation(s), we would simply point to the mostly unfulfilled promise of the Scientific Management Review Board. That Board was created in 2006 as a way of handling all of the above issues, and more.
The Board’s overall mandate was to review the “research portfolio” of NIH in order to determine its “progress and effectiveness,” including the “allocation” across different activities; determine the “pending scientific opportunities, and public health needs,” with respect to NIH research; and analyze any potential impacts from reorganizing NIH.
By law, the Board is supposed to report to Congress and HHS at least once every seven years on how they should exercise their existing authority to 1) establish new Institutes (which can happen on the HHS Secretary’s recommendation unless Congress rejects it); 2) reorganize or even abolish any existing Institute (again, on the Secretary’s recommendation unless Congress rejects it); 3) reorganize the Director’s Office; or 4) reorganize particular Institute-level activities. Perhaps most powerfully, the Board’s recommendations would often take effect automatically, unless the NIH Director and/or Congress objected in certain circumstances.
At least in theory, this Board could have been extremely effective at regularly rethinking how we organize biomedical funding in the US. But for some reason, this Board’s full advisory authority has never been used. It has never issued an NIH-wide report on the full spectrum of issues described above (most of its reports have been only on procedural issues and/or narrow issues pertaining to one area of research or one NIH center).
Most notably, the Board did issue a report recommending that the National Institute on Alcohol Abuse and Alcoholism be combined with the National Institute on Drug Abuse. But that report was vetoed by Francis Collins in a statement described at the time as “surprising.”
Someone who was deeply involved with that process wrote to me:
"What a *&^% waste of time and emotion for many serious people--who with deliberative seriousness made the right recommendation (in my view). I was pretty disgusted and I can only imagine how alienating it was to the committee. What Francis did instead was to use his 'political capital' to kill NCRR, which was functioning just fine, in order to start NCATS--which does not seem to have been worth the effort or tradeoff."
The Board has done nothing at all since 2015, in direct violation of 42 U.S.C. § 281(e) (which requires a major report at least once every seven years). Indeed, certain members of the Board were surprised when I reached out to them—they didn’t remember having been on the Board in the first place! One Board member (Nancy Andrews) blamed Francis Collins for the inactivity. As she told STAT, "I had the sense that we were asking questions in areas that they didn’t really want to get into, and I suppose Francis in particular didn’t really want us working on."
In any event, the NIH Reform Act of 2006 put a good idea in place. As Elias Zerhouni (the NIH Director at the time) told me, NIH needed a process to regularly assess how its overall portfolio is doing, and whether to reorganize so as to better achieve its objectives—just like any major institution or corporation. That makes perfect sense. That’s not to say the process will always work, but we won’t know until it is fully used for the first time.
That said, Congress could make some modest changes to the Board’s structure to make it more effective. For example, some folks at NIH have asserted that the statutory requirements are themselves too bureaucratic, such as the requirement that the Board meet at least five times before issuing any report on organizational issues (42 U.S.C. § 281(e)(5)(A)). Congress could easily streamline and reduce any such micromanagement in a new NIH reauthorization. There may also be ways to make the Board more structurally independent from HHS, and especially from the NIH Director, so that the Board’s recommendations aren’t seen as originating from status quo authorities who have an incentive to keep their budgets and organizations the same.
NIH Audits
Sen. Cassidy poses several questions about audits, including NIH’s current practices, whether to increase audits, and whether to expand the Office of the Inspector General. There is an opportunity here to kill two birds with one stone—replace a great deal of typical bureaucracy with random audits instead.
First, official surveys of thousands of federally funded scientists show that they spend upwards of 40 percent of their research time on administration, compliance, and other bureaucratic tasks. This is why scientists so regularly complain about the paperwork burden on Twitter.
To be clear, some senior scientists are able to afford enough administrative staff that they are less affected. But even some senior scientists say the burden is way too high. As Nobel Laureate Thomas Südhof of Stanford told me, “The biggest problem is that administrators who do only administration simply do not understand how much the bureaucratic work takes away from the science. They seem to think that just filling out another form or writing another report is nothing, but if you have to do hundreds of these, that is all you have time to do.”
Even as we make thousands of honest scientists spend endless time on compliance and reports, we have another problem: scientists who commit academic fraud. In the past year or so, we’ve learned that three significant lines of Alzheimer’s research (often funded by NIH) were likely fraudulent (see here, here, and here), and that there were several instances of plagiarism and “data falsification” in work from a cancer lab that had received over $100 million in federal funding over the years. There have been far too many cases like this, and what’s worse, the cases we know of are probably just the tip of the iceberg (there are lots of ways to fake data or analysis that would be far less obvious than the cases that do get caught).
Imagine a world with no police or prosecutors. That basically describes academic research. No one proactively looks for fraud, bogus data, inaccurate analysis, and the like, except for a handful of dedicated folks. There are “offices of research integrity” at the federal and university levels, but they aren’t proactive at all, and are far too slow to act even in cases where the evidence strikes everyone else as obvious. In the Alzheimer’s cases above, the fraud went undetected for over a decade, and none of NIH’s grant reports or compliance activities caught it.
Perhaps ironically, a lot of the bureaucratic burden consists of making researchers file progress reports on what they did — reports that are then basically taken at face value or else ignored. As one researcher at MIT told me, NIH reports always feel to him like regurgitating his articles, and he wishes he could save time by just telling the NIH program officer a list of articles to read. And as another top researcher told me, he has filed many NIH progress reports over the years, but has never heard any feedback on any of them—never a “good job on X,” or “can you explain more about Y,” or anything else.
We could both reduce the bureaucratic burden on scientists and improve scientific reliability if we:
* Stop making scientists fill out so many reports that no one reads, and instead
* Dedicate even 0.1 percent of funding to random audits of the published scientific literature, including replication projects, checks for data quality vs. data manipulation, and more.
Speaking as a taxpayer, it doesn’t benefit me at all when the 95-99% of professors who are honest are forced to spend tons of time regurgitating their articles in NIH progress reports that are never used. What does bother me is the 1% or 5% of professors doing fraudulent or irreproducible research. Thus, we should spend less money and time making everyone fill out useless reports, and more money and time on actual quality control in the cases where it counts.
A tenth of one percent seems like a good starting point. For example, the Medicare/Medicaid budget for 2023 is a total of $1.4 trillion, while the portion devoted to detecting fraud and abuse (the Center for Program Integrity) is about $2.4 billion, or 0.17 percent of the overall budget. If NIH dedicated a mere 0.1 percent of its own budget to the same objective, that could make a huge difference.
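The arithmetic is straightforward. In the sketch below, the Medicare/Medicaid figures are the ones cited above, while the NIH budget figure is our own rough assumption (on the order of $47 billion), used purely for illustration:

```python
# Back-of-the-envelope arithmetic for the 0.1 percent proposal.
medicare_medicaid_budget = 1.4e12  # $1.4 trillion (2023, as cited above)
program_integrity_budget = 2.4e9   # $2.4 billion for the Center for Program Integrity
print(f"CMS program-integrity share: {program_integrity_budget / medicare_medicaid_budget:.2%}")

nih_budget_assumed = 47e9  # assumed ~$47 billion NIH budget, for illustration only
audit_share = 0.001        # the proposed 0.1 percent
print(f"hypothetical NIH audit fund: ${nih_budget_assumed * audit_share / 1e6:.0f} million per year")
```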
In short, we could address both problems at once: stop imposing so many bureaucratic requirements and so much paperwork on scientists, and instead randomly audit a portion of labs or of the scientific literature for signs of poor research practices, irreproducibility, or fraud.
For more, see “Why We Need More Quality Control in Science Funding,” Good Science Project (Oct. 3, 2022).
Academic Funding
Sen. Cassidy asks how academic institutions typically fund researchers’ salaries, and how to think about salary reform. One key problem with biomedical research is the heavy dependence on so-called “soft money,” which means that a researcher is responsible for bringing in a substantial percentage of their own salary via grants. If they fail to get enough grants, they might literally lose their lab and part (or all) of their salary.
In other words, the true “risk” of “high risk” research has nothing to do with NIH or the universities—it’s the risk borne by the individual scientist. As a scientist at Stanford told me, “Universities are really involved with tracking conflicts of interest like outside funders (pharma), etc. But nobody tracks the real conflict of interest: Part of your salary depends on getting grants! That means putting food on the table for your family. It’s a huge existential threat!”
Or as an NIH Institute director told me in an interview, “The salary issue is one that I feel very strongly about. The growth of soft money salaries has been very negative. It’s reasonable for people to get a certain amount of salary from grants. But to be paying 70-90% of someone’s salary isn’t healthy for the system or for those investigators. You’re going to be much more careful about taking risks or about publishing honest conclusions if you’re worried about your salary. The university’s investment in a person and their commitment becomes less if they’re not salaried. That changes the sociology and the culture if everyone is on soft money.”
If we want scientists to be able to think freely about high-risk, transformational ideas, those scientists shouldn’t be worried that they might lose their job or house if they put forward an innovative idea and then miss out on a grant.
Worse than that, the soft money system encourages scientists to play it safe at all levels, and even incentivizes cutting corners or outright fraud. After all, if your job is at stake unless you come up with exciting results, and no one is checking whether those results are true, you might be tempted to invent some extra data.
Soft money serves no one’s interests except for the universities and academic medical centers that want to hire extra people while avoiding the responsibility and commitment to pay their salaries. NIH must try to move the biomedical establishment away from soft money over the next decade.
It will take a long time, because deeply embedded financial relationships can’t be uprooted overnight, which is why we have to start now. The most thoughtful and expert commentators have been pointing out this problem for far too long.
What should NIH do?
Bruce Alberts (former National Academy of Sciences president and former editor-in-chief of Science) already proposed a solution: the NIH could “require that at least half of the salary of each principal investigator be paid by his or her institution, phasing in this requirement gradually over the next decade,” or that “the maximum amount of money that the NIH contributes to the salary of research faculty (its salary cap) could be sharply reduced over time, and/or an overhead cost penalty could be introduced in proportion to an institution's fraction of soft-money positions (replacing the overhead cost bonus that currently exists).”
Indeed, Alberts’ solution is too modest—NIH could require that 100% of research salaries be paid by the researcher’s institution over the next decade. Universities and medical schools would only hire faculty that they were prepared to support, and NIH grants could go towards supporting the extra expenses of research—setting up a lab, hiring a postdoc or a staff scientist, purchasing supplies, and the like. Of course, medical schools will complain to high heaven, but they will be doing so exclusively from financial self-interest, not from any evidence that soft money is actually better for anyone but themselves.
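To show how mechanical the phase-in could be, here is a minimal sketch of two of the levers Alberts describes: a salary cap that ratchets down over a decade, and an overhead penalty tied to an institution’s fraction of soft-money positions. The starting cap, the ten-year horizon, and the penalty rate are all hypothetical knobs for Congress and NIH to set, not figures drawn from Alberts’ proposal.

```python
# Hypothetical sketch of a soft-money phase-out. All numbers are placeholders.
def salary_cap_schedule(start_share: float = 0.9, end_share: float = 0.0,
                        years: int = 10) -> list[float]:
    """Maximum share of a PI's salary that NIH grants may cover, year by year."""
    step = (start_share - end_share) / years
    return [round(start_share - step * y, 2) for y in range(years + 1)]

def overhead_penalty(base_overhead_rate: float, soft_money_fraction: float,
                     penalty_rate: float = 0.2) -> float:
    """Reduce an institution's negotiated overhead rate in proportion to its
    share of soft-money positions (inverting today's implicit bonus)."""
    return base_overhead_rate * (1 - penalty_rate * soft_money_fraction)

print(salary_cap_schedule())                  # 0.9, 0.81, ..., 0.0 over ten years
print(round(overhead_penalty(0.55, 0.8), 3))  # a 55% overhead rate shrinks to 46.2%
```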
NIH and SBIR/STTR
Sen. Cassidy’s question about the efficacy of SBIR and STTR at NIH can be mostly answered by the National Academies’ evaluation that was published in 2022 at Congress’ behest.
That report found that while NIH’s programs have resulted in some marketable discoveries (patents, drugs, etc.), NIH’s 2-step review process is unnecessary and inefficient. That is, NIH treats SBIR/STTR as normal research grants, which means putting them through the normal peer review process—review by outside academics (usually with no industry experience) and then a second review by an Advisory Council.
As the National Academies put it, either NIH should stop classifying these as “research” grants or else Congress should explicitly grant a waiver that allows NIH to do so. The usual peer review process is far too slow and far too focused on traditional academic concerns to be useful for translational efforts (see pages 69-79 of the report).
Even worse, hardly anyone involved with this process at NIH has relevant and recent industry expertise. As the National Academies notes in its typically understated manner, “the expertise on review panels . . . may not be sufficient to meet the statutory purpose of the program,” and “increasing the number of SBIR/STTR program managers with biotech startup or industry experience might help ICs better evaluate the commercial potential of applications” (p. 80).
Finally, NIH’s Phase I timeline and award process, which results in an average award of $327,388 after a process that “averages about 9 months” (p. 77), is too small and too slow to be of much use.
Promising biotech startups can’t wait nearly a year to get funding that would barely pay for one or maybe two (underpaid) scientists’ salaries, unless they either 1) have no other option (in which case NIH is positively selecting for the worst biotech startups), or 2) have so many other funding options that the NIH funding is just gravy on top.
Based on the National Academies’ report, NIH should overhaul the SBIR/STTR program: it should hire more program managers with industry and VC-type experience, and should hand out fewer awards in larger amounts (say, at least $1 million) after a massively streamlined review process. Alternatively, Congress could kill off this program at NIH altogether. Either way, the current paradigm makes no sense.
The Italian paper referenced here does simulations to show that because luck is more important than talent, evening out research funding (spreading the money around, as the Canadians do) would yield more discoveries.
https://blogs.scientificamerican.com/beautiful-minds/the-role-of-luck-in-life-success-is-far-greater-than-we-realized/
While I agree completely with all the points made here, your arguments could have been even stronger by including the Department of Energy Office of Science. DOE OS has a similar budget to NSF and funds 70% of physical science research in the US, but it is unfortunately almost always left out of national conversations about federal science funding. Here are a few points of synergy:
* DOE OS also suffers from the issues of peer-reviewed grants and soft money
* DOE OS programs are managed by pairs of permanent staff and 2-year "detailees" from academia (similar to NSF's "rotators")