Hello from Stuart Buck at the Good Science Project,
Two things you might be interested in reading:
1. Eric Gilliam, a fellow at the Good Science Project, just released a long essay exploring the history of MIT and its very hands-on approach to engineering. He argues that MIT has departed from its historical approach, and that there isn’t any good replacement among top universities that now tend to focus on abstract and theoretical work to the exclusion of hands-on experience. He suggests that this feeds into a larger problem: the loss of manufacturing expertise to other countries, which lessens our ability to innovate in the physical world. Thus, “we may need a new MIT now more than ever.”
It’s a great piece, and better yet, is the first in a three-part series. Stay tuned.
MIT Campus in 1865, from https://www.blackhistory.mit.edu/archive/early-mit-campus-ca-1865
2. James Pethokoukis at the American Enterprise Institute has a fascinating newsletter focused on technological and economic progress: Faster, Please! One of his regular features is “Five Quick Questions For [somebody].”
This week, he did 5 Quick Questions with me. It’s available by subscription at this link, although the full text is below. If you don’t have time to read the whole thing, my favorite part (if I say so myself) is this proposal to reduce bureaucracy and improve reproducibility at the same time:
Perhaps ironically, a lot of the bureaucratic burden (discussed above) consists of making researchers file reports on what they did — reports that are then basically taken at face value. We could both reduce the bureaucratic burden on scientists and improve scientific reliability if we stopped making scientists fill out so many reports, and instead dedicated even just 0.1 percent of funding to independent audits of the published scientific literature.
A side note: In this newsletter edition, I've emailed some folks that I know personally and/or that follow me on Twitter. With the rise of social media algorithms that redirect readers to whatever is viral or has high engagement, fewer and fewer people see tweets about topics like reproducibility or scientific research, even though for many of you, that's probably the reason you followed me in the first place. In light of the disadvantage that the Twitter algorithm seems to place on certain topics, I've taken the liberty of assuming that if you followed me on Twitter, you might want to see my thoughts more than 5% of the time.
Anyway, here’s the whole thing from Faster, Please!:
***************************************************************************
Better technology policy isn’t just about funneling lots more taxpayer money toward more federal R&D, as the pending “chips and science” bill in Washington would do. It’s also important to institute reforms that would generate more bang for our bucks — especially more high-reward, frontier-pushing results. One analysis of this issue that I’ve frequently mentioned in Faster, Please! is the 2020 paper “Are ‘Flows of Ideas’ and ‘Research Productivity’ in secular decline?” by Peter Cauwels and Didier Sornette at ETH Zurich. The researchers conclude that there’s currently an unhealthy imbalance between three key drivers of technological progress: “the efficiency of the low-risk, mostly incremental, exploitative innovation, the serendipity of the medium-risk creative invention, and the boldness of the high-risk explorative discovery.” Too much incremental advance, too little bold discovery.
Fixing that R&D imbalance is one goal of Stuart Buck, executive director of the Good Science Project:
Our mission is to improve the funding and practice of science. Funding agencies should engage in bold experimentation to reduce bureaucracy, fund new ideas, and speed up innovation. Moreover, they should make much more data available so that independent scholars can evaluate the results.
Buck is also a senior advisor to the Social Science Research Council. Formerly, he was vice president of research at Arnold Ventures, where he was involved with many efforts to improve scientific reproducibility. In addition, Buck has a Substack, The Good Science Project Newsletter.
Here is a lightly-edited email chat between Stuart and me.
1/ How significant is the bureaucratic burden for scientists getting funded by government?
Official surveys of thousands of federally-funded scientists show that they spend upwards of 40 percent of their research time on administration, compliance, bureaucracy, etc. Maybe we could quibble with this figure: the response rate to the survey wasn’t anywhere near 100 percent, so perhaps the most aggrieved researchers are the ones who responded. So let’s discount it to 30 percent of researcher time. That still seems like an awful lot. It’s why scientists regularly say things like this on Twitter:
To be clear, the actual burden varies across disciplines, across departments, and across scientists. Some senior scientists are able to afford enough administrative staff that they are less affected. But even some senior scientists say the burden is way too high. As Nobel Laureate Thomas Südhof of Stanford told me, “The biggest problem is that administrators who do only administration simply do not understand how much the bureaucratic work takes away from the science. They seem to think that just filling out another form or writing another report is nothing, but if you have to do hundreds of these, that is all you have time to do.”
2/ Does that burden just make them less efficient or does it also make them less adventurous? Is too much cautious science getting funded as a result?
My hunch is that these factors are fairly orthogonal. For example, DARPA does fund a lot of high-risk science, but the administrative burden of keeping up with reports can take a lot of time. Nonetheless, the burden of bureaucracy probably does go far beyond what we typically discuss. Imagine that you had to spend two hours every day dealing with the Department of Motor Vehicles. The impact wouldn’t just be those literal two hours; it would also be the distraction and dread you’d feel the rest of the day.
3/ How can government fund more high-risk, high-reward science?
Various agencies (including NSF and NIH) have tried to implement programs explicitly aimed at funding high-risk, high-reward science. Those programs should likely be expanded, but they will fall short unless there are significant cultural changes.
Agency leadership should:
1) Tolerate or even demand a higher rate of failure;
2) Give qualified program managers more authority to fund scientific projects (or people) without demanding a consensus from peer review; and
3) Expect that program managers actually use that authority.
To explain a bit further:
Why demand a higher rate of failure? If most research projects are expected to succeed, then the result is that most researchers will propose incremental projects (and often projects that they have mostly completed!), and/or exaggerate their success and impact in ways that hurt reproducibility. But if given more freedom to fail, researchers and program managers will be able to take more risks, and tell the truth about what happened.
As for peer review: In a highly competitive environment where most proposals don’t get funded, a consensus approach to peer review means that almost anything high-risk won’t get funded. Any high-risk project might have a few naysayers, after all.
Finally, agency leaders have to place a high value on actually following through.
Here’s a lesson from an NSF program, the Small Grants for Exploratory Research or SGER program, which was active from 1990 to 2006. That program gave NSF staff the authority to fund smaller grants (up to two years and $200,000) without going through the normal peer review process. An evaluation found that the program was “highly successful in supporting research projects that produced transformative results as measured by citations and as reported through expert interviews and a survey,” but the “funding mechanism was about 0.6 percent of the agency’s operating budget, meaning that the programme was operating far below the 5 percent of funds that could be committed to this activity.” In other words, NSF staff could have used their authority at least 8 times more often.
The same seems to be true of the NSF’s EAGER program (the current program for funding high-risk exploratory research outside of normal peer review). As reported in Science in 2014, “the 5-year-old program doles out only one-fifth of what some senior NSF officials think the foundation should be spending on EAGER grants. … The answer seems to be the absence of outside peer reviewers—generally considered the gold standard for awarding federal basic research grants. Many NSF program officers seem to be uncomfortable with that alteration to merit review. And so a mechanism designed to encourage unorthodox approaches is languishing because it is seen as going too far.” (See also page 51 of the NSF’s recent Merit Review Report, which shows that the EAGER program hasn’t really grown since 2014.)
To me, there’s not much point in establishing high-risk, high-reward programs if agency staff are too nervous to use those programs very often. Agency leadership has to bend over backwards to assure staff that not only are they allowed to fund such research, they are expected to do so.
4/ My sense is that there seems to be more interest by policymakers these days in goal-directed science, science geared toward sectors thought to be promising — next-generation semiconductors, AI, quantum computing — than basic science. Why is basic, curiosity-driven science important?
Whether it’s a War on Cancer, a Human Genome Project, a BRAIN Initiative, or a Cancer Moonshot, you’ve pointed to a perennial phenomenon. It’s understandable: both politicians and the public like to see real progress being made from all the tens of billions they are asked to spend every year. But it’s important to keep in mind that many scientific breakthroughs come from serendipitous places that could never be fully planned. Indeed, many of the most important discoveries can be traced to basic science that at the time seemed frivolous or unfundable.
Just one of many possible examples: CRISPR (the gene-editing technique) can be traced in part to a 1993 publication by Francisco Mojica, who was just finishing up his doctorate at the University of Alicante in Spain. He was studying archaebacteria that thrive in salty environments, and noticed some patterns in their genome that he later named CRISPR (clustered regularly interspaced short palindromic repeats).
As he told an interviewer in 2019, “It was absolutely impossible to anticipate the huge revolution that we are enjoying nowadays.” Not only that, he had a lot of trouble getting funding: “When we didn't have any idea about the role these systems played, we applied for financial support from the Spanish government for our research. The government received complaints about the application and, subsequently, I was unable to get any financial support for many years.”
Paradoxically, while it’s important to fund some research that has an immediate path to impact, focusing too much on impact would actually lead to less impact in the long run. We need to make sure that a significant part of any broad science funding portfolio includes space for truly fundamental research, even if it strikes some people as useless or trivial at the time.
5/ What should science funding do about the replication crisis?
Just in the past week or so, we’ve learned that a significant line of Alzheimer’s research may have been faked, and that there were several instances of plagiarism and “data falsification” in work from a cancer lab that had received over $100 million in federal funding (!) over the years. There have been far too many such cases, and what’s worse, the ones that we know of are probably just the tip of the iceberg (there are lots of ways to fake your data or analysis that would be less obvious than the cases that do get caught).
Imagine a world with no police or prosecutors. That basically describes academic research. No one proactively looks for fraud, bogus data, inaccurate analysis, etc., except for a handful of dedicated folks. Yes, there are “offices of research integrity” at the federal and university level, but as far as I can tell, they aren’t proactive at all, and are far too slow to act even in cases where the evidence strikes everyone else as obvious.
Perhaps ironically, a lot of the bureaucratic burden (discussed above) consists of making researchers file reports on what they did — reports that are then basically taken at face value. We could both reduce the bureaucratic burden on scientists and improve scientific reliability if we stopped making scientists fill out so many reports, and instead dedicated even just 0.1 percent of funding to independent audits of the published scientific literature, including replication projects (such as the Reproducibility Project in Cancer Biology, which I funded while at Arnold Ventures), checks for data quality vs. data manipulation, and more.
A tenth of one percent seems like a good starting point. For example, the Medicare/Medicaid budget for 2023 is a total of $1.4 trillion, while the portion devoted to detecting fraud and abuse (the Center for Program Integrity) is about $2.4 billion, or 0.17 percent of the overall budget. If NIH and NSF dedicated a mere 0.1 percent to the same objective, that could make a huge difference.
⭐ Bonus: What do you think of the idea of establishing R&D hubs across the nation? To me, the idea seems more about attracting political support for more funding than doing better R&D.
Your intuition seems mostly correct to me, and from a purely scientific point of view, we shouldn’t establish an R&D hub in South Dakota just for the sake of doing something in South Dakota. Nonetheless, government funding is inherently political, and it seems important to maintain political buy-in. Indeed, one could go further than that: Science funding has historically been bipartisan. (In 1999, Newt Gingrich wrote that the “highest investment priority in Washington should be to double the federal budget for scientific research.”) But these days, there is arguably a growing divide between federal science agencies and a substantial portion of the American public that is suspicious about COVID policies, vaccines, etc., not to mention climate change and more.
There are rational reasons, of course, to disagree with how the CDC and FDA have handled COVID, and to wonder whether the NIH unwisely funded gain-of-function research. But in a time of deepening political division, it seems dangerous if national science policy in the US starts to be viewed as a partisan issue with widespread distrust. That’s a good reason for agencies to take actions (like investigating reproducibility) that would help them deserve trust. But spreading at least a little scientific funding around the country could also help build a broader base of appreciation for and willingness to keep funding science.