Back in 1980, Berkeley physicist Richard Muller published a piece in Science that is one of the most compelling articles I’ve ever read on how government funding affects scientific innovation. The title is “Innovation and Scientific Funding,” and I retyped some of the passages here for easier access.
The first notable quotation explains why bureaucratic requirements, no matter how well motivated, can be a death knell for good research.
The periods of preparation and incubation are the most fragile in the innovation process, and more attention should be paid to them. Many of the procedures followed in the scientific funding process have the unintended effect of suppressing these stages. To stop the growth of a tree it is not necessary to chop the tree down; it is sufficient to continuously clip off the top. The procedures and restrictions that do the damage were created to achieve a measurably good effect while causing unmeasurably small harm. One of the obstacles to scientific innovation in the United States may be the cumulative effect of many regulations, each one of which does unmeasurably small harm.
Exactly. This is why it’s so hard to make reforms here. For any regulation or bureaucratic requirement you can name, someone will say it is an absolutely essential element of oversight that poses but a trivial imposition. But add up all of these requirements, and the cumulative effect is devastating.
Muller went on to give two examples of how this plays out:
When E. O. Lawrence was the director of the Radiation Laboratory at the University of California, Berkeley, he encouraged his graduate students to practice machining in the shops after hours. He knew that they would become expert machinists much more quickly if they took this opportunity to work on personal projects. Wear and tear on the machining tools would be negligible and the skill gained would improve research.
Now government law prohibits this effective learning method. As a result, few scientists are proficient machinists, and few learn the capabilities and limitations of machine shop tools. Without this knowledge (acquired during the scientists' spare time), the scientist is unlikely to be able to design state-of-the-art hardware.
Restrictions on foreign travel also have a severe effect on innovation. Science is international in scope, and participation in foreign conferences is exceedingly important in the preparation stage. The number of experts in a given area is small; topical conferences provide an excellent way to meet and talk with them. Yet foreign travel is strictly limited, and that which is allowed is encumbered by special restrictions (for example, U.S. carriers must be used) unless the inconvenience is substantial.
The importance to my research of several international conferences is clear to me, yet I attend such meetings far more rarely than I should. I do not know whether the restrictions on foreign travel were created to save money, benefit U.S. airlines and the balance of trade, or prevent the appearance of a boondoggle.
But I am sure that a cost-benefit analysis would show the foolishness of these restrictions when applied to basic research, especially if the substantial harm to preparation could somehow be quantified.
Muller then tackles the problem of grantsmanship versus actual research skills:
The most fundamental mistake made by the funding agencies is in assuming that the ability to write good proposals is equivalent to the ability to accomplish good research. In response to a query I made to the NSF, I was told that a proposal should be as "polished" as a paper published in a major journal. Referees frequently expect all potential problems to be identified and their solutions outlined.
Unfortunately, it is not an exaggeration to say that the agencies expect a proposal to outline the anticipated discoveries. We should not expect research proposals to read like engineering proposals. To require that the solutions to all problems be obvious before the research is begun discriminates strongly against innovative work.
Finally, Muller sketched out some ideas as to how we ought to reward government program officers:
As I mentioned earlier, my own best work was begun during periods when it might have looked to an outsider that I was wasting time. A physicist's career is judged by his peers on the basis of his accomplishments, not his efficiency. We should apply the same principle to the funding of science. A funding agency should not be criticized for its mistakes if it has a good record of taking risks that bore fruit. In fact, one should regard with suspicion a funding agency whose projects always succeed, since constant success may indicate an overly cautious approach. It is easy to fund the established scientist who continues to work in his established field. It is risky to fund the scientist working in an area that is not yet established, or a young scientist working in a field that has many experienced researchers. When Warren Weaver retired as head of the Rockefeller Foundation, he said that his proudest achievement was that he had given substantial research support to all the Nobel Prize winners in medicine and physiology before they won the awards. . . .
In U.S. funding agencies there appears to be little reward for initiative; on the contrary, the contract monitors can get into trouble for making a decision that might be counter to some official policy. The dreaded result of funding a project far from the mainstream of scientific work is a Golden Fleece Award. There are a plethora of rules and regulations that must be followed, and it is safer to turn down requests (or to delay them by submitting them to superiors for approval) than to take a chance. Taking a risk by funding an innovative project can lead to trouble, and there are many projects that are risk-free and whose support can easily be defended. . . . Not only should we stop punishing those who support innovative research, we should encourage and reward them.
Perhaps the best way to do this would be to give special recognition (a small cash reward, for example) to monitors who have done a particularly good job in supporting innovative research. This would not only reward the monitor, but increase his prestige and alert others to the importance of recognizing and supporting innovation. Anybody could nominate a monitor, including scientists or superiors in the funding agency, but the award committee should be composed of scientists familiar with the problems of innovation and of those persons in the funding agencies most familiar with the problems of funding science. There might be a similar award for those who distribute money locally at the national laboratories.
This latter idea is brilliant, and has been echoed more recently by others:
Michael Nielsen and Kanjun Qiu:
A "Nobel prize" for funders: The early stages of important discoveries often look strange and illegible: people grappling with fundamental ideas in ways at the margin of, or outside, conventional wisdom. Since such projects often look like anything but sure bets, there is a strong incentive for funders to delay support – just when it is most needed – in order to avoid looking foolish. This is especially true of individual program managers, who naturally shy away from funding things that may later seem silly or frivolous. It's striking to contrast this situation with venture capital, where there is a strong incentive to fund in the earliest stages, when stock is cheap because of the uncertainty; the net result is more chance of looking silly when things fail, but also a much larger windfall if things work out. It's interesting to think about ways of rewarding the science funders – especially individuals – who are first to put their own reputations on the line to support such projects. This can be done in many ways: one natural way is to create one or more prizes to publicly recognize such brave funders.
As a society and a political system, we need to develop a better set of antibodies to the opportunism that leaps on each failure and thereby smothers success. We need the political will to fail. Finding stories of success will help, yes, but at a deeper level we need to valorize stories of intelligible failure. One idea might be to launch a prestigious award for program managers who took a high-upside bet that nonetheless failed, and give them a public platform to discuss why the opportunity was worth taking a shot on and what they learned from the process.
It seems long past time to try out this idea as a way of reducing risk aversion among scientific funders.
The more things change, the more they stay the same.