Interview: Jeffrey Marqusee (formerly a DOD research funder)
Jeff Marqusee is a nationally known expert in environmental and energy science and technology with more than 20 years of experience in leadership roles in research, technology development, and policy aimed at making DOD a more sustainable and effective organization. At DOD, he was the first director of the Environmental Security Technology Certification Program and also led the Strategic Environmental Research and Development Program.
His expertise spans climate adaptation, resiliency, energy, and sustainable buildings, with a focus on innovations to improve DOD’s environmental and energy performance, reduce its costs, and enhance its mission capabilities. He has a Ph.D. from MIT in Physical Chemistry and currently serves as a senior research adviser for DOE’s National Renewable Energy Laboratory.
Can you briefly describe your research career at DOD?
I spent my career at the Department of Defense, where I led our research programs on environmental and energy issues. Our aim was to identify and research the most promising technologies that would serve DOD’s environmental priorities. For example, our program supported R&D on renewable “microgrids” for storing and distributing energy so that DOD wouldn’t be so dependent on diesel generators in the event of extended outages.
Other examples include:
R&D on green alternatives for aviation components that would eliminate exposure to carcinogenic materials. Technologies that came out of the program are now used across DOD as well as in commercial aviation.
R&D that qualitatively changed how we remediate contaminated groundwater. DOD and the private sector use the results across the country. In roughly a decade, cutting-edge research published in Science went from the laboratory to widespread deployment in a highly regulated industry.
How did your program solicit and fund so-called “high risk” research proposals?
In our science research program, all proposals were peer-reviewed. But both my program managers and I began to feel we weren’t taking enough risk. Too many of the proposals were limited to incremental ideas.
With the concurrence of our external science advisory board, we set aside some funding for high-risk research. We either issued calls dedicated solely to high-risk approaches or allowed investigators to submit high-risk proposals to our regular calls.
The high-risk proposals were not peer-reviewed. Instead, they were reviewed only by me and the appropriate program manager. We all had advanced science degrees but were not necessarily experts in the specific field.
We would provide limited funding for 12 to 18 months to develop initial data or evidence that the idea had potential. If it did, the investigator could submit a large multi-year proposal, which would be sent out for peer review before being funded.
Were there any anomalies in how peer reviewers viewed such proposals?
Indeed.
The first year of this new process, we accidentally conducted an experiment. We had funded a high-risk proposal that produced extremely promising results, well beyond our expectations. We asked for a follow-on proposal and sent it to multiple peer reviewers.
But because it was the first time, we forgot to send along the results of the first year of work, which had not yet been published.
Every review came back with phrases like “excellent team,” or “important problem,” but then would conclude: “IMPOSSIBLE: DO NOT FUND.”
We quickly realized our mistake and sent out the report on the investigator’s first year of work.
All the peer reviewers came back saying, “Wow, you must fund this.”
How did this experience affect your view of how best to fund research?
This unintended experiment had a big impact on how I ran the programs and led us to make investments that I think had much bigger impacts than you typically see in funded efforts.
Can you say more about that? How do your results compare with NSF’s?
NSF’s mission and our mission were not identical, but we often funded the same investigators and tackled similar problems. Like NSF, we funded basic research, but we linked it to later-stage development and eventually to demonstrations. Unlike NSF, we had a client who needed the results of our R&D in both the near and long term. This allowed us to create a feedback loop between research, development, demonstrations, and end users that accelerated the transition of knowledge and provided important input back to the research community.
In addition, we would fund R&D projects at a much higher level than most NSF grants. I believe that the impact per dollar grows as you fund larger projects that can integrate bigger teams and take greater risks. Particularly as R&D has gotten more expensive, funding projects at modest levels, as NSF often does, chews up the time and resources of investigators. But funding larger projects requires one to be more strategic.
What do you think about peer review in light of the above experiences?
Peer review is an incredibly useful tool. But it is only a tool and should not be used by itself to make decisions. Peer review avoids risk and tends to favor traditional disciplinary lines of investigation. It is also very poor at judging an R&D project’s potential value and future success.
In some sense there are two end points in the use of peer review (or lack of it) in funding R&D. In the NSF model, funding decisions are driven by peer review. The DARPA model does not use peer review in the traditional sense (and certainly not to make funding decisions), and it funds only high-risk, game-changing R&D.
I think important R&D, in a mission sense, includes both high-risk and incremental investments. I tried to structure a program that included peer review but allowed risk-taking bounded by good science and engineering.