The Paradox of Progress: Trying to Predict Impact Often Prevents the Highest Impact
Guest article by Aishwarya Khanduja
Aishwarya is a curiosity-driven systems thinker working at the intersection of technology and society. Her interests span human-AI collaboration, social technology, knowledge systems, and complex adaptive systems.
A Cambridge alumna and Cosmos Ventures grantee, she's the founder of The Analogue Group, an R&D fund through which she's reimagining how we approach intractable problems by embracing complexity.
In 1999, Google's founders couldn't sell their company for $1 million.[1] In 1995, Katalin Karikó faced demotion at the University of Pennsylvania, her mRNA research repeatedly rejected for funding.[2] But today, Google's parent company is worth over a trillion dollars, and Karikó's work formed the foundation for COVID-19 vaccines that helped end a global pandemic.
These stories illuminate a fundamental paradox in how we approach scientific progress and innovation funding: the highest impact often has nothing to do with predicted impact at the time.
The Problem with Predictability
Our current system of scientific funding operates on what might be called the "train schedule" model of progress: we expect innovations to arrive on time, following predetermined routes, with clear destinations. This manifests in several ways:
Funding agencies demand detailed roadmaps and virtually guaranteed outcomes before work begins
Grant proposals must specify exact deliverables and timelines
Political pressure pushes agencies toward "safe" incremental research
Researchers must even demonstrate preliminary results before receiving the funding intended to generate those very results
The irony is stark: we've created a system that would have rejected many of history's most transformative breakthroughs. Consider examples like these (there are many more):
Douglas Prasher's work on green fluorescent protein, crucial to Nobel Prize-winning research, went unfunded, leading him to drive a courtesy car for a living[3]
Robert Langer at MIT faced rejection on his first nine grant proposals for work on biodegradable polymers[4]
The team that discovered how to manufacture human insulin was rejected because reviewers judged the work "too complex" to complete within three years[5]
Craig Venter's proposal for whole genome shotgun sequencing was rejected with claims it wouldn't work—after he had nearly completed the genome[6]
The Political Dimension: From Speed to Scrutiny
The story of American innovation reveals a stark contrast between past and present approaches to funding and executing ambitious projects. During World War II and the early Cold War, America demonstrated an extraordinary ability to move quickly and take risks. The Pentagon rose from nothing in just 16 months. The Manhattan Project transformed theoretical physics into world-altering technology in less than four years. The Apollo Program went from concept to moon landing in under a decade. At Lockheed's Skunkworks, Kelly Johnson designed the SR-71 Blackbird—still the fastest manned aircraft ever made—using pencils and slide rules.
This era of rapid innovation wasn't just about isolated projects. In the 1950s alone,[7] America developed five generations of fighter jets, three generations of manned bombers, two classes of aircraft carriers, submarine-launched ballistic missiles, and nuclear-powered attack submarines. The key was prioritizing speed and breakthrough results over procedural concerns. As Christian Brose notes in "The Kill Chain," when America was serious about progress, "The paramount concern was picking winners: the priorities that were more important than anything else, the people who could succeed where others could not, and the industrialists who could quickly build amazing technology that worked."
In defense, at least, this changed dramatically in the mid-1960s. Under Defense Secretary Robert McNamara, the introduction of the Planning, Programming, and Budgeting System (PPBS) marked a shift toward bureaucratic control and risk aversion. This system, designed to eliminate wasteful spending, instead created a labyrinth of requirements, planning processes, and resource allocation procedures that made rapid innovation nearly impossible.
The same pattern likely holds for civilian scientific funding agencies (including the NIH and NSF), where a risk-averse approach to funding has been further exacerbated by political pressure. From 1975 to 1988, Senator William Proxmire's "Golden Fleece Awards" publicly mocked seemingly frivolous research projects, creating a chilling effect on funding for unconventional ideas. His targets included:
NSF research on alcohol's effects on fish behavior
Studies of social dynamics in various settings
Basic research that appeared to lack immediate practical applications
This political theater ignored a crucial truth: breakthrough innovations often emerge from research that initially appears irrelevant or even absurd. The impact extended far beyond the specific projects Proxmire targeted, creating what scientists now call "the Proxmire Effect"—a systemic bias toward research that's easy to explain and defend to lay audiences, regardless of its actual scientific merit. The new Department of Government Efficiency has shown an unfortunate tendency thus far to mock isolated studies based on one-line summaries.
Today's funding agencies feel intense pressure to fund only research that is easy to explain and defend to lay members of the public and to Members of Congress, the vast majority of whom are not trained as scientists. This has led to several problematic outcomes:
Self-Censorship: Researchers frame their work in increasingly conservative terms, avoiding novel or unconventional approaches.
Risk-Averse Funding: Institutions prefer incremental research with guaranteed outcomes over potentially transformative projects with uncertain results.
Political Cover Over Merit: Program officers prioritize projects they can easily defend to congressional oversight committees rather than those with the greatest scientific potential.
The cruel irony is that this system would have rejected many of history's most transformative breakthroughs. Alexander Fleming's "messy" lab work led to penicillin. GPS technology emerged from basic research into satellite signals. Even the discovery of cosmic microwave background radiation—crucial to our understanding of the universe's origin—began with scientists investigating interference in telephone signals.
The Adjacent Possible: A Better Model for Progress
Rather than forcing innovation into predetermined paths, we need to embrace what scientists call "the adjacent possible"—the realm of what's just beyond our current capabilities but within reach. This concept helps explain why genuine breakthroughs often appear obvious in hindsight: they were always there, waiting for someone to connect the dots.
Consider Google's evolution. Their journey from search engine to AI powerhouse wasn't planned—it emerged organically as they built capabilities in computing infrastructure and machine learning. These developments later enabled breakthroughs like AlphaFold, which revolutionized our understanding of protein structures.
Toward a New Funding Paradigm
To foster transformative innovation, we need to fundamentally reshape our approach to scientific funding:
Embrace Uncertainty: Fund talented people and interesting ideas, even when the outcomes aren't clear. Move away from the funding models that led Mina Bissell to say, "If you have an original idea or you're really making a huge jump, you should expect not to get funded. If you do, it means people already largely understand it."
Reduce Political Pressure: Congress and the public must understand that breakthrough science often looks weird or wasteful at first glance. We need political cover for all funding agencies (not just DARPA) to take calculated risks.
Balance Portfolio Risk: Just as venture capitalists expect many failures to enable a few huge successes, funding agencies should maintain portfolios that include high-risk, high-reward projects.
Reform Peer Review: Current peer review systems often favor incremental progress over revolutionary ideas. Karim Lakhani and his colleagues showed in a randomized experiment that peer reviewers are biased towards negativity, leading to "more conservative allocation decisions that favor protecting against failure rather than maximizing success." As Nobel laureate Roger Kornberg observed, "If the work that you propose to do isn't virtually certain of success, then it won't be funded."[8]
Create Space for Serendipity: Fund diverse approaches and create environments where different disciplines can collide and combine in unexpected ways.
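The portfolio logic in the recommendations above can be sketched numerically. The sketch below uses entirely hypothetical numbers (success rates and payoffs are illustrative, not calibrated to real funding data) to show why a portfolio of long shots can outperform a portfolio of safe bets in expectation:

```python
import random

random.seed(0)

def portfolio_value(n_grants, p_success, payoff, trials=10_000):
    """Simulate the average total payoff of a portfolio of identical grants.
    All parameters are illustrative assumptions, not real funding statistics."""
    total = 0
    for _ in range(trials):
        # Each grant independently succeeds with probability p_success.
        total += sum(payoff for _ in range(n_grants) if random.random() < p_success)
    return total / trials

# "Safe" portfolio: 100 incremental projects, 90% succeed, modest payoff each.
safe = portfolio_value(100, p_success=0.9, payoff=1)

# "Risky" portfolio: 100 long shots, only 5% succeed, but each success is transformative.
risky = portfolio_value(100, p_success=0.05, payoff=30)

print(f"safe portfolio ≈ {safe:.0f}, risky portfolio ≈ {risky:.0f}")
```

Even though 95 of the 100 risky projects fail on average, the few large successes dominate: the expected value of the long-shot portfolio exceeds the safe one whenever the payoff asymmetry is large enough, which is exactly the venture-capital intuition the article invokes.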
Learning from Our Mistakes
The science funding community needs systematic study of its "anti-portfolio"—the breakthrough projects it initially rejected. This analysis could reveal patterns and help develop better funding mechanisms. As one Nobel Prize winner in chemistry noted, the current system requires "sufficient early data and preliminary results to convince the study section that it was worth funding"—a catch-22 that stifles truly novel research.
The messiness of progress is a feature, not a bug
The most transformative breakthroughs often emerge from the freedom to explore unexpected paths, from the courage to pursue questions without guaranteed answers, and from the very projects our traditional systems reject.
To truly capitalize on human ingenuity, we must create funding systems that embrace this reality. This means tolerating greater risk-taking, providing political cover for funding agencies, and recognizing that the path to breakthrough innovation is rarely straight or predictable.
The future of scientific progress depends not on our ability to control and predict outcomes, but on our willingness to create environments where the unexpected can flourish. In embracing this messier, more organic approach to progress, we might finally unlock the solutions to our most pressing challenges.
[1] In early 1999, Page and Brin approached Excite, a popular web portal at the time, with an offer to sell Google for $1 million. Excite's CEO George Bell rejected the offer outright. Vinod Khosla, one of Excite's venture capitalists, even managed to negotiate the price down to $750,000, but Bell still declined. The primary reason for Excite's refusal wasn't the price tag, but rather Google's insistence on replacing Excite's existing search technology. According to Bell, "Larry Page insisted that we have to rip out all of the Excite search technology and replace it with Google." This demand was seen as too disruptive to Excite's existing operations and culture. This rejection turned out to be one of the most significant missed opportunities in tech history. Google went on to secure $25 million in Series A funding just a few months later in June 1999. By 2004, when Google went public, the company was valued at over $23 billion. Today, Google's parent company Alphabet has a market capitalization of over $1.6 trillion.
[2] In 1995, UPenn gave Karikó an ultimatum: abandon her mRNA research or face demotion and a pay cut. The university was dissatisfied with Karikó's inability to generate adequate funding for her research, which at the time was considered impractical by many in the scientific community. As a result, Karikó was demoted from a position that would ordinarily have led to a tenured faculty role.
Karikó faced persistent difficulties in securing grants for her mRNA research:
She submitted numerous grant applications, all of which were rejected.
Her salary remained stagnant for years, even as costs increased, leaving her earning less than lab technicians.
Throughout her career, Karikó was never awarded a single major grant from the National Institutes of Health.
[3] In the late 1980s, Douglas Prasher was the first to realize the potential of Green Fluorescent Protein (GFP) as a tracer molecule. He successfully cloned the GFP gene while working at Woods Hole Oceanographic Institution, a crucial first step in using GFP as a tracer in organisms other than jellyfish. Despite the promise of his research, Prasher faced significant funding obstacles:
His application to the National Institutes of Health for funding was rejected.
By 1991, he was unable to secure further research funding, forcing him to leave academia.
His initial grant from the American Cancer Society of $220,000 in 1988 dried up after four years, preventing him from continuing his research.
[4] Robert Langer's early career at MIT was marked by significant challenges in securing funding for his groundbreaking work on biodegradable polymers for drug delivery. This period of rejection and perseverance highlights the often difficult path of innovative research. Langer faced repeated rejections when applying for grants to support his research on biodegradable polymers:
His first nine research grant applications to the National Institutes of Health (NIH) were turned down.
Reviewers were skeptical about the feasibility of synthesizing the proposed polymers.
Even after successfully synthesizing the polymers, grant reviewers raised concerns about potential reactions with drug molecules.
[5] In 1976, a team of scientists led by Herbert Boyer at Genentech and Arthur Riggs at City of Hope National Medical Center proposed a groundbreaking method to produce human insulin using recombinant DNA technology. However, their initial grant application to fund this research was rejected. The reviewers of their grant application stated that the project was "too complex and couldn't possibly be completed in 3 years." This rejection exemplifies how revolutionary ideas can sometimes be met with skepticism due to their perceived complexity or ambitious timelines.
[6] In the mid-1990s, Venter applied for an NIH grant to use whole genome shotgun sequencing on Haemophilus influenzae. However, the NIH rejected his proposal, claiming that the method would not work. This rejection came despite the fact that Venter had already made significant progress on the project.
[7] This paragraph is inspired by Jeremy Stern's profile of Palmer Luckey, which also touches on the theme of American Dynamism.
[8] The current peer review system, while designed to ensure scientific rigor, has become a significant barrier to transformative research. Traditional peer review often operates as a consensus-seeking mechanism, inadvertently filtering out the very proposals that might lead to breakthrough discoveries. The challenge lies not just in identifying quality research, but in recognizing potentially transformative ideas that may appear risky or unconventional at first glance.
Modern peer review systems need to embrace a multi-track approach. High-risk, high-reward proposals could be evaluated through specialized panels that include both domain experts and innovative thinkers from adjacent fields, bringing diverse perspectives to the evaluation process. For instance, the Howard Hughes Medical Institute's review process incorporates "risk panels" specifically tasked with identifying promising but unconventional research directions. Agencies could also explore different ways to weight positive votes against negative ones: the National Science Foundation, for example, is trying a "golden ticket" model in which one emphatically positive vote can outweigh several low ratings, giving outside-the-box ideas an opportunity to get funded if just one peer reviewer loves them.
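The golden-ticket idea can be expressed as a simple decision rule. The sketch below is illustrative only: the 1-5 scoring scale, the funding threshold, and the function names are assumptions for the example, not the NSF's actual procedure:

```python
def fund_decision(scores, threshold=3.5, golden_tickets=()):
    """Decide whether to fund a proposal from reviewer scores (assumed 1-5 scale).

    Standard rule: fund if the mean score clears the threshold.
    Golden-ticket rule: any reviewer holding a ticket who awards the
    maximal score forces funding, regardless of the other ratings.
    (Hypothetical scale and threshold, for illustration only.)
    """
    # A golden-ticket holder's top score overrides the consensus.
    if any(scores[i] == 5 for i in golden_tickets):
        return True
    # Otherwise, fall back to the usual mean-score rule.
    return sum(scores) / len(scores) >= threshold

# An unconventional proposal: one reviewer loves it, the rest are lukewarm.
scores = [5, 2, 2, 3]

consensus = fund_decision(scores)                        # mean is 3.0, below threshold
with_ticket = fund_decision(scores, golden_tickets=[0])  # reviewer 0's 5 forces funding
```

Under pure averaging the lukewarm majority sinks the proposal; with a golden ticket, the single enthusiastic reviewer carries it, which is exactly the asymmetry the model is designed to create.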
Technology can also transform the peer review process. Machine learning algorithms could help identify patterns in successful breakthrough research, while blockchain-based systems could create transparent, tamper-proof review records. Digital platforms could enable rapid feedback loops between reviewers and researchers, allowing for iterative improvement of proposals rather than simple binary accept/reject decisions.
A reformed peer review system should also address the "experience bias" that often disadvantages early-career researchers. This could include anonymous first-round reviews, where proposals are evaluated solely on their merit without institutional affiliations or researcher track records. The European Research Council has experimented with this approach, leading to increased funding for innovative young researchers. Additionally, review panels could include early-career scientists, whose fresh perspectives often help identify cutting-edge research directions.
From my short experience inside the EU R&D funding bubble, it's more nuanced: there are simply far too many good proposals for the limited funds available. So agencies also fall back on unrelated criteria to decide. Having women as PIs, I recently heard, is one way to stand out among otherwise equal competitors; existing research clusters with industry ties also get priority. These are not evidence-based approaches. Politics plays a bigger role in this vacuum of indecisiveness than one might think.
Maybe we should track all research proposals, and the careers of the people behind them, for a while to show that there simply isn't enough money. Then we could think more carefully about the evidence for distributing limited resources better.
Nice diagnosis of a serious problem. Your recommended solution is "finding talent" - but how do we find it, if not by peer review? One comment: "We need political cover for all funding agencies (not just DARPA) to take calculated risks." Calculated? Rather, incalculable. I appreciate that you raise these issues.