For context, start here with Part 1 of Ben Reinhardt’s monograph on Unbundling the University. See more of Ben’s work at Speculative Technologies.
1. Changes to the research ecosystem are bottlenecked by where the work is done
Our ability to generate and deploy new technologies is critical for the future. Why new technology matters depends on who you are: economists want to see total factor productivity increase, politicians want a powerful economy and military, nerds want more awesome sci-fi stuff, researchers want to be able to do their jobs, and everybody wants their children’s material life to improve.
Uncountable gallons of ink and man-hours of actual work have been poured into improving the research ecosystem — from how papers are published and how grants are made to creating entirely new centers and accelerators. But most of these efforts go to waste.
It is almost impossible to change a system when the people who are doing the actual work — the inventing and discovering — are still heavily embedded in the institutions that created the need for systemic improvement in the first place. To unpack that:
1. Universities (and academia more broadly) are taking over more and more work that doesn’t have immediate commercial applications. In other words, academia has developed a monopoly on pre- and non-commercial research.
2. The friction and constraints associated with university research have increased over time.
Combined, points #1 and #2 mean that you won’t be able to drastically improve how our research ecosystem works without drastically changing the university or building ways to fully route around it.
There are many reasons for doing research at universities. Universities have a lot of (often underused) equipment that is rare or expensive – shockingly many pieces of equipment and bits of tacit knowledge exist in only one or two places in the world. Universities have graduate students and postdocs, who provide cheap labor in exchange for training. Perhaps most importantly, universities are where the people with experience doing research are: spinning up a new research location from scratch is slow and expensive; hiring people full-time locks you into particular research projects or directions.
Both for these concrete reasons and because it’s the cultural default, most efforts to enable pre-commercial research involve funding a university lab, building a university building, or starting a new university-affiliated center or institute. But doing so severely constrains speed, efficiency, and even the kind of work that can be done. (You can jump back to the executive summary for a blow-by-blow of how these constraints play out.)
Behind closed doors, even people in organizations like DARPA or ARPA-E will acknowledge that the frictions imposed by working via academic organizations limit their impact. Despite large budgets and significant leeway about how to spend them, the law requires ARPA program leaders to act through grants or contracts to existing institutions instead of hiring people directly. Those rules almost inevitably mean working with universities. Most new research organizations are no different: they still depend on people working for universities and in university research labs to do the actual hands-on work.
We often think of research as creating abstract knowledge, but the reality is that a lot of that knowledge is tacit – it lives only in people’s heads and hands. To a large extent, technology is people. If those people are working in an institution that judges them on novelty, they are going to build technology that is novel, not necessarily useful. If they are working in an institution that judges them on growth, margins, or relevance to existing products, they’re going to tune technology in those directions, rather than towards impact.
An aside: how technology happens and pre-commercial technology work
To really understand why a university monopoly is so bad for our ability to create and deploy new technology, it’s important to briefly unpack how technology actually happens and why there’s a big chunk of that work that isn’t done by rational, profit-seeking actors like companies.
How does technology happen?
Many people (even very smart, technically trained ones!) imagine that the way new technology happens is that some scientist is doing “basic science” — say, measuring the properties of Gila monster saliva because Gila monsters are freaking sweet — when all of a sudden they realize “aha! This molecule in Gila monster saliva might be very useful for lowering blood sugar!” The scientist (or maybe her buddy) then figures out how to make that molecule work in the human body with the idea that it will be a useful drug, i.e. “applied science.” Once that works they figure out how to package it up as a drug, get it FDA approved, and start selling it, i.e. “development.” Once the drug is out in the world, people discover new applications, like suppressing appetite. This is indeed what happened with GLP-1 agonists.
That is not actually how most technology happens.
In reality, the process of creating new technology looks more like how the transistor (which drives all of modern computing) came to be: In the 1920s, Julius Lilienfeld (and others) realize that it would be pretty sweet if we could replace fiddly, expensive vacuum tubes with chunks of metal (okay, technically metalloid). Several different groups spend years trying to get the metalloid to act like a vacuum tube and then realize that they’re getting nowhere and probably won’t make any headway just by trying stuff – they don’t understand the physics of semiconductors well enough. Some Nobel-prize-winning physics later, the thing still doesn’t work until some clever technicians figure out how to machine the metalloid just right. The “transistor” technically works then, but it’s not actually useful — it’s big, expensive, and fiddly. It takes other folks realizing that, if they’re going to make enough transistors to actually matter, they’ll need to completely change which metalloid they’re using and completely reinvent the process of making them. This process doesn’t look anything like a nice linear progression from basic research to applied research to development.
Technology happens through a messy mix of trying to build useful things, shoring up knowledge when you realize you don’t know enough about how the underlying phenomenon works, trying to make enough of the thing cheaply enough that people care, going back to the drawing board, tinkering with the entire process, and eventually coming up with a thing whose combination of capabilities, price, and quantity means people actually want to use it. Sometimes this work looks like your classic scientist pipetting in a lab or scribbling on a whiteboard, sometimes it looks like your soot-covered technician struggling with a giant crucible of molten metal, and sometimes it looks like everything in between. All this work is connected in a network that almost looks like a metabolism in its complexity. (All credit for this analogy goes to the illustrious Tim Hwang.)
Ultimately, this work needs to culminate in a product that people beyond the technology’s creators can use: in a money-based economy, someone eventually needs to buy the manufactured technology. But the work to create a technology is often insufficient for a successful product: you need to do a lot of additional work to get the thing into the right form factor and to sell it (or even to give it away and have people actually use it). Very few people want to buy a single transistor or an internet protocol: they want a GPU they can plug straight into a motherboard or a web application.
(This is a speedrun of a much larger topic: if you want to go deeper, I recommend Cycles of Invention and Discovery and The Nature of Technology: What It Is and How It Evolves as a start.)
Some of the work to create technology is a poor commercial investment (pre-commercial technology work)
Imagine putting all of the work to create a useful technology on a timeline. If you draw a line at some point in time, you can ask “what is the expected value for an investor of all the work after this point (assuming all future work is funded by investment and revenue)?” There will be some point after which the expected value of the remaining work is large enough that funding it is a good investment. All the work before that point is pre-commercial technology research. In other words, pre-commercial technology research is work that has a positive expected value, but whose externalities are so large that private entities cannot capture enough of that value for funding the work to be a good investment.
Of course, reasonable people will disagree strongly over where that line is. Some people would argue that any valuable work should be a good investment – these folks would put the line at t=0. Others believe it’s after a successful demo, and still others believe it’s at the point where there’s a product to sell.
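To make this framing concrete, here is a minimal toy sketch in Python (not from the original essay; the function name, the 15% hurdle rate, and the dollar figures are all invented for illustration). It walks a hypothetical timeline of research costs and finds the earliest year at which funding all of the remaining work is a good investment for someone demanding that hurdle rate:

```python
# Toy model: given hypothetical yearly costs and a final payoff, find the
# earliest year at which funding all *remaining* work clears an investor's
# hurdle rate. Everything before that year is "pre-commercial" under these
# assumptions.

def earliest_commercial_cutoff(costs, payoff, hurdle_rate=0.15):
    """Return the first year t at which the net present value (computed at
    year t) of the remaining costs plus the final payoff turns positive."""
    horizon = len(costs)
    for t in range(horizon):
        npv = 0.0
        # Discount each remaining year's cost back to year t.
        for k, cost in enumerate(costs[t:]):
            npv -= cost / (1 + hurdle_rate) ** k
        # The payoff arrives when the work finishes, at the end of the timeline.
        npv += payoff / (1 + hurdle_rate) ** (horizon - t)
        if npv > 0:
            return t
    return None  # never a good investment under these assumptions

# Ten years of work at $2M/year ending in a $12M payoff (values in $M):
cutoff = earliest_commercial_cutoff(costs=[2.0] * 10, payoff=12.0)
print(f"Funding the remaining work becomes a good investment in year {cutoff}")
# Prints year 6 here; years 0-5 are the "pre-commercial" stretch in this toy model.
```

Shrink the payoff, raise the hurdle rate, or stretch the timeline and the cutoff slides later, which is part of why reasonable people land in such different places.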
Of course, technology research doesn’t create a fixed amount of value that then gets divvied up between the public and individuals or organizations. A technology’s impact can vary based on where in its development investors and creators start expecting it to be a good investment and start working to capture value by starting companies, patenting, and selling products. Arguably, Google would have had far less impact if Larry Page and Sergey Brin had just open-sourced the algorithm instead of building a VC-backed startup around it; at the same time, transistors would arguably have had far less impact if AT&T had imposed draconian licensing terms on them or not licensed them at all.
Frustratingly, there is no straightforward way to find the “correct” line between pre-commercial and commercial technology work in any given situation. It’s both wrong to say “everything should be open-source” and “any valuable technology work should be able to both make the world awesome and its inventors obscenely rich at the same time.” There is no easy answer to “When should an idea that smells like research be a startup?” or the related question, “When should a large company invest in technology research?”
In other words, the line between pre-commercial and commercial work is fuzzy and context-dependent. It depends heavily both on factors intrinsic to a technology and extrinsic factors like regulations, transaction costs, markets, and even (especially?) culture.
These factors change over time. In the late 19th century, George Eastman could start making camera components during nights and weekends in his mother’s kitchen with the equivalent of $95k (in 2024 dollars) of cash borrowed from a wealthy friend. He sold those components and used the revenue to build more, expand the business, eventually go full-time, and invent roll film. Today, some combination of overhead and development costs, along with expectations around uncertainty, scale, timelines, polish, returns, and other factors, means that it can take hundreds of millions or billions of dollars to bring a product to market.
In the early 20th century, the stock market was basically gambling – most investors broke even or lost money; in the late 20th century, the stock market reliably returned more than 10% annually. In the early 20th century, vehicles regularly broke down (or exploded!); now a plane crash is an international incident. Many of these changes are good, but they add up to a world where more technology work falls on the pre-commercial side of the pre-commercial/commercial divide.
This story skips a lot of details, counterarguments, and open questions. One should certainly ask “what would it take to make more technology work commercially viable?” There are likely new organizational and financial structures, friction reductions, and cultural changes that could shift more of that work across the line. But as it stands, pre-commercial technology work is more important than ever. At the same time, it is increasingly dominated by a single institution.
Academia has developed a monopoly on pre- and non-commercial research
In the 21st century, it’s almost impossible to avoid interfacing with academia if you have an ambitious pre-commercial research idea. This goes for both individuals and organizations: if you want to do ambitious pre-commercial research work, academia is the path of least resistance; if you want to fund or coordinate pre-commercial technology research, the dominant model is to fund a lab, build a building, or start a new center or institute associated with a university.
A quick note on definitions: academia is not just universities. Modern academia is a nebulous institution characterized by labs where PIs are judged on papers and much of the labor is done by grad students.
You can think of academia as asserting its monopoly in four major areas (ordered in increasing levels of abstraction): physical space, funding, mindsets+skills+incentives, and how we structure research itself.
Physical space
If you have a project that requires specialized equipment or even just lab space, the dominant option is to use an academic lab. Companies with lab space rarely let anyone but their employees use it. You could get hired and try to start a project there, but companies operate on increasingly short timescales and with a tight focus that precludes a lot of pre-commercial research.
There are commercial lab spaces, but they are prohibitively expensive without a budget that is hard to come by without venture funding or revenue. (That is, hard to come by in the context of pre-commercial research!) Furthermore, most grants (especially from the government) preclude working in a rented lab because they require you to prove that you have an established lab space ready to go. The way to prove that is a letter from an existing organization, and the only existing organizations that will sign that letter are universities.
Funding
Many research grants are explicitly for people associated with universities and have earmarks for funding graduate students. This $40 million funding call for new ways to create materials is one of many examples of both government and nonprofit funding that is explicitly only for professors or institutions of higher learning.
Two main reasonable-at-the-time factors created this situation:
Government research funding is explicitly dual-purpose: it is meant both to support the actual research and to train the next generation of technical talent. This combination made more sense before universities took on the role of “technology-producing engine.”
Many funders don’t have the bandwidth to evaluate whether an individual or organization is qualified so they fall back on heuristics like “is this person a tenure-track professor at an accredited institution?”
As a result, many funding pathways are inaccessible to institutions that are neither academic nor profit-maximizing. Restricting those institutions’ access to funding further solidifies academia’s monopoly.
Mindsets, Skills, and Incentives
Most deep technical training still happens at universities. But a PhD program doesn’t just build technical knowledge and hands-on skills; it inducts you into the academic mindset. It’s certainly possible to do a PhD without adopting this mindset, but it’s an uphill battle. Our environment shapes our thoughts! Institutions shape how individuals interact! So we end up with a situation where nearly everybody with deep technical training has been marinating in the academic mindset for years.
The dominance of the academic mindset in research has many downstream effects: some are pedestrian, like the prevalence of horrific styles in technical writing; others are profound, like prioritizing novelty as a metric for an idea’s quality.
The academic system also shapes the types of skills that these deep technical experts develop; among other things, it makes them very good at discovery and invention, but not necessarily at scaling or implementation.
The academic mindset warps incentives far beyond universities. Researchers in many non-university organizations still play the academic incentives game both because they were all trained in academia and because “tenured professor at a top research university” is still the highest-status position in the research world. As a result, academic incentives still warp the work that people do at national labs, nonprofit research organizations, and even corporate research labs.
The structure of research
At the most abstract level, academia has a monopoly on how society thinks about structuring research: specifically, the assumption that the core unit of research is a principal investigator who is primarily responsible for coming up with research ideas and who runs a lab staffed by anywhere from a few to a few dozen other people. This model is implicitly baked into everything from how we talk about research agendas, to how we deploy money, to how researchers think about their careers.
Academia’s monopoly on the people actually doing benchwork means that it’s very hard to shift incentives in the research ecosystem; most interventions still involve work done by people in the academic system. New institutes are housed at universities or have PIs who are also professors; new grant schemes, prizes or even funding agencies ultimately fund academics; the people joining new fields or using new ways of publishing are ultimately still embedded in academia.
In the not-so-distant past, non- and pre-commercial research happened across a number of different institutions with different sets of incentives: corporate research labs, small research organizations like BBN, inventors in their basements, gentleman scholars, and others.
It’s funny that at the very moment pre-commercial research has become more important than ever, we’ve ended up in a world where it is dominated by a single institution that, far from specializing in this critical role, is a massive agglomeration of roles that have been accreting since the Middle Ages. The university’s monopoly on pre-commercial research is part of a much bigger story that you should care about even if you think research is doing fine. In order to actually unpack it, we need to go back to the 12th century to understand where the university came from and how it acquired the massive bundle of roles it has today.
[TO BE CONTINUED]