This essay is a personal history of the $60+ million I allocated to metascience starting in 2012 while working for the Arnold Foundation (now Arnold Ventures).
Brilliant piece. I've never, as a working scientist, understood the whole resistance to replication 'mindset' (a phrase I use colloquially 😉, given recent controversies). The greatest thrill you can have is that someone else gets the same result as you did: it means you have either discovered something about the world that is independent of you, or you've both been confounded by the same artifact. One hopes for the former, but it can be the latter. It also means you've potentially discovered something that others regard as worth probing further. Amazing and wonderful venture: congratulations.
The most important philanthropy of the 21st century - if we can't trust science, everything else will collapse. And existing peer review, prestige systems, and funding agencies were not addressing these issues.
Thanks for this excellent post! Some miscellaneous thoughts:
1. The pyramid is great, and I plan to annoy management at my institute (and maybe even at funding agencies!) with it for many years to come. Inverting the pyramid is an epidemic among bureaucrats.
2. I would be curious to see your perspective on the Nutrition Science Initiative and the struggles of nutrition science more broadly. I'm usually quite critical of nutrition science, the whole field of which seems to be a textbook case of trying to examine small, confounded effects with inadequate sample sizes and insufficient controls.
3. As a practicing scientist, this line from the Open Science Framework:
> allowing open, continuous peer review
might be a contender for my new favorite definition of hell. Peer review is often worthless (see e.g. https://www.experimental-history.com/p/the-rise-and-fall-of-peer-review); I can personally think of a few cases where a referee has improved one of my papers, but this is so rare I can count the cases on one hand. I'm not sure what continuous peer review is supposed to accomplish besides making everyone hate each other as well as their own research (projects need to end eventually). If the actual goal is for scientists to give worthwhile feedback on ideas even after they've been published, I think that should happen through subsequent publications, i.e. via citations. Technology could help assign valence to citations, since a paper can also be cited by someone rebutting it. (We might also need a category of neutral citations, i.e. the usually irrelevant "related research" citations in the introduction that are included for political reasons.) Otherwise, I don't really see the need for a new system here; we should just make better, more effective use of the system we already have. (See also the freak takes article on how journal publication used to be just one step in evaluating an idea: https://www.freaktakes.com/p/feynman-on-journal-reviews-conferences).
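The citation-valence idea above could be sketched as a tiny data model. This is purely illustrative: the `Valence` categories, DOIs, and the `valence_summary` helper are all hypothetical names invented for this sketch, not part of any existing citation database or API.

```python
from dataclasses import dataclass
from enum import Enum

class Valence(Enum):
    """Hypothetical valence categories for a citation."""
    SUPPORTING = "supporting"  # cites the paper as corroborating evidence
    REBUTTING = "rebutting"    # cites the paper in order to dispute it
    NEUTRAL = "neutral"        # background / "related work" mention

@dataclass
class Citation:
    citing_doi: str
    cited_doi: str
    valence: Valence

def valence_summary(citations: list[Citation], cited_doi: str) -> dict[Valence, int]:
    """Count how often a given paper is cited in each valence category."""
    counts = {v: 0 for v in Valence}
    for c in citations:
        if c.cited_doi == cited_doi:
            counts[c.valence] += 1
    return counts

# Toy example: one paper cited three times with different valences.
citations = [
    Citation("10.1000/a", "10.1000/x", Valence.SUPPORTING),
    Citation("10.1000/b", "10.1000/x", Valence.REBUTTING),
    Citation("10.1000/c", "10.1000/x", Valence.NEUTRAL),
]
summary = valence_summary(citations, "10.1000/x")
```

A rebuttal-heavy profile would then be visible at a glance, rather than hidden inside a raw citation count.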
This article is in total denial of the ugly side of the Open Science Movement. I was working primarily in health psychology/behavioral health and witnessed the savaging of those of us who made early requests for release of data. Large behavioral trials with long follow-ups cannot wait for replication. Armed with Koch money, which Stuart may have shared, Brian Nosek engaged in bullying those of us who saw him as needlessly bureaucratizing research, when speedy removal of bad science from the literature would have been much more efficient and effective.
Brian did not support release of the PACE trial data, even though the investigators had promised to make it available as a condition of publishing in PLOS ONE.
The erasure of the PACE trial from Stuart's account seriously damages the credibility of his history.
Pre-registration was already a failure in behavioral medicine before Nosek promoted it in psychology.
What a run!
Maybe for a future post... how would you train or advise a group of new metascience VCs? What advice would you give?
Good question! Let's catch up by phone soon?
See "Replication initiatives will not salvage the trustworthiness of psychology" https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-016-0134-3