Two exciting opportunities in the works:
First, the National Institute of Neurological Disorders and Stroke (NINDS) released a “Request for Information on Potential Solutions for Reducing Publication Bias Against Null Studies.” Responses are due on April 1, 2024.
As we’ve said any number of times, this is one of the central issues for improving science. When scientists feel that they can’t publish null results (or can’t get funded in the future without positive results), they face an existential incentive to:
1) study marginal, incremental questions where the answer is all-but-already-known, and/or
2) fudge the results (whether through p-hacking or outright fraud).
On the flip side, if scientists are more free to report “failures” and null results, they will be more likely to tackle larger and more innovative questions (where the answer is uncertain), and to tell the truth about their results.
In other words, prioritizing null results is probably the single biggest thing that could improve both reproducibility and innovation in one fell swoop.
NIH is currently seeking advice and comments on:
The most significant barriers to addressing publication bias.
How dissemination of null studies could be incentivized.
Potential solutions to publication bias across biomedical research fields.
My quick and easy answer: Ask each NIH program officer and study section to report back on how many “successes” they funded. Penalize any program officer, study section, or even grantee who reports a success rate that is too high (above 75% or perhaps 50%).
That is, rather than incentivizing people to fund/study incremental questions and lie about the results, incentivize them to do more innovative work and tell the truth about it.
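The proposed check could be sketched as a simple threshold rule. (This is purely illustrative: the officer names, report format, and the 75% cutoff are assumptions, not anything NIH has adopted.)

```python
# Hypothetical sketch: flag any program officer (or study section, or
# grantee) whose reported "success" rate is suspiciously high.

def flag_high_success_rates(reports, threshold=0.75):
    """reports maps a name to (reported successes, total projects funded).

    Returns the names whose success rate exceeds the threshold,
    along with that rate.
    """
    flagged = {}
    for name, (successes, total) in reports.items():
        rate = successes / total
        if rate > threshold:
            flagged[name] = rate
    return flagged

# Illustrative data: a 90% success rate suggests incremental questions
# (or fudged results); 40% is plausible for genuinely risky science.
reports = {
    "Officer A": (9, 10),
    "Officer B": (4, 10),
}
print(flag_high_success_rates(reports))  # {'Officer A': 0.9}
```

The design choice doing the work here is that the penalty attaches to *too much* reported success, which flips the usual incentive: honesty about failures becomes the safe option.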
***
Second, NIH’s Office of Data Science Strategy has released a “Request for Information Inviting Comments on the NIH Strategic Plan for Data Science 2023-2028.” Comments are due March 15, 2024.
The draft strategic plan seems very thoughtful and thorough. It highlights the many ways in which NIH wants to generate more data that is FAIR (findable, accessible, interoperable, and reusable), to find ways to encourage biomedical data repositories, to use data from electronic health records, to promote innovative AI use cases that “reduce bias and risks,” and to explore new technology across biomedical research.
It’s good to see that NIH is engaging both with generalist repositories (such as the Open Science Framework) and with specialized data standards (such as SNOMED, LOINC, and FHIR). NIH’s work here deserves applause.
At the same time, not all data, and not all data-related activities, are equal. It would be great to have a better sense of prioritization: of the thousand things NIH could do with biomedical data, which three would be most likely to improve human health? Conversely, which activities might be a lot of sound and fury with little actual benefit? And most importantly, how are we measuring results?
As of now, the “accomplishments” (see p. 37) are defined in terms of petabytes shared, number of funding opportunities, number of partnerships, number of awards, number of fellows, number of code-a-thons, etc., etc. These are all useful to track as intermediate outputs . . . but they might have no ultimate impact on the advancement of science or the improvement of human health.
That’s what the open science movement needs more than anything: a way to show that all of the intermediate outputs actually lead to downstream improvements in scientific advancement and human health.
Anyway, the NIH is doing great work here. Send in your comments.
A “strategic plan” means that NIH consciously moves science in its preferred direction. That’s different from simply reviewing grant applications and supporting those that seem promising. The latter allows for a wider range of study; the former, by design, narrows it. Which should we expect to produce better results?