Two promising items in NIH data news recently:
As described in this post from my friends Mike Lauer and Tara Schwetz at NIH, the NIH is creating a “Science of Science Scholars Program Pilot.” What does this mean, you might ask? The NIH is planning to do more to offer scholars access to internal administrative data, so that they can answer questions such as:
How can NIH assess the economic impact across its portfolios, including contributions to treatments and interventions?
What methods can better predict and identify scientific opportunities?
How can NIH determine the effectiveness of NIH-wide policies such as the Next Generation Researchers Initiative to support a sustainable biomedical workforce?
What measures can capture impactful scientific strategies, and how can these be scaled to support more breakthrough research?
How can NIH leverage its administrative data and publication data to assess the rigor of NIH-funded research and/or evaluate policies that aim to enhance the rigor and reproducibility of NIH-funded research?
This is all great news! That said, two caveats on my part:
First, as I was told by someone at NIH, the program is starting with only two scholars, although it may expand later. Off the top of my head I can name many more than two scholars who would be eager to get access to NIH’s internal data, so this “pilot” program should indeed be rapidly expanded.
Second, I have heard more than once that, when NIH has allowed access to internal data in the past, it reserved the right to veto publications whose results it didn’t like, and it has in fact exercised that right on multiple occasions.
This . . . isn’t ideal. Agencies should not be allowed to veto independent evaluations except in (rare) cases of gross incompetence. There’s no point in evaluating government agencies and programs if the only evaluations that see the light of day are the favorable ones. [Same goes for science in general, which is why publication bias is such an issue.]
Evaluating individual scientists by metrics like the h-index (the largest number h such that you’ve published h articles that have each been cited at least h times) is inherently fraught. None of the metrics I’ve seen to date seems robust to gaming, nor does any of them seem to measure what we would actually want from scientists.
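For concreteness, here’s a minimal sketch of the computation (the definition is standard; the code and the citation counts are just an illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
    return h

# Ten papers with these citation counts give h = 4:
# exactly four papers have at least 4 citations each.
print(h_index([25, 8, 5, 4, 3, 3, 2, 1, 0, 0]))  # prints 4
```

Note how little it takes to move this number: a few strategic citations to the papers sitting just below the threshold bump h up by one.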
That said, NIH (led by the National Eye Institute) is trying to create a new metric to evaluate how much someone has contributed to data sharing. The idea is tentatively called the S-index, with “S” standing for “sharing,” and the goal is to reward people for contributing to the broader scientific community rather than just their own CV.
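Nobody knows yet what the S-index will actually look like; defining it is the whole point of the challenge below. But to make the idea concrete, here is one purely hypothetical toy version, a direct analogue of the h-index sketched above applied to shared datasets (the function and its inputs are my invention, not NIH’s):

```python
def s_index_toy(reuse_counts):
    """Purely hypothetical sketch, not NIH's definition: the largest s
    such that s of your shared datasets have each been reused at least
    s times."""
    reuse = sorted(reuse_counts, reverse=True)
    s = 0
    for rank, r in enumerate(reuse, start=1):
        if r >= rank:
            s = rank
    return s
```

Even this toy version surfaces the hard questions any real proposal would have to answer: what counts as a shared dataset, how reuse is measured, and how to keep the metric from being gamed.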
Using its prize authority (thanks, Tom Kalil!), the NIH is offering a total of $1 million in prizes for ideas about how to create, define, and implement such a data-sharing metric:
Challenge Award/Prize Amount(s): The total prize purse for this Challenge is $1 million.
• Phase 1: Up to six winners will be awarded $15,000 each. These winners will be designated as finalists and will be eligible to compete in Phase 2. Finalists are encouraged to use their Phase 1 prize money to travel to Bethesda, MD, for the NIH S-index Innovation Event, where they will showcase their final solutions. Attendance at this event is mandatory to be eligible for Phase 2 prizes. Up to four honorable mentions will be awarded $2,500 each, recognizing their efforts to advance data sharing.
• Phase 2: The remaining prize purse will be awarded to up to three winners based on their final submissions. Awards will be allocated as follows:
◦ First Prize: $500,000
◦ Second Prize: $300,000
◦ Third Prize: $100,000
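For what it’s worth, the arithmetic checks out: up to 6 × $15,000 = $90,000 for Phase 1 finalists, up to 4 × $2,500 = $10,000 for honorable mentions, and $500,000 + $300,000 + $100,000 = $900,000 in Phase 2, which together account for the full $1 million purse.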
You can register for the challenge here.