Discussion about this post

Manjari Narayan:

There is a lot of blind leading the blind in peer review on statistical issues. It is a challenging problem: many quantitatively savvy scientists sometimes advocate bad statistical ideas even though they are generally sound on other quantitative questions in their discipline.

There was a great discussion of the problems with post-hoc power a few years ago, when surgeons insisted that post-hoc power analysis was needed and vehemently defended it.

https://discourse.datamethods.org/t/observed-power-and-other-power-issues/731/13
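
A minimal sketch (not taken from the linked thread, and assuming a one-sided z-test at level alpha) of why "observed power" is uninformative: it is a deterministic, monotone function of the p-value, so reporting it adds nothing beyond the p-value itself.

```python
# Hedged illustration: post-hoc "observed power" for a one-sided z-test,
# computed by plugging the observed effect back in as if it were the true effect.
from scipy.stats import norm

def observed_power(p_value, alpha=0.05):
    """Observed power implied by a one-sided z-test p-value."""
    z_obs = norm.isf(p_value)        # z-statistic implied by the p-value
    z_crit = norm.isf(alpha)         # critical value at level alpha
    return norm.sf(z_crit - z_obs)   # P(reject | true effect = observed effect)

for p in (0.01, 0.05, 0.20, 0.50):
    print(f"p = {p:.2f} -> observed power = {observed_power(p):.2f}")
# Note: p = alpha always maps to observed power = 0.50; the mapping is fixed,
# so "low observed power" is just a restatement of "large p-value".
```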

Katrien:

Peer review made sense in the era of print journals, when space limitations required selecting only a fraction of submitted articles. With the advent of the internet, however, its usefulness seems less clear to me. This example is just one of many that highlight its shortcomings.

Feedback and revisions are undoubtedly crucial, but relying on a small sample of five so-called experts to decide publication often results in arbitrary decisions. Moreover, the peer review process significantly slows scientific progress and tends to push researchers toward more conservative, non-controversial ideas and methodologies.

I’d advocate for a centralized platform where all work can be published, allowing the scientific community to assess its value through votes, comments, and ongoing feedback. This approach would foster transparency, encourage diverse perspectives, and accelerate the exchange of ideas.

