6 Comments

There is a lot of the blind leading the blind in peer review on statistical issues. It is a challenging problem because many quantitatively savvy scientists advocate bad statistical ideas even while being genuinely good on other quantitative issues in their discipline.

There was a great discussion of the problems with post-hoc power a few years ago, when surgeons insisted that post-hoc power analysis was needed and vehemently defended it:

https://discourse.datamethods.org/t/observed-power-and-other-power-issues/731/13


Peer review made sense in the era of print journals when space limitations required selecting only a fraction of submitted articles. However, with the advent of the internet, its usefulness seems less clear to me. This example is just one among many that highlights its shortcomings.

Feedback and revisions are undoubtedly crucial, but relying on a small sample of five so-called experts to decide publication often results in arbitrary decisions. Moreover, the peer review process significantly slows scientific progress and tends to push researchers toward more conservative, non-controversial ideas and methodologies.

I’d advocate for a centralized platform where all work can be published, allowing the scientific community to assess its value through votes, comments, and ongoing feedback. This approach would foster transparency, encourage diverse perspectives, and accelerate the exchange of ideas.


Of course, after having this pointed out, they agreed to look at it again.

I hope?


Yeah, well, I once wrote a fable for a collection of fables only to be told that because my fable (I researched what fables actually are... FAIL) did not have character development, it would be a no. The whole point of fables is that there is no character development; the animals are ciphers for human foibles... sheesh. They wanted something they were literally not asking for. Moral of this story: do not give them what they ask for, do not do your own research; find out what frameworks the requesters are assuming without any conscious, intentional inquiry on their part, and give them that. Easy peasy.

Expand full comment

I have a stats PhD (though I admit I've worked mostly in AI/ML since graduating a while ago), but I still don't understand why post-hoc power, while not preferred for clarity of communication, isn't totally feasible. I can easily imagine running the exact same simulations, with the same assumptions I would have made a priori, and using them to calculate statistics that give you an idea of how likely your result is to be a true or false positive/negative. You'd have to be careful and set up the simulation with sound logic about what you can actually claim and what your assumptions about potential sampling distributions mean. And you definitely can't just plug in the empirical estimates of the moments from the study. But it seems false to me that you can't gather any additional information to better understand your result through post-hoc analysis, simulation, and reasoning.
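A minimal sketch of the kind of simulation this commenter seems to have in mind: computing power by simulating trials under an effect size and spread assumed in advance, not the empirical estimates from the completed study. The specific numbers (effect size 5.0, SD 12.0, 40 per arm) are illustrative assumptions, not anything from the discussion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(effect_size, sd, n_per_arm, alpha=0.05, n_sims=10_000):
    """Fraction of simulated two-arm trials that reject H0 at level alpha,
    under an effect size and SD chosen on a priori grounds."""
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect_size, sd, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        rejections += p < alpha
    return rejections / n_sims

# The a priori "clinically meaningful" difference, not the observed one:
print(simulated_power(effect_size=5.0, sd=12.0, n_per_arm=40))
```

The key point the comment makes is preserved here: the inputs are the assumptions one would have defended before seeing the data, which is what distinguishes this from the circular "observed power" calculation the linked thread criticizes.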
