What you did should indeed be basic protocol for anyone who sees results like this -- something needs to change in econ culture. Thank you for the service you've done for our discipline.
I have a general feeling that a suspicious-sounding question ought to be cheerfully accepted by anyone prepared to answer it. In other words, we ought to expect skepticism toward anything we say, and skepticism (even if it ultimately proves unfounded) ought to be valued rather than seen through a social or emotional lens. The burden ought to be on the speaker or writer to back up their claims, and asking people to do so ought to carry no stigma at all.
This seems especially essential in any scientific field, and it's embarrassing to me to discover that an academic field might not typically apply this kind of logic.
On a related note, I read a very disturbing essay on here claiming that the science journal peer review process is often less rigorous than we might suppose it to be. I’ll try to find this article—
Rather than relying on individuals to make accusations of fraud, it would seem better to strengthen the norm that data must be made public. Where there are good reasons for confidentiality, at least provide suitably restricted access to trusted third parties.
Worth observing that outright fraud like this remains exceptionally rare, as far as can be determined. The much bigger problem is still bad statistical practice like p-hacking. Data repositories and pre-registration help here too.
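To see why p-hacking matters so much, here is a minimal stdlib-only simulation of one common form of it: measuring many outcomes under a true null and reporting whichever gives the smallest p-value. The specific numbers (30 subjects per outcome, 20 outcomes, a known-variance z-test) are illustrative assumptions, not taken from any particular study.

```python
import math
import random

def z_test_pvalue(sample):
    """Two-sided p-value for H0: mean = 0, assuming known variance 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_study(n_outcomes, rng, n_subjects=30):
    """One null study: every outcome is pure noise; return all p-values."""
    return [
        z_test_pvalue([rng.gauss(0, 1) for _ in range(n_subjects)])
        for _ in range(n_outcomes)
    ]

def false_positive_rate(n_studies, n_outcomes, seed=0):
    """Fraction of null studies reporting at least one p < 0.05."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_studies)
        if min(run_study(n_outcomes, rng)) < 0.05
    )
    return hits / n_studies

honest = false_positive_rate(2000, n_outcomes=1)   # one pre-registered outcome
hacked = false_positive_rate(2000, n_outcomes=20)  # report the best of 20 outcomes
print(f"honest: {honest:.2f}, p-hacked: {hacked:.2f}")
```

With one pre-registered outcome the false positive rate stays near the nominal 5%; picking the best of 20 outcomes pushes it to roughly 1 - 0.95^20, around 64%, with no fraud anywhere. That is why pre-registration (fixing the outcome in advance) closes this particular loophole.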
I largely agree, although I suspect that fraud is much more common than we know. The fraud cases we know of are, in retrospect, very clumsy. What about someone who secretly switches some cases from the treatment to the control group (or vice versa) to make a clinical trial come out the "right" way? We'd probably never detect that.
I had a brief career as a scientific fraudster in high school. I was hopeless at experiments so I just worked out the correct results, added what I hoped was a suitable margin of error and handed them in.
Replication is the only real check. If a result is exciting (like cold fusion), people will try to replicate it, and if it is fraudulent or the result of bad statistical practice, replication will fail.
I suspect that the vast bulk of undetected fraud is people pretending to do work that is just publishable but not interesting enough to get closely examined or replicated. That's more or less harmless - it just means some lazy people get jobs they don't deserve.
Not sure when this social norm of letting stuff one doesn't trust just go, for fear of being rude, began, but it is insidious. It is also fed by the concern that there is some vast anti-science movement just waiting to play gotcha. I have such fond memories of my postdoc years because the crusty old professors would really lean into the grad students and postdocs. It was not bullying. It was: if there is something wrong here (honest mistakes, inadequate controls, wrong data-analysis tools), better it be found among the family than before strangers. It was the deepest display of investment and caring. Letting things slide when you have concerns strikes me as a display of nihilism. It is a sign that one just doesn't care enough about what is true.
How much of this is a lack of quality peer review? There have been a few examples in STEM fields where issues popped up about a paper, and those issues should have been seen during peer review. Some people on a blog shouldn't be the first ones to raise issues with the data in a paper, for example. Sadly, it happens.
Too bloody right, exactly