> Research output is reliable where it is actively relied on by engineers -- and not elsewhere.
I'd word it this way: if the person producing the output is not responsible for it really working, it almost certainly won't. Even innocently this will be the case with anything complex: people get things backwards, miss a scale factor, etc. Finding that last bug can take more work than the rest of the project combined, so it's much easier to publish what appears to work and move on. That's even more true when there are direct career benefits to "hacking" the system over competing honestly, especially considering the internet is awash with people trying to cheat their way through every other competition (exam questions, interview questions, etc.).
> if the person producing the output is not responsible for it really working, it almost certainly won't. Even innocently this will be the case with anything complex
Indeed, there's a great example in the article itself, in a totally unrelated area:
> I felt these journals generally did their best, and the slowness of the process likely comes from the bureaucracy of the process and the inexperience editors have with that process.
In other words, these reasonable journals weren't able to use their retraction process even though they wanted to, because the process never gets used and therefore isn't in a usable state.
> these reasonable journals weren't able to use their retraction process even though they wanted to, because the process never gets used and therefore isn't in a usable state.
Actually, this made me think about a journal deliberately running spurious papers, with a challenge to the readership to identify which paper was fake. If the system worked, that would cause every paper published in the journal to be investigated adversarially.
The obvious problem with the idea seems to be that so much of the process is voluntary; people might be unwilling to submit papers to that journal.
The government pays bounties to whistleblowers who expose grant fraud under the False Claims Act, along with big fines for the perpetrators. I'm not sure how far that extends to research fraud itself, but it certainly seems like something they should do. Perhaps they might even extend it to publishing results that can't be reproduced.