Non-Standard Errors
Menkveld, Albert; Dreber, Anna; Holzmeister, Felix; Huber, Juergen; Johannesson, Magnus; Kirchler, Michael; Neusüß, Sebastian; Razen, Michael; Weitzel, Utz; Abad-Díaz, David; Abudy, Menachem; Adrian, Tobias; Aït-Sahalia, Yacine; Akmansoy, Olivier; Alcock, Jamie T.; Alexeev, Vitali; Aloosh, Arash; Amato, Livia; Amaya, Diego; Angel, James J.; Avetikian, Alejandro T.; Bach, Amadeus; Baidoo, Edwin; Bakalli, Gaetan; Bao, Li; Bardon, Andrea; Bashchenko, Oksana; Bindra, Parampreet C.; Bjønnes, Geir Høidal; Black, Jeffrey R.; Black, Bernard S.; Bogoev, Dimitar; Correa, Santiago Bohorquez; Bondarenko, Oleg; Bos, Charles S.; Bosch-Rosa, Ciril; Bouri, Elie; Brownlees, Christian; Calamia, Anna; Cao, Viet Nga; Capelle-Blancard, Gunther; Romero, Laura M. Capera; Caporin, Massimiliano; Carrion, Allen; Caskurlu, Tolga; Chakrabarty, Bidisha; Chen, Jian; Chernov, Mikhail; Cheung, William; ter Ellen, Saskia; Ødegaard, Bernt Arne; Longarela, Iñaki Rodríguez; Wika, Hans C.; Yuferova, Darya
Peer reviewed, Journal article
Published version

Date
2024
Original version
10.1111/jofi.13337

Abstract
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty—nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
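The distinction drawn in the abstract (DGP-driven standard errors versus EGP-driven nonstandard errors) can be made concrete with a small simulation. The sketch below is purely illustrative and is not the paper's code or methodology: it assumes the nonstandard error can be summarized as the dispersion of point estimates across hypothetical teams that analyze the same dataset but make different defensible analysis choices, stylized here as different outlier-trimming rules; the function and variable names are invented for this example.

```python
# Toy illustration (not the paper's code): contrast a standard error
# (sampling uncertainty within one analysis) with a nonstandard error
# (dispersion of point estimates across independent research teams).
import numpy as np

rng = np.random.default_rng(0)

# One shared dataset, as in the study: every team analyzes the same sample.
data = rng.normal(loc=1.0, scale=2.0, size=500)

# Standard error of the mean: uncertainty arising from the DGP alone.
standard_error = data.std(ddof=1) / np.sqrt(len(data))

# Stylized EGP variation: each "team" applies a different (hypothetical)
# trimming rule before computing the same target estimate.
def team_estimate(sample, trim):
    lo, hi = np.quantile(sample, [trim, 1 - trim])
    kept = sample[(sample >= lo) & (sample <= hi)]
    return kept.mean()

trims = [0.0, 0.01, 0.02, 0.05, 0.10]  # one analysis choice per team
estimates = [team_estimate(data, t) for t in trims]

# Nonstandard error: dispersion of point estimates across teams, even
# though the data and the hypothesis are identical for all of them.
non_standard_error = np.std(estimates, ddof=1)

print(f"standard error:     {standard_error:.4f}")
print(f"nonstandard error:  {non_standard_error:.4f}")
```

In this toy setup the cross-team dispersion is a separate source of uncertainty that the within-analysis standard error does not capture, which is the sense in which the abstract says EGP variation "adds" uncertainty.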
In statistics, samples are drawn from a population in a data‐generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence‐generating process (EGP). We claim that EGP variation across researchers adds uncertainty—nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher rated research. Adding peer‐review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.