Academic Journal

Initial evidence of research quality of registered reports compared with the standard publishing model.

Bibliographic Details
Title: Initial evidence of research quality of registered reports compared with the standard publishing model.
Authors: Soderberg CK; Center for Open Science, Charlottesville, VA, USA., Errington TM; Center for Open Science, Charlottesville, VA, USA., Schiavone SR; Department of Psychology, University of California, Davis, Davis, CA, USA., Bottesini J; Department of Psychology, University of California, Davis, Davis, CA, USA., Thorn FS; School of Psychological Sciences, University of Melbourne, Melbourne, Victoria, Australia., Vazire S; Department of Psychology, University of California, Davis, Davis, CA, USA.; School of Psychological Sciences, University of Melbourne, Melbourne, Victoria, Australia., Esterling KM; Department of Political Science, University of California, Riverside, Riverside, CA, USA., Nosek BA; Center for Open Science, Charlottesville, VA, USA. nosek@cos.io.; Department of Psychology, University of Virginia, Charlottesville, VA, USA. nosek@cos.io.
Source: Nature human behaviour [Nat Hum Behav] 2021 Aug; Vol. 5 (8), pp. 990-997. Date of Electronic Publication: 2021 Jun 24.
Publication Type: Journal Article; Observational Study; Research Support, Non-U.S. Gov't; Research Support, U.S. Gov't, Non-P.H.S.
Language: English
Journal Info: Publisher: Springer Nature Publishing; Country of Publication: England; NLM ID: 101697750; Publication Model: Print-Electronic; Cited Medium: Internet; ISSN: 2397-3374 (Electronic); Linking ISSN: 23973374; NLM ISO Abbreviation: Nat Hum Behav; Subsets: MEDLINE
Imprint: Original Publication: [London] : Springer Nature Publishing, [2017]-
MeSH Terms: Peer Review, Research*; Registries*; Research/*standards; Data Analysis; Humans; Neurosciences; Psychology; Research Design/standards; Research Report/standards
Abstract: In registered reports (RRs), initial peer review and in-principle acceptance occur before knowing the research outcomes. This combats publication bias and distinguishes planned from unplanned research. How RRs could improve the credibility of research findings is straightforward, but there is little empirical evidence. Also, there could be unintended costs such as reducing novelty. Here, 353 researchers peer reviewed a pair of papers from 29 published RRs from psychology and neuroscience and 57 non-RR comparison papers. RRs numerically outperformed comparison papers on all 19 criteria (mean difference 0.46, scale range -4 to +4), with effects ranging from RRs being statistically indistinguishable from comparison papers in novelty (0.13, 95% credible interval [-0.24, 0.49]) and creativity (0.22, [-0.14, 0.58]) to sizeable improvements in rigour of methodology (0.99, [0.62, 1.35]) and analysis (0.97, [0.60, 1.34]) and overall paper quality (0.66, [0.30, 1.02]). RRs could improve research quality while reducing publication bias and ultimately improve the credibility of the published literature.
(© 2021. The Author(s), under exclusive licence to Springer Nature Limited.)
Comments: Comment in: Nat Hum Behav. 2021 Aug;5(8):978-979. (PMID: 34168322)
References: Chambers, C. What’s next for registered reports? Nature 573, 187–189 (2019). (DOI: 10.1038/d41586-019-02674-6)
Chambers, C. The registered reports revolution. Lessons in cultural reform. Significance 16, 23–27 (2019). (DOI: 10.1111/j.1740-9713.2019.01299.x)
Nosek, B. A. & Lakens, D. Registered reports: a method to increase the credibility of published results. Soc. Psychol. 45, 137–141 (2014). (DOI: 10.1027/1864-9335/a000192)
Nosek, B. A., Spies, J. R. & Motyl, M. Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspect. Psychol. Sci. 7, 615–631 (2012). (DOI: 10.1177/1745691612459058)
Smith, R. Peer review: a flawed process at the heart of science and journals. J. R. Soc. Med. 99, 178–182 (2006). (DOI: 10.1177/014107680609900414)
Fanelli, D. Negative results are disappearing from most disciplines and countries. Scientometrics 90, 891–904 (2012). (DOI: 10.1007/s11192-011-0494-7)
Fanelli, D. ‘Positive’ results increase down the hierarchy of the sciences. PLoS ONE 5, e10068 (2010). (DOI: 10.1371/journal.pone.0010068)
Franco, A., Malhotra, N. & Simonovits, G. Publication bias in the social sciences: unlocking the file drawer. Science 345, 1502–1505 (2014). (DOI: 10.1126/science.1255484)
Dickersin, K. The existence of publication bias and risk factors for its occurrence. JAMA 263, 1385–1389 (1990). (DOI: 10.1001/jama.1990.03440100097014)
Mahoney, M. J. Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cogn. Ther. Res. 1, 161–175 (1977). (DOI: 10.1007/BF01173636)
Greenwald, A. G. Consequences of prejudice against the null hypothesis. Psychol. Bull. 82, 1–20 (1975). (DOI: 10.1037/h0076157)
Sterling, T. D. Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. J. Am. Stat. Assoc. 54, 30–34 (1959).
Makel, M. C., Plucker, J. A. & Hegarty, B. Replications in psychology research: how often do they really occur? Perspect. Psychol. Sci. 7, 537–542 (2012). (DOI: 10.1177/1745691612460688)
Schmidt, S. Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Rev. Gen. Psychol. 13, 90–100 (2009). (DOI: 10.1037/a0015108)
Makel, M. C. & Plucker, J. A. Facts are more important than novelty. Educ. Res. 43, 304–316 (2014). (DOI: 10.3102/0013189X14545513)
Schimmack, U. The ironic effect of significant results on the credibility of multiple-study articles. Psychol. Methods 17, 551–566 (2012). (DOI: 10.1037/a0029487)
Giner-Sorolla, R. Science or art? How aesthetic standards grease the way through the publication bottleneck but undermine science. Perspect. Psychol. Sci. 7, 562–571 (2012). (DOI: 10.1177/1745691612457576)
Begley, C. G. & Ellis, L. M. Raise standards for preclinical cancer research. Nature 483, 531–533 (2012). (DOI: 10.1038/483531a)
Prinz, F., Schlange, T. & Asadullah, K. Believe it or not: how much can we rely on published data on potential drug targets? Nat. Rev. Drug Discov. 10, 712 (2011). (DOI: 10.1038/nrd3439-c1)
Camerer, C. F. et al. Evaluating replicability of laboratory experiments in economics. Science 351, 1433–1436 (2016). (DOI: 10.1126/science.aaf0918)
Camerer, C. F. et al. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nat. Hum. Behav. 2, 637–644 (2018). (DOI: 10.1038/s41562-018-0399-z)
Klein, R. A. et al. Many Labs 2: investigating variation in replicability across samples and settings. Adv. Methods Pract. Psychol. Sci. 1, 443–490 (2018). (DOI: 10.1177/2515245918810225)
Klein, R. A. et al. Investigating variation in replicability: a ‘many labs’ replication project. Soc. Psychol. 45, 142–152 (2014). (DOI: 10.1027/1864-9335/a000178)
Open Science Collaboration. Estimating the reproducibility of psychological science. Science 349, aac4716 (2015).
Ebersole, C. R. et al. Many Labs 3: evaluating participant pool quality across the academic semester via replication. J. Exp. Soc. Psychol. 67, 68–82 (2016). (DOI: 10.1016/j.jesp.2015.10.012)
Allen, C. & Mehler, D. M. A. Open science challenges, benefits and tips in early career and beyond. PLoS Biol. 17, e3000246 (2019). (DOI: 10.1371/journal.pbio.3000246)
Scheel, A. M., Schijen, M. & Lakens, D. An excess of positive results: comparing the standard psychology literature with registered reports. Preprint at PsyArXiv https://osf.io/p6e9c (2020).
Hummer, L. T., Singleton Thorn, F., Nosek, B. A. & Errington, T. M. Evaluating registered reports: a naturalistic comparative study of article impact. Preprint at OSF https://osf.io/5y8w7 (2017).
Cropley, A. Research as artisanship versus research as generation of novelty: the march to nowhere. Creat. Res. J. 30, 323–328 (2018).
Baumeister, R. F. Charting the future of social psychology on stormy seas: winners, losers, and recommendations. J. Exp. Soc. Psychol. 66, 153–158 (2016). (DOI: 10.1016/j.jesp.2016.02.003)
Nosek, B. A. & Errington, T. M. The best time to argue about what a replication means? Before you do it. Nature 583, 518–520 (2020). (DOI: 10.1038/d41586-020-02142-6)
Gelman, A., Hill, J. & Yajima, M. Why we (usually) don’t have to worry about multiple comparisons. J. Res. Educ. Eff. 5, 189–211 (2012).
Epskamp, S. & Nuijten, M. B. statcheck: extract statistics from articles and recompute P values. R package version 1.3.1 (2018).
Hardwicke, T. E. & Ioannidis, J. P. A. Mapping the universe of registered reports. Nat. Hum. Behav. 2, 793–796 (2018). (DOI: 10.1038/s41562-018-0444-y)
Chambers, C. D. & Mellor, D. T. Protocol transparency is vital for registered reports. Nat. Hum. Behav. 2, 791–792 (2018). (DOI: 10.1038/s41562-018-0449-6)
John, L. K., Loewenstein, G. & Prelec, D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 23, 524–532 (2012). (DOI: 10.1177/0956797611430953)
Simmons, J. P., Nelson, L. D. & Simonsohn, U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22, 1359–1366 (2011). (DOI: 10.1177/0956797611417632)
Nuijten, M. B., van Assen, M. A. L. M., Hartgerink, C. H. J., Epskamp, S. & Wicherts, J. M. The validity of the tool ‘statcheck’ in discovering statistical reporting inconsistencies. Preprint at PsyArXiv https://osf.io/tcxaj (2017).
Stan Development Team. Stan Modeling Language Users Guide and Reference Manual (2020).
Entry Dates: Date Created: 20210625; Date Completed: 20210915; Latest Revision: 20220219
Update Code: 20221213
DOI: 10.1038/s41562-021-01142-4
PMID: 34168323
Database: MEDLINE