Quality is a complicated concept and, in the context of academic work, very challenging to measure. To start with, it should be clarified what object is being considered, which could range from data to peer review. In most such cases, quality cannot be defined on the basis of easily measurable data and instead requires some form of manual assessment.
For the most traditional academic output, the scholarly publication, such a manual assessment is typically provided through peer review [@bornmann_scientific_2011]. Peer review is much discussed in science studies: there are concerns about its reliability [@cole_chance_1981] and its biases [@lee_bias_2013], but also evidence of its positive effects [@goodman_manuscript_1994] and complementarities [@goyal_causal_2024].
Quality is typically considered a multidimensional concept [@aksnes2019], composed of various other concepts. For instance, in peer review of manuscripts submitted to journals, it is common to assess both the novelty and the rigour of a manuscript. Yet even if quality is conceived as multidimensional, in practice it is sometimes still treated as unidimensional. For example, in the [UK REF](https://www.ref.ac.uk/), research articles are assigned a number of stars, ranging from "recognised nationally" (1 star) to "world-leading" (4 stars).
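
To make the contrast concrete, the sketch below shows how multidimensional sub-scores might be collapsed into a single REF-style star rating. The dimensions, equal weights, and cut-offs are purely hypothetical and are not taken from the REF; the point is only to illustrate the reduction from many dimensions to one.

```python
# Purely hypothetical sketch: collapsing a multidimensional quality assessment
# (e.g. novelty and rigour scored separately) into a single REF-style star rating.
# The dimensions, equal weights, and cut-offs below are invented for illustration.
def star_rating(novelty: float, rigour: float, significance: float) -> int:
    """Collapse three sub-scores (each on a 0-10 scale) into a 1-4 star rating."""
    overall = (novelty + rigour + significance) / 3  # equal weights, by assumption
    for cutoff, stars in [(8.5, 4), (7.0, 3), (5.0, 2)]:  # hypothetical cut-offs
        if overall >= cutoff:
            return stars
    return 1

print(star_rating(novelty=9, rigour=8, significance=9))  # -> 4 ("world-leading")
print(star_rating(novelty=5, rigour=6, significance=4))  # -> 2
```

Whatever the weights and thresholds, any such mapping discards information about the individual dimensions, which is precisely the tension between a multidimensional concept of quality and a unidimensional rating.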
In the context of exercises such as the UK REF, there have also been discussions about the possibility of using citations as a proxy for quality. There are indeed substantial correlations between peer review outcomes and citations, but they depend on the level of aggregation: at the level of individual papers the correlation is typically low, whereas at higher levels of aggregation, such as the institutional level, the correlations are substantially higher [@traag_metrics_2023]. Overall, as summarised in the influential "Metric Tide" report [@wilsdon_metric_2015, viii], "Metrics should support, not supplant, expert judgement", and this is particularly relevant at the level of individual papers.
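
Why aggregation matters can be illustrated with a small simulation. The sketch below uses entirely made-up numbers and is not the analysis of the cited study; it only shows how averaging noisy paper-level signals within institutions raises the correlation between peer review scores and citations.

```python
# Illustrative simulation only: paper-level peer review scores and citations are
# both noisy reflections of a latent "quality", so their paper-level correlation
# is attenuated, while institutional averages correlate much more strongly.
import numpy as np

rng = np.random.default_rng(42)
n_institutions = 100
papers_per_institution = 50

# Latent quality per paper, with institutions differing in their mean level.
institution_effect = rng.normal(0, 1, size=n_institutions)
quality = institution_effect.repeat(papers_per_institution) + rng.normal(
    0, 1, size=n_institutions * papers_per_institution
)

# Peer review and citations each add independent noise to the latent quality.
review_score = quality + rng.normal(0, 2, size=quality.size)
citations = quality + rng.normal(0, 2, size=quality.size)

def correlation(x, y):
    return np.corrcoef(x, y)[0, 1]

# Paper level: correlation is attenuated by the noise in both signals.
print("paper level:", round(correlation(review_score, citations), 2))

# Institutional level: averaging papers cancels much of the noise.
inst = np.arange(n_institutions).repeat(papers_per_institution)
mean_review = np.bincount(inst, weights=review_score) / papers_per_institution
mean_citations = np.bincount(inst, weights=citations) / papers_per_institution
print("institutional level:", round(correlation(mean_review, mean_citations), 2))
```

In this toy setup the paper-level correlation comes out around 0.3 while the institutional-level correlation exceeds 0.9, purely because averaging reduces the independent noise in both signals.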
Additions to `references.bib`:
@article{bornmann_scientific_2011,
  title = {Scientific peer review},
  author = {Bornmann, Lutz},
  journal = {Annual Review of Information Science and Technology},
  volume = {45},
  number = {1},
  pages = {197--245},
  year = {2011},
  issn = {0066-4200},
  doi = {10.1002/aris.2011.1440450112}
}
@article{goodman_manuscript_1994,
  title = {Manuscript Quality before and after Peer Review and Editing at Annals of Internal Medicine},
  author = {Goodman, Steven N. and Berlin, Jesse and Fletcher, Suzanne W. and Fletcher, Robert H.},
  journal = {Annals of Internal Medicine},
  volume = {121},
  number = {1},
  pages = {11},
  month = jul,
  year = {1994},
  issn = {0003-4819},
  doi = {10.7326/0003-4819-121-1-199407010-00003}
}
@misc{goyal_causal_2024,
  title = {Causal Effect of Group Diversity on Redundancy and Coverage in Peer-Reviewing},
  author = {Goyal, Navita and Stelmakh, Ivan and Shah, Nihar and Daumé III, Hal},
  publisher = {arXiv},
  month = nov,
  year = {2024},
  doi = {10.48550/arXiv.2411.11437},
  language = {en}
}
@misc{traag_metrics_2023,
  title = {Metrics and peer review agreement at the institutional level},
  author = {Traag, V. A. and Malgarini, M. and Sarlo, S.},
  publisher = {arXiv},
  month = mar,
  year = {2023},
  doi = {10.48550/arXiv.2006.14830}
}
@techreport{wilsdon_metric_2015,
  title = {The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management},
  author = {Wilsdon, James and Allen, Liz and Belfiore, Eleonora and Campbell, Philip and Curry, Stephen and Hill, Steven and Jones, Richard and Kain, Roger and Kerridge, Simon and Thelwall, Mike and Tinkler, Jane and Viney, Ian and Wouters, Paul and Hill, Jude and Johnson, Ben},
  institution = {Higher Education Funding Council for England},
  year = {2015},
  pages = {163},
  doi = {10.13140/RG.2.1.4929.1363}
}