In general, productivity estimates the amount of output relative to the amount of input. In academia, outputs can take various forms, ranging from publications to data, code, or peer reviews. Although productivity is of interest in its own right, it should usually be considered jointly with quality: an emphasis on productivity may simply stimulate more, but lower-quality, outputs. There is some evidence of such an effect [@butler_explaining_2003], although this evidence has also been disputed [@van_den_besselaar_perverse_2017].

Output is usually measured only for a limited set of objects, with scholarly publications being the most typical example. Nonetheless, other relevant outputs should not be ignored, and the limitations of publication-based productivity measures should be kept in mind. Moreover, we should be aware of potential differences between productivity at the individual level and at the collective level. For instance, consider a research group in which one individual is tasked with data quality assurance and code review. That individual may well have a lower productivity in terms of publication output, yet their activities are a boon to the other researchers in the group, whose productivity might greatly increase as a result [@tiokhin_shifting_2023].

In addition, one aspect of productivity that is usually missing is the overall input [@abramo_farewell_2016]. That is, we typically do not know how many people are employed at a certain institution. Even if part of that workforce becomes visible through authorships, not every employee's contribution will: institutions that employ, for example, many research assistants who are not acknowledged as authors may seem to have relatively few authors, while in reality many more people are active there. Moreover, even if we know that a particular author is affiliated with a certain institution, we do not know how much time they spend at that affiliation, which is particularly challenging in the case of multiple affiliations. Going one step further, input could also be specified in financial terms. Unfortunately, none of this data is typically available [@waltman_elephant_2016]. Nonetheless, this is an important limitation to take into account when considering productivity.

### Average number of papers per author
#### Measurement
For a given institution $i$, we can count the number of authors $a_i$ affiliated with institution $i$ and the number of publications $n_i$ published in a given year $y$. The ratio $\frac{n_i}{a_i}$ then gives the average number of papers per author, which is an indicator of productivity. We typically observe an increase in productivity over time, such that in more recent years the number of papers per author is usually larger than in earlier years.

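As a minimal sketch with hypothetical toy data (not tied to any particular data source), this full-counting indicator can be computed as follows:

```python
# Hypothetical toy records: (publication ID, author ID, institution) tuples.
records = [
    ("p1", "alice", "A"),
    ("p1", "bob", "A"),
    ("p2", "alice", "A"),
    ("p3", "carol", "B"),
]

institution = "A"

# n_i: distinct publications with at least one author at institution i
n_i = len({pub for pub, _, inst in records if inst == institution})
# a_i: distinct authors affiliated with institution i
a_i = len({author for _, author, inst in records if inst == institution})

print(n_i / a_i)  # average number of papers per author: 2 / 2 = 1.0
```
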
One relevant aspect when counting the number of papers per author is the increase in collaboration. If the total number of publications in a given year remains the same, but more of them are co-authored, the metric will be higher. Hence, it sometimes makes sense to use "fractional counting" for publications [@waltman2015]. This means that we consider fractions, or weights, for all publications, based on the "fraction" of their authorship. For instance, if a publication has three authors, each has a fraction of 1/3. If two of the authors are affiliated with a single institution, say institution A, that institution will have a weight of 2/3. If, in addition, the third author had two affiliations, one with the aforementioned institution A and one with institution B, we could count that author as belonging to institution A for 1/2, bringing institution A's total to 5/6.

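Written out, institution A's weight for this publication is

$$\frac{1}{3} + \frac{1}{3} + \frac{1}{2} \cdot \frac{1}{3} = \frac{5}{6}.$$
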
If we denote by $n_{ji}$ the fraction with which publication $j$ belongs to institution $i$, we can define the fractionally counted number of publications as $n'_i = \sum_j n_{ji}$. Similarly, if we denote by $a_{ki}$ the fraction with which author $k$ belongs to institution $i$, we can define the fractionally counted number of authors as $a'_i = \sum_k a_{ki}$. Productivity can then simply be specified as $\frac{n'_i}{a'_i}$.

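A minimal sketch of these definitions, using the same hypothetical three-author publication as above; note that it treats every author entry as a distinct author, whereas a real analysis would have to count distinct authors across publications:

```python
from collections import defaultdict

# Hypothetical toy data: one publication with three authors; each author is
# represented by their list of affiliations. The third author has two.
publications = [
    [["A"], ["A"], ["A", "B"]],
]

frac_pubs = defaultdict(float)     # n'_i: fractionally counted publications
frac_authors = defaultdict(float)  # a'_i: fractionally counted authors

for pub in publications:
    for affiliations in pub:
        author_weight = 1 / len(pub)  # each author carries 1/(number of authors)
        for inst in affiliations:
            # split an author's share equally over their affiliations
            frac_pubs[inst] += author_weight / len(affiliations)
            frac_authors[inst] += 1 / len(affiliations)

print(frac_pubs["A"])                      # 5/6 ≈ 0.83, as in the example above
print(frac_pubs["A"] / frac_authors["A"])  # fractionally counted productivity
```
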
If input data is available, such that the total budget or the number of FTEs available at institution $i$ is indicated by $f_i$, the average number of publications per currency unit or per FTE can be expressed as $\frac{n_i}{f_i}$.

## Data sources
### OpenAlex
[OpenAlex](https://openalex.org/) covers publications based on data previously gathered by Microsoft Academic Graph, but mostly relies on Crossref to index new publications. OpenAlex offers a user interface that is still under active development, an open API, and the possibility to download the entire data snapshot. The API is rate-limited, but there is the option of a premium account. Documentation for the API is available at <https://docs.openalex.org/>.

It is possible to retrieve the number of authors for a particular publication in OpenAlex, for example by using the third-party Python package `pyalex`, as sketched below. Based on this type of data, the above-mentioned metrics can be calculated. When large amounts of data need to be processed, it is recommended to download the full [data snapshot](https://docs.openalex.org/download-all-data/snapshot-data-format) and work with it directly.

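A minimal sketch, assuming `pyalex`'s `Works` interface and the `authorships` field of the OpenAlex schema; the work ID is the example record used in the OpenAlex documentation:

```python
# Sketch based on pyalex (pip install pyalex); W2741809807 is the example
# work used in the OpenAlex documentation.
from pyalex import Works

work = Works()["W2741809807"]

# Each entry in "authorships" describes one author of the work,
# including the institutions the author reported for this work.
print(len(work["authorships"]))

for authorship in work["authorships"]:
    institutions = [inst["display_name"] for inst in authorship["institutions"]]
    print(authorship["author"]["display_name"], institutions)
```
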
OpenAlex provides disambiguated authors, institutions, and countries. Institutions are matched to the [Research Organization Registry (ROR)](https://ror.org/); a country may still be available even when no specific institution could be matched.

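Building on this, institution-level counts could be retrieved via filters on ROR IDs. The sketch below assumes `pyalex`'s `filter()` and `count()` helpers and the `last_known_institutions` filter of the OpenAlex authors endpoint; the ROR ID is a placeholder:

```python
# Sketch assuming pyalex's filter()/count() helpers; replace the placeholder
# ROR ID with a real one before running.
from pyalex import Authors, Works

ror = "https://ror.org/00xxxxxx0"  # placeholder institution

# n_i: works published in 2023 with at least one author at the institution
n_i = Works().filter(institutions={"ror": ror}, publication_year=2023).count()

# a_i: authors whose last known institution matches the ROR ID
a_i = Authors().filter(last_known_institutions={"ror": ror}).count()

print(n_i / a_i)  # average number of papers per author (full counting)
```
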
### Dimensions
[Dimensions](https://app.dimensions.ai/discover/publication) is a bibliometric database that takes a comprehensive approach to indexing publications. It offers limited free access through its user interface. Paid access to its API and to its database via Google BigQuery can be arranged. It is also possible to apply for access to the API and/or Google BigQuery for [research purposes](https://www.dimensions.ai/request-access/). The API is documented at <https://docs.dimensions.ai/dsl>.

The database is closed access, and we therefore do not provide more details about API usage.
### Scopus
[Scopus](https://www.scopus.com/) is a bibliometric database with relatively broad coverage. Its data is closed and is generally available only through a paid subscription. It does offer the possibility to apply for access for research purposes through the [ICSR Lab](https://www.elsevier.com/insights/icsr/lab). Some additional documentation of its metrics is available at <https://www.elsevier.com/products/scopus/metrics>, in particular in the Research Metrics Guidebook; documentation for the dataset available through the ICSR Lab is provided separately.

The database is closed access, and we therefore do not provide more details about API usage.
### Web of Science
[Web of Science](https://webofscience.com/) is a bibliometric database that takes a more selective approach to indexing publications. Its data is closed and is available only through a paid subscription.

The database is closed access, and we therefore do not provide more details about API usage.
## References

@article{abramo_farewell_2016,
    title = {A farewell to the {MNCS} and like size-independent indicators},
    volume = {10},
    issn = {1751-1577},
    doi = {10.1016/j.joi.2016.04.006},
    number = {2},
    journal = {Journal of Informetrics},
    author = {Abramo, Giovanni and D'Angelo, Ciriaco Andrea},
    month = may,
    year = {2016},
    pages = {646--651}
}

@article{butler_explaining_2003,
    title = {Explaining {Australia}'s increased share of {ISI} publications—the effects of a funding formula based on publication counts},
    volume = {32},
    issn = {0048-7333},
    doi = {10.1016/S0048-7333(02)00007-0},
    number = {1},
    journal = {Research Policy},
    author = {Butler, Linda},
    month = jan,
    year = {2003},
    pages = {143--155}
}

@article{tiokhin_shifting_2023,
    title = {Shifting the {Level} of {Selection} in {Science}},
    issn = {1745-6916},
    doi = {10.1177/17456916231182568},
    journal = {Perspectives on Psychological Science},
    author = {Tiokhin, Leo and Panchanathan, Karthik and Smaldino, Paul E. and Lakens, Daniël},
    month = aug,
    year = {2023},
    pages = {17456916231182568}
}

@article{van_den_besselaar_perverse_2017,
    title = {Perverse effects of output-based research funding? {Butler}'s {Australian} case revisited},
    volume = {11},
    issn = {1751-1577},
    doi = {10.1016/j.joi.2017.05.016},
    number = {3},
    journal = {Journal of Informetrics},
    author = {van den Besselaar, Peter and Heyman, Ulf and Sandström, Ulf},
    month = aug,
    year = {2017},
    pages = {905--918}
}