Does Bibliometric Research Confer Legitimacy to Research Assessment Practice? A Sociological Study of Reputational Control, 1972-2016
A growing gap exists between an academic sector with little capacity for collective action and increasing demand for routine performance assessment by research organizations and funding agencies. This gap has been filled by database providers. By selecting and distributing research metrics, these commercial providers have gained a powerful role in defining de facto standards of research excellence without being challenged by expert authority.
A simple proposal for the publication of journal citation distributions
Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs. Application of this straightforward protocol reveals the full extent of the skew of these distributions and the variation in citations received by published papers that is characteristic of all scientific journals. Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF. We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
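To make the protocol concrete: a journal's JIF is simply the mean of a citation distribution, so both can be computed from the same per-paper counts. The sketch below is illustrative only, not the authors' published code, and the citation counts are hypothetical:

```python
from collections import Counter

def journal_impact_factor(citations):
    """Mean citations per citable item over the JIF window.

    `citations` holds, for each item published in the two years before
    the census year, the citations it received in the census year.
    """
    return sum(citations) / len(citations)

def citation_distribution(citations):
    """The frequency table underlying the JIF: how many papers
    received 0, 1, 2, ... citations."""
    return dict(sorted(Counter(citations).items()))

# Hypothetical counts for one journal; real distributions are similarly
# skewed, with a few highly cited papers pulling the mean upward.
cites = [0, 0, 0, 1, 1, 2, 2, 3, 5, 48]
print(journal_impact_factor(cites))   # 6.2, driven largely by one paper
print(citation_distribution(cites))   # {0: 3, 1: 2, 2: 2, 3: 1, 5: 1, 48: 1}
```

In this toy distribution nine of ten papers sit below the mean, which is exactly the mismatch between the JIF and individual papers that the proposed protocol is designed to expose.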
As of May 2018, CORE has aggregated over 131 million article metadata records, 93 million abstracts, 11 million hosted and validated full texts and over 78 million direct links to research papers hosted on other websites.
A study identifies papers that stand the test of time. Fewer than two out of every 10,000 scientific papers remain influential in their field decades after publication, finds an analysis of five million articles published between 1980 and 1990.
Open Science and Its Role in Universities: A Roadmap for Cultural Change
LERU's paper discusses the eight pillars of Open Science identified by the European Commission: the future of scholarly publishing, FAIR data, the European Open Science Cloud, education and skills, rewards and incentives, next-generation metrics, research integrity, and citizen science.
Peer Review and Citation Data in Predicting University Rankings: A Large-Scale Analysis
When citation-based indicators are applied at the institutional or departmental level, rather than at the level of individual papers, surprisingly large correlations with peer review judgments can be observed.
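As a sketch of what such a comparison involves (a hypothetical illustration with made-up institutions and scores, not the study's actual pipeline or data), one can aggregate citations to the institutional level and rank-correlate the resulting indicator with peer-review scores:

```python
from scipy.stats import spearmanr

# Hypothetical per-institution data: mean citations per paper and a
# peer-review quality score (e.g. a REF-style grade point average).
institutions = {
    "Inst A": (14.2, 3.4),
    "Inst B": (11.8, 3.1),
    "Inst C": (9.5, 3.2),
    "Inst D": (6.1, 2.7),
    "Inst E": (4.3, 2.5),
}

citation_indicator = [v[0] for v in institutions.values()]
peer_review_score = [v[1] for v in institutions.values()]

# Spearman rank correlation suits this comparison because both
# variables are effectively ordinal rankings of institutions.
rho, p = spearmanr(citation_indicator, peer_review_score)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Aggregating over an institution's many papers averages out the per-paper variability noted in the JIF entry above, which is why correlations can be strong at this level even though they are weak for individual articles.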
Against Metrics: How Measuring Performance by Numbers Backfires
By tying rewards to metrics, organisations risk incentivising gaming and encouraging behaviours that may be at odds with their larger purpose. The culture of short-termism engendered by metrics also impedes innovation and stifles the entrepreneurial element of human nature.
The Academic Papers Researchers Regard as Significant Are Not Those That Are Highly Cited
Academia has relied on citation count as the main way to measure the impact or importance of research, informing metrics such as the Impact Factor and the h-index. But how well do these metrics actually align with researchers’ subjective evaluation of impact and significance?
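For reference, the h-index mentioned here is defined directly on a researcher's per-paper citation counts: h is the largest number such that h of their papers have at least h citations each. A minimal sketch, with a made-up publication record:

```python
def h_index(citations):
    """Largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: the top three papers have at least 3 citations
# each, but the 4th-ranked paper has only 2 < 4, so h = 3.
print(h_index([10, 8, 3, 2, 1, 0]))  # 3
```

Because h depends only on citation counts, any gap between citations and researchers' perceived significance carries over to the h-index unchanged.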