Science publishers are sending out decidedly mixed messages about how seriously they take the impact factor – the much-maligned measure of how often the average research paper in a journal is cited.
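For concreteness, a journal’s two-year impact factor for a given year divides the citations its recent items received that year by the number of citable items it published in the previous two years. A sketch of the 2012 calculation – using illustrative symbols, not Thomson Reuters’ own notation, where C is citations received in 2012 to items published in 2010–2011 and N is the number of citable items published in 2010–2011:

\[
\mathrm{IF}_{2012} = \frac{C_{2010\text{–}2011}}{N_{2010\text{–}2011}}
\]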
A record number of journals – 66 of them, including 37* new offenders – have been banned from this year’s impact-factor list (released today) because of excessive self-citation or because of ‘citation stacking’ (in which journals cite each other to excessive amounts). This year, the named-and-shamed titles include the International Journal of Crashworthiness and the Iranian Journal of Fuzzy Systems. Only 51 were banned last year (28 new offenders), and 34 the year before that. Along with the record numbers, Thomson Reuters has posted a new explanation of why it decides to ban journals – essentially because the self-citations distort the rankings.

*Thomson Reuters updated the number of new offenders from 33 to 37 on 20 June.
But while these journals (just 0.5% of the total 10,853) appear to have taken the impact-factor game far too seriously, other publishers have pledged to ‘reduce emphasis on the impact factor as a promotional tool’. That came as part of a May statement called DORA (the San Francisco Declaration on Research Assessment), which more broadly deplored the fact that the impact factor is used not only to judge journals, but also to judge individual scientists and the quality of their research papers.
In the middle of these two stances – the don’t-care and the care-too-much – come the vast bulk of journals, whose editors will have been waiting keenly to see their new scores, even though they recognize the limitations of the metric. As has been pointed out many times – and again in DORA – the impact factor judges only how much a journal is cited on average, and bears little relation to the individual papers within a journal.
According to one recent paper, moreover, the variance of research papers’ citations around their journal’s impact factor is widening, making the metric an even poorer judge of journal impact, as George Lozano argued earlier this month on the London School of Economics blog.
For what it’s worth, Thomson Reuters says that 55% of journals increased their impact factor this year, and 45% decreased. Among those declining is the world’s largest journal by number of papers published, PLoS ONE, which has dropped 16% from an average impact factor of 4.4 in 2010 (when it published 6,749 articles) to 3.7 in 2012 (when it published 23,468 articles). Since the journal’s publisher, PLoS, is a signatory of DORA, it probably does not mind.
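That 16% figure is simply the relative change between the two scores:

\[
\frac{4.4 - 3.7}{4.4} \approx 0.16
\]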
Indeed, Damian Pattinson, the editorial director of PLoS ONE, wrote in a blog post about the impact factor yesterday: “The more notable achievement is that we really are publishing all kinds of research, regardless of its estimated impact, and letting the community decide what is worthy of citation … it’s a good time to remember that it is the papers, not the journals they’re published in, that make the impact.”