The other day I came across “Nefarious numbers” by Douglas Arnold and Kristine Fowler on the arXiv. The paper examines how impact factors can be easily and blatantly manipulated.
What is an Impact Factor
For a particular year, the impact factor of a journal is the average number of citations received that year by papers published in the journal during the two preceding years. The impact factor has a number of glaring flaws:
- Impact factors vary across disciplines.
- The submission to publication process in a statistical journal can take up to a year.
- The impact factor is just a single statistic out of many possible measures.
- The underlying database contains many errors.
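The calculation itself is just a ratio, which is part of why it is so easy to game: inflate the numerator with citations and the number jumps. A minimal sketch (the journal and figures are made up for illustration):

```python
def impact_factor(citations, papers):
    """Two-year impact factor: citations received in a given year to
    papers published in the two preceding years, divided by the
    number of papers published in those two years."""
    return citations / papers

# Hypothetical journal: 180 citations in 2008 to papers published in
# 2006-07, of which there were 90.
print(impact_factor(180, 90))  # 2.0
```

Note that a single determined citer can move this number substantially: adding 200 citations to the example above more than doubles its impact factor.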
International Journal of Nonlinear Sciences and Numerical Simulation
The Australian Research Council (ARC) recently released an evaluation listing quality ratings for over 20,000 peer-reviewed journals across various disciplines. The list was constructed through a review process involving academics, disciplinary bodies and learned academies. Each journal is ranked A* to C, where
- A*: one of the best in its field or sub-field;
- A: very high quality;
- B: solid, though not outstanding reputation;
- C: does not meet the criteria of the higher tiers.
The ARC ranked the International Journal of Nonlinear Sciences and Numerical Simulation (IJNSNS) as a B. However, in 2008 this journal had an impact factor of 8.9 – more than double that of the next highest journal in the Applied Mathematics section. As the paper explains, the reason for the large impact factor is easy to see. In 2008, the three top-citing authors to IJNSNS were:
- Ji-Huan He, the journal’s Editor-in-Chief, with 243 citations within the two-year window;
- D. D. Ganji, a member of the editorial board, with 114 cites;
- Mohamed El Naschie, a regional editor, with 58 cites.
Comparing these numbers with other journals shows how extreme IJNSNS really is – the next highest impact factor in the section is around 4. Arnold and Fowler also investigate the journals where the citations occur. These turn out to be IJNSNS itself, or special issues of other journals edited by someone on the IJNSNS board.
Impact Factors for Statistics Journals
The ARC statistics section contains around two hundred journals. Some of these are “traditional” statistics journals, such as JASA, the RSS journals and Biometrics. Others are more applied, such as Bioinformatics and Mathematical Biosciences. So in the following comparison, I only considered journals classed as “statistics” by the ISI Web of Knowledge, which leaves seventy-seven journals.
The following plot shows the two- and five-year impact factor for the seventy-seven statistical journals, grouped by the ARC rating. The red dots show the median impact factor for a particular grouping.
As would be expected, for the two-year IF there is little difference between the ARC ratings – although more than I expected. Once we calculate the five-year impact factors, the differences between ratings are clearer. Since many of the group C journals are new, a number of them don’t have a five-year impact factor.
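The red dots in the plot are just per-group medians. A sketch of that grouping, using invented (rating, impact factor) pairs rather than the real journal data:

```python
from collections import defaultdict
from statistics import median

# Hypothetical (ARC rating, two-year impact factor) pairs -- not the
# real seventy-seven journals, just illustrative values.
journals = [("A*", 2.5), ("A*", 2.0), ("A", 1.5), ("A", 1.0),
            ("B", 0.75), ("B", 1.25), ("C", 0.5), ("C", 0.75)]

# Group impact factors by ARC rating.
by_rating = defaultdict(list)
for rating, impact in journals:
    by_rating[rating].append(impact)

# Median impact factor per rating (the red dots).
for rating in ["A*", "A", "B", "C"]:
    print(rating, median(by_rating[rating]))
```

With real data, the same grouping would be applied twice: once to the two-year and once to the five-year impact factors, with group C journals lacking a five-year figure simply dropped.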
Outlying Statistical Journals
There are three journals that stand out from their particular groups:
- Statistical Science, a group A journal. Since this is mainly a review journal, it’s really not surprising that it has a high impact factor.
- The Journal of Statistical Software and the Stata Journal, group C journals. Since these are “statistical computing” journals, it isn’t that surprising that they have high impact factors.
Should we use Impact Factors?
The best answer would be no! Just read the first page of “Nefarious numbers” for a variety of reasons why we should dump impact factors. However, I suspect that impact factors will be forced on many of us, as a tool to quantify our research. Therefore, while we should try to fight against them, we should also keep an eye on them for evidence of people playing the system.