Last month scientific publishing giant Elsevier unveiled the CiteScore index, a measure of journal performance based on the number of recent citations. According to Elsevier, a journal’s 2015 CiteScore “counts the citations received in 2015 to documents published in 2012, 2013, or 2014 and divides this by the number of documents published in 2012, 2013, and 2014.” The first-place journal for 2015 is CA: A Cancer Journal for Clinicians, with a CiteScore of 66.45. Nearly a thousand journals at the bottom of the 22 256-journal list have a CiteScore of 0.
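For readers who want to see the arithmetic, here is a minimal sketch of Elsevier’s stated formula in Python; the function name and the citation and document counts are hypothetical, chosen only for illustration.

```python
# Minimal sketch of the 2015 CiteScore arithmetic as Elsevier describes it:
# citations received in 2015 to documents published in 2012-2014, divided by
# the number of documents published in 2012-2014. All counts are hypothetical.

def citescore(citations_received: int, documents_published: int) -> float:
    """Citations in the index year to the three-year window, per document."""
    return citations_received / documents_published

# Example: 5400 citations in 2015 to a journal's 900 documents from 2012-2014.
print(citescore(citations_received=5400, documents_published=900))  # 6.0
```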
In a press release, Elsevier says the CiteScore index “will improve decisions on where to publish, which journals to subscribe to, and when to adjust a journal’s editorial strategy.” However, CiteScore will have to contend with an older and more familiar rival: the journal impact factor, which has become the best-known metric for a journal’s importance despite plenty of pushback throughout its 45-year history.
Eugene Garfield and the Science Citation Index
The impact factor was the brainchild of bibliographer Eugene Garfield. He received a bachelor’s in chemistry from Columbia University in 1949, then went to work on a project at Johns Hopkins aimed at producing an index of medical journal articles. Garfield became fascinated with the problem of finding relevant articles on a particular topic in the rapidly growing scientific literature. After acquiring his master’s from Columbia’s library science program, he went into business for himself, offering his indexing services to pharmaceutical companies and corporate research laboratories such as Bell Laboratories.

In 1955 Science published Garfield’s article “Citation indexes for science,” in which he proposed publishing an index that tracked citations to individual papers. “The system,” Garfield explained, would list “all the original articles that had referred to the article in question.” He argued that this type of index would be a far more efficient way for scientists to find relevant papers in a particular field as well as criticisms of problematic papers. He also said the index would be valuable for scientists who wanted to see how their work was being received and for historians interested in assessing the relative importance of different scientific papers.
Garfield’s proposal did not immediately catch on. Letters in response to his article pointed to the prohibitive cost of producing such a labor-intensive index. Meanwhile, Garfield started producing indexes for private companies. By the 1960s he had enough capital to undertake his vision of a comprehensive guide to journal citations. In 1964 Garfield’s Institute for Scientific Information (ISI) printed the first Science Citation Index (SCI), listing citations of papers published in more than 2200 journals.
Eight years later Garfield released his first journal impact factors. Once again, he announced his idea in an article for Science, this one titled “Citation analysis as a tool in journal evaluation.” Using the information from the SCI, Garfield and his colleagues “decided to undertake a systematic analysis of journal citation patterns across the whole of science and technology.”
The result was a list of journals ranked by the average number of citations per research article—a number that he called the impact factor. Accounts of Chemical Research took the top slot with an impact factor of 29.285. The top physics journal was Physical Review Letters, in 29th place with an impact factor of 4.911. Garfield hoped the ISI’s new journal impact factors would help scientists decide which journals to read, highlight cutting-edge areas of research for policymakers and funding bodies, and aid librarians trying to curate the ballooning number of academic journals.
Criticisms of the impact factor
Many scientists questioned the value of citation analysis and Garfield’s new metric. Joseph Arditti, a biologist at the University of California, Irvine, wrote to Science that some papers might be frequently cited because they were being widely criticized. H. J. M. Hanley at the National Bureau of Standards worried that the metric might create problematic incentives. Tongue in cheek, he advised future scientists to “cite yourself as often as possible; insist that your work be cited in all articles that you review; and automatically pass articles that already contain a sufficient number of citations to you.”
| Journal | Impact Factor rank (score), 2015 | CiteScore rank (score), 2015 |
|---|---|---|
| Lancet* | 4 (44.002) | 242 (7.72) |
| Nature Materials | 7 (38.891) | 11 (25.58) |
| Nature | 9 (38.138) | 60 (14.38) |
| Science | 16 (34.661) | 80 (13.12) |
| Reviews of Modern Physics | 19 (33.177) | 7 (32.79) |
| Living Reviews in Relativity | 20 (32.000) | 13 (25.19) |
| Progress in Materials Science* | 23 (31.083) | 6 (32.97) |
| Cell* | 27 (28.710) | 17 (23.62) |
| Progress in Polymer Science* | 29 (27.184) | 9 (28.32) |
| Physical Review Letters | 291 (7.645) | 454 (5.76) |

\* Journal published by Elsevier
Despite such concerns, the impact factor caught on among readers, researchers, and librarians. In 1992 the information firm Thomson (now Thomson Reuters) acquired ISI and, with it, the right to issue the list of impact factors. Today many journals tout their impact factor on information pages for potential authors and use the number in advertisements. Nature, for example, has an annual sale offering a subscription priced to match its current impact factor.
The spread of the impact factor hasn’t silenced the index’s critics. Two recent studies found that journal impact factors no longer correlate strongly with the number of citations that individual papers in those journals receive. The weakening correlation is attributed to a combination of a few highly cited papers that skew impact factor calculations and scientists’ tendency to read individual articles from many journals rather than a single issue of a top journal.
Other observers have argued that journal editors have manipulated their impact factors by encouraging self-citation, rendering the metric unreliable. Even Philip Campbell, the editor-in-chief of Nature, has argued that impact factors are overused. Furthermore, Thomson Reuters makes its rankings available only to subscribers, putting them out of reach of researchers whose institutions do not pay for access.
Although many people would like to see the impact factor’s hold on science shaken, it seems unlikely that Elsevier’s CiteScore—or any other alternate metric—will be the solution. CiteScore has one notable advantage over its older rival: Its rankings and data are freely available. But its basic methodology overlaps enough with that of the impact factor that it seems to suffer from the same weaknesses.
Critics are already pointing to a potential conflict of interest in a journal ranking system issued by a journal publisher. Notably, CiteScore divides a journal’s citations by the total number of documents it has printed, including news articles, editorials, and correspondence. That methodological choice dramatically lowers the rank of some top non-Elsevier journals, such as Nature and Science, but it also hurts Elsevier’s most famous medical journal, the Lancet.
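A toy calculation, with purely hypothetical counts and the citation tally held fixed, shows how much that denominator choice can matter for a journal that publishes substantial front matter alongside its research papers:

```python
# Hypothetical counts illustrating how the denominator choice affects a
# journal that prints many news articles, editorials, and letters.

citations = 20_000       # citations received in the index year (made up)
research_papers = 500    # peer-reviewed papers in the counting window
front_matter = 1_500     # news items, editorials, correspondence, etc.

# Impact-factor-style denominator: research papers only.
score_papers_only = citations / research_papers                     # 40.0

# CiteScore-style denominator: every document the journal printed.
score_all_documents = citations / (research_papers + front_matter)  # 10.0

print(f"Papers-only denominator:   {score_papers_only:.1f}")
print(f"All-documents denominator: {score_all_documents:.1f}")
```

Under those made-up numbers, counting the front matter cuts the score by a factor of four, the same flavor of drop the table above shows for Nature, Science, and the Lancet.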
CiteScore probably won’t dethrone the impact factor—but that does not mean the 45-year-old metric’s future is guaranteed.