Dispelling A Few Common Myths About Journal Citation Impacts

Author: Eugene Garfield
Date: February 3, 1997

Last October I participated in a conference on research assessment in Capri, Italy. The various discussions and presentations at this meeting reminded me that there are still widespread misunderstandings, indeed myths, about citation analysis, especially with respect to journal impact. For those readers who are not aficionados of citation analysis, journal impact factors as used in the Institute for Scientific Information's (ISI's) Journal Citation Reports are a simple ratio of citations to papers. They are calculated by dividing the number of current-year citations (for example, 1997) to a journal's papers published in the previous two years (that is, 1996 and 1995) by the combined total of these papers.
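To make that ratio concrete, here is a minimal sketch in Python of the two-year calculation just described. The function name and the figures are invented for illustration; they are not ISI's code or data.

    def two_year_impact_factor(citations_this_year, papers_prev_year_1, papers_prev_year_2):
        # Numerator: citations received in the current year (say, 1997) by
        # items the journal published in the two preceding years (1995-96).
        # Denominator: the number of items published in those two years.
        return citations_this_year / (papers_prev_year_1 + papers_prev_year_2)

    # Hypothetical journal: 400 citations in 1997 to its 120 + 130 recent papers.
    print(two_year_impact_factor(400, 120, 130))  # -> 1.6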

Journal impact factors are used for a variety of purposes. For example, librarians may consider impact factors, as well as several other important criteria, in their decisions on which journals to include in their collections. Journal impact has also become a staple in many types of analyses conducted by scientometricians. And impact factors are increasingly used by publishers to promote and market their journals to subscribers and advertisers.

The primary impetus for ISI to develop impact factors in 1973 was the need of these various users to compare the "influence" or "performance" of small vs. large journals as well as journals within small or large research disciplines. Therein lies the origin of the most persistent and common myth about journal impact: that the size of a journal or its discipline is the major determinant of its impact factor. It is generally assumed that biochemistry journals have high impact simply because there are so many publishing biochemists. On the other hand, the assumption is that the impact of mathematics or botany journals is lower simply because there is less published in these smaller fields.

It may seem intuitive that the largest journals or fields would have the highest impact. But size alone does not determine impact. For example, assume that the typical journal across different fields contains an average of 30 references per source article (R/S). Thus, a large journal with 1,000 published articles per year produces 30,000 cited references, while a smaller journal containing 100 articles per year produces 3,000. Assuming also that the age of the literature cited in these references is equivalent, and that the citations a journal's articles receive mirror, on average, the references its field gives out, both journals would yield the same impact of 30 when you divide citations by articles.
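The arithmetic can be spelled out in a few lines of Python. The journal sizes and the symmetry assumption (citations received tracking references given) are mine, used only to illustrate that the ratio is independent of size.

    # Two hypothetical journals with identical referencing behaviour.
    refs_per_article = 30                     # assumed average R/S in both fields
    large_articles, small_articles = 1000, 100

    large_refs = large_articles * refs_per_article   # 30,000 cited references
    small_refs = small_articles * refs_per_article   #  3,000 cited references

    # Under the symmetry assumption, citations received per article equal
    # references given per article, so the ratio is the same for both journals.
    print(large_refs / large_articles)   # 30.0
    print(small_refs / small_articles)   # 30.0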

However, the average R/S factor as well as the average age of the literature being cited vary widely among journals and fields. And both of these averages, not size, are the critical determinants of the impact of a journal or discipline.

For example, molecular biology and biochemistry papers today contain about 45 references. Moreover, the average age of the papers being cited is about seven years. In contrast, math papers average about 15 references, as do those in botany and taxonomy. The average age of the cited papers is 20 or even 30 years in these fields. In certain social sciences and humanities fields, the age of the cited literature may be even greater. Thus, irrespective of the comparative size of the literature in these fields, biochemistry journals will have higher impacts than math journals because they contain more references to more recently published papers. Unless one takes into account both the number of references per paper and the "immediacy" of the literature being cited, comparisons of journal impact between fields, and also within them, will be invidious.
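A toy model makes the interaction of these two averages visible. The exponential age distribution, the helper function, and the rounded field averages below are my own simplifications, not ISI data; the point is only that more references to younger literature translate into more citations landing inside the two-year window.

    import math

    def expected_recent_refs(refs_per_article, mean_cited_age_years, window_years=2):
        # Toy model: treat the age of cited references as exponentially
        # distributed with the stated mean, and count the expected number of
        # references per article that fall inside the impact-factor window.
        fraction_recent = 1 - math.exp(-window_years / mean_cited_age_years)
        return refs_per_article * fraction_recent

    # Rough field averages quoted above, for illustration only.
    print(expected_recent_refs(45, 7))    # biochemistry: ~11 references to recent papers
    print(expected_recent_refs(15, 25))   # mathematics:  ~1 reference to recent papers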

Another commonly held myth I would like to dispel is that method papers rather than "pure" research articles account for the high impact of journals such as the Journal of Biological Chemistry (JBC). This myth springs from the fact that many widely used, and therefore highly cited, method papers have been published in JBC as well as other journals. The classic example is Oliver Lowry's 1951 paper entitled "Protein measurement with the Folin phenol reagent" (JBC, 193:265-75), which has been explicitly cited more than 250,000 times and continues to attract more than 6,000 citations per year. But citations to this vintage classic in no way influence JBC's current high impact, which is based on its previous two years' published papers, as noted previously. Nor would this paper's citations affect JBC's impact were it calculated over a five-, 10-, or 15-year period. These variable-year impact calculations can now be easily obtained from ISI's Journal Performance Indicators on Diskette.
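To see why a 1951 classic contributes nothing to these windows, consider the following hypothetical sketch. The citation data and the per-year paper counts are invented, and the function is a simplification of the fixed-window idea rather than ISI's actual procedure.

    def windowed_impact_factor(current_year, window_years, cited_item_years, papers_per_year):
        # cited_item_years: publication year of each journal item cited during
        # current_year; papers_per_year: items the journal published per year.
        window = range(current_year - window_years, current_year)
        numerator = sum(1 for y in cited_item_years if y in window)
        denominator = sum(papers_per_year.get(y, 0) for y in window)
        return numerator / denominator

    cites_1997 = [1996, 1995, 1951, 1994, 1996]        # invented citation data
    pubs = {year: 100 for year in range(1982, 1997)}   # 100 papers per year

    # The citation to the 1951 paper falls outside even a 15-year window
    # ending in 1996, so it never enters the numerator.
    print(windowed_impact_factor(1997, 2, cites_1997, pubs))    # 3/200  = 0.015
    print(windowed_impact_factor(1997, 15, cites_1997, pubs))   # 4/1500 ≈ 0.0027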

As I have recently indicated (E. Garfield, "How can impact factors be improved?," British Medical Journal, 313:411-2, 1996), cumulative impacts for each year of a journal can vary because of the effect of super-cited papers. The key point, however, is that there are thousands of method papers that are cited no more and no less than most other papers; the exception is review articles. Reviews do achieve higher-than-average impact, but among the roughly 25,000 reviews published each year there is enormous variation.

What needs to be remembered is that a journal's current impact factor is influenced more by its R/S average and the recency of the literature it cites than by its size.

(The Scientist, Vol. 11, No. 3, p. 11, February 3, 1997)