I recently read a very interesting article in PLoS ONE examining various measures of the scientific importance of particular journals:
Bollen, J., Van de Sompel, H., Hagberg, A., & Chute, R. (2009). A Principal Component Analysis of 39 Scientific Impact Measures. PLoS ONE, 4(6). DOI: 10.1371/journal.pone.0006022
The article isn’t breaking new ground in its criticism of the impact factor, calculated by Thomson Scientific. However, the statistical analysis comparing multiple measures of importance sheds new light on the relationship between the various measures.
The authors analyzed 39 different impact measures that fall into two main groups: those that look at citation counts and those that look at online usage data (page views and downloads). A few additional measures that take account of online social networks were also included.
In general, the usage measures cluster more closely together than the citation measures do, suggesting that the usage-based measures are capturing approximately the same underlying signal.
As a result of this analysis, the authors were able to differentiate measures of immediate ("rapid") use from measures of longer-term ("delayed") use, and to distinguish measures of how popular a resource is from measures of how prestigious it is.
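To make the clustering idea concrete, here is a small sketch of the kind of analysis the paper describes: run PCA on a journal-by-measure matrix and look at which measures load together. Everything below is synthetic toy data of my own construction, not the authors' data or code, and the measure names in the comments are placeholders.

```python
# Synthetic illustration of PCA clustering of impact measures.
# Usage-like measures are generated from one latent factor ("popularity"),
# citation-like measures from another ("prestige"), so PCA should group them.
import numpy as np

rng = np.random.default_rng(0)
n_journals = 200

# Two latent factors per journal.
popularity = rng.normal(size=n_journals)
prestige = rng.normal(size=n_journals)

# Four synthetic measures: two usage-like, two citation-like, plus noise.
measures = np.column_stack([
    popularity + 0.1 * rng.normal(size=n_journals),  # e.g. page views
    popularity + 0.1 * rng.normal(size=n_journals),  # e.g. downloads
    prestige + 0.3 * rng.normal(size=n_journals),    # e.g. impact factor
    prestige + 0.3 * rng.normal(size=n_journals),    # e.g. a citation rank
])

# Standardize each column, then PCA via SVD.
Z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

# Loadings of each measure on the first two principal components:
# the two usage-like rows come out nearly identical, and distinct
# from the two citation-like rows.
loadings = Vt[:2].T * s[:2] / np.sqrt(n_journals)
print(loadings.round(2))
```

In the real study the matrix had 39 measures rather than 4, but the logic is the same: measures that sit close together in the loading space are, in effect, measuring the same thing.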
All of this leads us back to the problem the authors pose: we don't have an accepted definition of what "impact" really is.
Publications, institutions, and tenure committees all have different needs and requirements. For example, faculty at one large research institution may be most concerned with prestige, while another institution may need to market its programs and so care more about popularity. I think this analysis shows that people can, and should, be more deliberate in selecting the measure they use to judge their competitors, their research, and their colleagues.