After a very interesting session on alternative metrics at ScienceOnline2011, I have been trying to figure out if I should write anything about it here. My thoughts are rather scattered, incomplete and probably lacking in originality, but I decided to put something up anyway to help me sort through things.
The first thing I’m thinking about related to alternative metrics is “What problems are we trying to solve?” I am very familiar with the criticisms of the impact factor, but I’m interested in returning to the basic questions.
What are researchers doing that some kind of metrics could help them with?
- Find a job
- Get tenure
- Get promoted
- Have their research read by a lot of folks
- Publish in places that make them look smart
- Quickly evaluate how smart other people are
- Convince people to give them money for their research
- Do more research
At the moment, the impact factor can affect all of these things, although it wasn’t developed to do that (see this article for a history of its development and some defense of the number).
What kind of quantitative information do we need to help researchers accomplish the things they want to do?
Various metrics have been proposed to help researchers:
- rate journals (average quality)
- rate individual articles
- rate authors
I think the fundamental question comes back to how these metrics help researchers make decisions about their research and publication.
The second big question I have is whether the problem lies in the limitations of the impact factor itself, or in an academic culture that misuses existing metrics. The new metrics being proposed are not perfect, and I would argue that any quantitative measure of scholarship will have flaws (though perhaps not as many as a singular reliance on the IF).
As more scholars do work outside of traditional peer review journals, I think tenure and promotion committees will feel more pressure to expand beyond simply looking at the impact factor. We have some interesting examples of scholarship beyond the peer reviewed literature, and several new journals making the case for assessing individual articles rather than the journal as a whole. These developments will also put pressure on the status quo.
But those of us who are very connected to the online scholarly community of blogs, Twitter feeds, and social networks need to remember that most faculty are still not aware of, or interested in, what this community has to offer. As a result, we can develop all of the alternative metrics we like, but until there is greater acceptance of these “alternative” forms of scholarship, I doubt that any alternative metric will be able to gain a prominent foothold.
At the ScienceOnline2011 conference, Jason Hoyt, Chief Scientist at Mendeley, made the argument that we should put our energies into whatever alternative metric will bring down the impact factor. I wonder if that is putting the cart before the horse. Perhaps we need to get faculty (even those who have no interest in the blogosphere) to acknowledge the limitations of the impact factor first.
Of course, it is much easier to develop a quantitative measure of scientific impact than to change the culture of scholarly disciplines.