A few things I read today

Researching then and now – This is an amazing blog post describing pre-internet research strategies.  Importantly, it captures the feelings and emotions of that type of research:

The ease of making photocopies led to a new joy: collecting them. There was a tendency to believe that something worthwhile had thus been achieved and that the content could be absorbed through osmotic proximity to the material.

How do you get feedback from library users? (Or, Beating Survey Fatigue…) – The comments on this post are the most useful.  As we all struggle to assess library services in line with institutional missions and student learning outcomes, there are some useful suggestions presented here.  The author illustrates the challenges of asking people what they want with a quote from Henry Ford:

On the Model T Ford: “If I’d asked people what they wanted, they’d’ve said a faster horse…”

Conference pitch: Research For More Effective Research – From Heather Piwowar, an idea for a conference that discusses research about how to make research more effective.  I want to go to this conference.

This sort of “research for more effective research” is already being done in several scattered areas, but it suffers from a lack of broad community and infrastructure for action.  Bringing together investigators, domain researchers, funders, publishers, educators, tool-builders, and experts in cultural change would allow exchange of methods, better understanding of which problems are most pressing, and support for making a difference.

Alternative metrics at ScienceOnline2011 and beyond

After a very interesting session on alternative metrics at ScienceOnline2011, I have been trying to figure out if I should write anything about it here.  My thoughts are rather scattered, incomplete and probably lacking in originality, but I decided to put something up anyway to help me sort through things.

100 Feet of tape measure
CC image courtesy of Flickr user karindalziel

The first thing I’m thinking about related to alternative metrics is “What problems are we trying to solve?”  I am very familiar with the criticisms of the impact factor, but I’m interested in returning to the basic questions.

What are researchers doing that some kind of metrics could help them with?

  • Find a job
  • Get tenure
  • Get promoted
  • Have their research read by a lot of folks
  • Publish in places that make them look smart
  • Quickly evaluate how smart other people are
  • Convince people to give them money for their research
  • Do more research

At the moment, the impact factor can affect all of these things, although it wasn’t developed to do that (see this article for a history of its development and some defense of the number).

What kind of quantitative information do we need to help researchers accomplish the things they want to do?

Various metrics have been proposed to help researchers:

  • rate journals (average quality)
  • rate individual articles
  • rate authors

I think the fundamental question comes back to how these metrics help researchers make decisions about their research and publication.

The second big question I have is whether the problem lies entirely with the limitations of the impact factor, or with an academic culture that misuses existing metrics.  New metrics being proposed are not perfect, and I would argue that any quantitative measure of scholarship will have flaws (though perhaps not as many as a singular reliance on the impact factor).

As more scholars do work outside of traditional peer-reviewed journals, I think tenure and promotion committees will feel more pressure to expand beyond simply looking at the impact factor.  We have some interesting examples of scholarship beyond the peer-reviewed literature, and several new journals are making the case for assessing individual articles rather than the journal as a whole.  These developments will also put pressure on the impact factor's dominance.

But those of us who are very connected to the online scholarly community of blogs, Twitter feeds, and social networks need to remember that most faculty are still not aware of, or interested in, what this community has to offer.  As a result, we can develop all of the alternative metrics we like, but until there is greater acceptance of these "alternative" forms of scholarship, I doubt that any alternative metric will be able to gain a prominent foothold.

At the ScienceOnline2011 conference, Jason Hoyt, Chief Scientist at Mendeley, made the argument that we should put our energies into whatever alternative metric will bring down the impact factor.  I wonder if that is putting the cart before the horse.  Perhaps we need to get faculty (even those who have no interest in the blogosphere) to acknowledge the limitations of the impact factor first.

Of course, it is much easier to develop a quantitative measure of scientific impact than to change the culture of scholarly disciplines.