Traditional librarians and information scientists start to talk to one another!

One of the great things about the Geological Society of America Annual Meeting is that the organization for geoscience librarians, the Geoscience Information Society (GSIS), meets right along with it as an official part of the conference.  This brings librarians and scientists together in a wonderfully engaging setting.  Now, I’m betting that most of the geoscientists present don’t realize that there are librarians in their midst, but it’s a great start to be in the same building for a little while.

Librarians tend to care a bit more about metadata than scientists do – the scientists just want to do the science.

I’ve attended sessions related to data preservation as well as more traditional library-related topics.  A theme running through these sessions is that librarians and the scientists dealing with data and information seem to be at the very beginning of discussions about how they should work together.  This is encouraging.

The most visible folks on the scientists’ side are a group from the USGS and the state geological surveys.  These organizations have federal or state mandates to make their data available, so some of their scientists have been tasked with developing the complex systems needed to share this information (the Geoscience Information Network, for example).  In some cases the scientists are unfamiliar with the library-developed systems and metadata standards that could assist them; in others they are building on the work of librarians; and in still others they are encountering brand-new issues that call for new standards and practices.

I like seeing this, and I think we need to see more of it.  For the most part, I think the responsibility falls on librarians to reach out to scientists (online, in person, at conferences, etc.) and start discussions about how we can help.

What I’m not entirely clear about is how I can directly impact these efforts.  My tentative thoughts include working with faculty at my (small) institution to make their data accessible via appropriate external repositories (but do they want to share?), and working with the Geoscience Information Society to reach out to scientists and continue the conversation.  I’m not a cataloger (and I don’t want to be one), but catalogers’ metadata experience could be highly valuable to scientists trying to manage large quantities of information, and we need to let them know that.


I believe I would be a “Data-Driven Nerd”

CC-licensed image from Flickr user bionicteaching

Virginia Hughes just posted her classification of scientists, and I think it helps me understand why I ended up in librarianship.  According to her categories (in which I see almost all of my scientist friends), I would be a “Data-Driven Nerd”:

These are guys and gals who seem to spend every waking hour in the lab. They’re precise and thorough. They like new technologies that get them better — and more, always more — data. They hate writing up their papers because there’s never enough good data to say something definitive. They generally see no need for (and have no patience for) journalists, unless lapsing into an effusive geek-out moment over some surprising new data.

I think it was my love of data and data analysis (although I didn’t love collecting the data in the field so much) that pushed me over into librarianship, where I get to look at other people’s data and publications all day.

I was a geologist by training, but I always felt like a bit of an outsider because, while I love being outside, I didn’t particularly love fieldwork.

Check out her excellent post:  where do you fit in?

Assessing Information Literacy Skills in First Year Students

A new open access journal, Communications in Information Literacy, recently published an article about assessing library instruction for first year students.  The paper caught my eye because I’m working on some similar things here at Geneseo.
The study sought to determine whether students’ information literacy skills and confidence with research improved more with a greater number of librarian-led information literacy sessions.  The author used a pre-test and a post-test to examine students’ attitudes and stated behaviors, with Likert-style questions assessing students’ previous use of information sources and their confidence with various information-related tasks.  One group of students received the typical one-shot information literacy session in a first-year writing and critical thinking class; another group received two or three information literacy sessions over the course of the semester.
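
As a side note, this kind of pre-/post-test comparison is simple to rough out in a spreadsheet or a few lines of code.  Here is a minimal sketch, assuming a hypothetical CSV with one row per student and invented column names (group and pre/post confidence ratings) – it is not the instrument or data from the article:

```python
# Hypothetical sketch: compare pre/post Likert confidence ratings by group.
# The CSV layout (group, pre_confidence, post_confidence) is invented for
# illustration; it is not the instrument or data from the article.
import csv
from collections import defaultdict

changes = defaultdict(list)  # group name -> list of (post - pre) score changes

with open("survey_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        pre = int(row["pre_confidence"])    # e.g. a 1-5 Likert rating
        post = int(row["post_confidence"])
        changes[row["group"]].append(post - pre)

for group, deltas in sorted(changes.items()):
    mean_change = sum(deltas) / len(deltas)
    print(f"{group}: n={len(deltas)}, mean confidence change = {mean_change:+.2f}")
```

A real analysis would want a significance test and all of the caveats listed below, but even a quick tally like this makes it easier to see whether the multi-session group actually moved more.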

The author is very clear about outlining the challenges we all face in trying to assess information literacy instruction.  Most notably, it is almost impossible to control for the wide variety of variables that have an impact on student information literacy skills:

  • Prior information literacy instruction in high school or other venues
  • Prior practice doing scholarly research
  • Student intelligence and creativity
  • Opportunity to practice skills learned in an information literacy session (and differences in the assignment requirements)
  • Differences in scholarship between various disciplines

Some information related to the factors listed above is relatively easy to obtain (although perhaps not so easy to quantify).  Course faculty can be a source of information about assignment requirements, and they set the expectations for how much practice students get with information literacy and research skills.

On the other hand, getting information about prior instruction and practice normally relies on students’ self-reporting, which is not always accurate.

In addition to the Likert-style attitudinal questions, the author analyzed student bibliographies.  She looked at the different types of sources used and whether they were available through the library or through other sources.

The latter question is challenging.  Typically, students on the campus network don’t need to go through a library database to reach the full text of articles, so they could easily have used one of many search engines without realizing they were relying on library resources.  On the other hand, articles found through library databases but requested via Interlibrary Loan would not count as library sources.  We emphasize ILL at my institution, though, so perhaps it isn’t used as much at other institutions.

All of this raises the question – are the information literacy sessions we teach an effective way of teaching students research skills?

The author of this paper concludes that there is some positive benefit to the increased number of information literacy sessions, although the data seem a bit more mixed to me.

I wish that the author had actually tested students’ research skills.  While skills may be much more difficult to evaluate, student confidence does not necessarily correlate with student skill.

Julie K. Gilbert (2009). Using Assessment Data to Investigate Library Instruction for First Year Students. Communications in Information Literacy, 3(2), 181-192.

Assessment without review, analysis and change is a waste of everyone’s time

Today I’ve been thinking about assessment:

  • I created a short survey to assess student learning after a one-shot library instruction session.
  • I compiled student bibliographies from Fall 2009 courses I’ve worked with, in the hopes of analyzing what these students actually did.
  • I’ve been thinking about how to effectively assess the information skills students (should have!) acquired during a Spring 2010 course I met with on 5 different occasions.
  • I made some final edits to a very brief survey of user satisfaction at the reference and circulation desks (modeled after Miller, 2008).
Scantron sheet. Hopefully we don’t try to assess our students to the point of exhaustion! Image courtesy of Flickr user MforMarcus.

I’m in the process of collecting a lot of data about how well I do my job.

What’s the next step?

If I just collect this data and report on it without making any changes, I have probably wasted everyone’s time.  It is unlikely that the assessments will indicate that I am doing everything perfectly.  The goal of assessing service, student learning, user satisfaction, etc. is to make these things better.

What kinds of changes can you make?

  • Change your focus – In some classes I realized that students had a very good understanding of one concept I was trying to teach, but a poor grasp of another.  I was able to shift the focus of my instruction to spend more time on the concept they struggled with.
  • Change your goals – In some cases your assessment might reveal that your original goals are out of line with what students need.  This happened at my library in the one-shot we taught for the First-Year writing class.  We were able to re-align our goals with student needs.  We’ll see if this helped our students when we do an assessment at the end of this semester.
  • Go back to something that was working better before you made a change – The user satisfaction survey I’m working on right now is being done just prior to some big changes in the reference/circulation/service desks at my library.  We plan to re-do the survey in the Fall and again in Spring 2011.  Perhaps we’ll find that the changes result in a decrease in user satisfaction, although I sure hope not.  It is theoretically possible that we will need to roll back some of the changes we made.

So, anyone have a quick and easy way to analyze student bibliographies?
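
For what it’s worth, here is the rough-and-ready sort of thing I have in mind – purely a sketch, and both the citation spreadsheet and the exported holdings list (and their formats) are my own assumptions, not an existing tool:

```python
# Hypothetical sketch: tally source types in student bibliographies and flag
# which cited journals appear in an exported list of library holdings.
# Both input files and their columns are invented for illustration.
import csv
from collections import Counter

with open("library_journal_holdings.txt") as f:
    holdings = {line.strip().lower() for line in f if line.strip()}

source_types = Counter()
journal_articles = 0
matched_to_holdings = 0

with open("student_citations.csv", newline="") as f:
    for row in csv.DictReader(f):           # columns: type, journal_title
        source_types[row["type"]] += 1       # e.g. journal, book, website
        if row["type"] == "journal":
            journal_articles += 1
            if row["journal_title"].strip().lower() in holdings:
                matched_to_holdings += 1

print("Source types:", dict(source_types))
if journal_articles:
    print(f"Journal articles matched to library holdings: "
          f"{matched_to_holdings} of {journal_articles}")
```

It still won’t answer the harder question from the Gilbert article – whether a student actually found an article through a library tool or just landed on the full text from a search engine while on the campus network – but it would make the first pass of coding bibliographies less tedious.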

Data, Data, Data – ScienceOnline2010

The other major theme to emerge from the sessions I attended at ScienceOnline2010 was data.  All kinds of data.

Data storage - old and new. Courtesy of Flickr user lan-S

Data about articles and journals.  Data about oceans and fish and climate.  Data about scientists, their DNA, their babies (and their babies’ DNA too, I suppose).

Twenty years ago, getting your hands on a data set meant knowing someone who knew someone who might be able to send you a disc.

These days, more and more data sets are being shared on the open web.  Sometimes they are easy to find and use, and sometimes not so much.  Sometimes the data require a bit of skill with Excel, and sometimes the data require multiple servers and extensive programming skills.

But it’s out there.

I attended a very interesting session led by John Hogenesch about cloud computing.  Some of this was way over my head – I’m not as familiar with bioinformatics as I’d like to be one day, and I have only minimal knowledge of how geneticists are actually using this information.  Nonetheless, it was informative to learn about the various trends in cloud computing.  Some of them I am already very familiar with – wikis, Gmail, Google Docs.  I also learned more about some services I only know a bit about.  For example, Google Knol is being used by PLoS to write and publish their “Currents Influenza” online.  Since the authoring, editing, and publishing are done online, the journal can get items published and available quickly.  And I learned about services that allow for remote storage and querying of information, and how these services can be less expensive (and easier to run) than hosting your own servers.

Jacqueline Floyd and Chris Rowan presented a session on “Earth Science, Web 2.0+, and Geospatial Applications”.  Since my background is in geology, I was particularly intrigued by some of the resources discussed here.  The discussion at the end of the talk centered on some of the difficulties of finding spatial information (some of which I have discussed before).  For example, the USGS provides a wide range of spatial data – geophysical, hydrological, and geologic.  Some of it is easier to find (and use) than the rest.  Recent earthquake data, for example, is available in an easy-to-use Google Earth format, but data older than one month requires a more complicated search (including detailed latitude and longitude coordinates), and the search output requires manipulation to create a visualization.  It could be easier.
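
To give a sense of the manipulation involved: once you have coaxed a catalog search into handing you a comma-separated list of events, you still have to convert it into something Google Earth can open.  A minimal sketch in Python, assuming invented column names for the exported file (this is not a documented USGS format), might look like this:

```python
# Hypothetical sketch: turn an exported earthquake catalog CSV into a simple
# KML file that Google Earth can open. The input columns (latitude,
# longitude, magnitude, time) are assumptions, not a documented USGS format.
import csv

placemarks = []
with open("earthquake_catalog.csv", newline="") as f:
    for row in csv.DictReader(f):
        placemarks.append(
            "<Placemark>"
            f"<name>M{row['magnitude']} {row['time']}</name>"
            # KML coordinates are longitude,latitude,altitude
            f"<Point><coordinates>{row['longitude']},{row['latitude']},0"
            "</coordinates></Point></Placemark>"
        )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
    + "\n".join(placemarks)
    + "\n</Document></kml>\n"
)

with open("earthquakes.kml", "w") as f:
    f.write(kml)
```

Not hard, but it is exactly the kind of extra step that the ready-made Google Earth feed for recent events spares you.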

One of the last sessions I attended at the conference was a presentation by PLoS managing editor Peter Binfield about article-level metrics.  Peter discussed some of the things the PLoS journals are doing to measure the impact of individual articles, not just the entire journal.  The new metrics were announced in a blog post last summer, and you can see them at work on any article in any of the PLoS journals.  They are using open data and APIs from lots of sources: social bookmarking (like CiteULike and Connotea), citation information (from Google Scholar and Scopus), page views, PDF downloads, and lots more.  I think this is an exciting new way to shed light on what is going on with individual articles, but there are some challenges ahead.  How will tenure committees analyze this stuff? (Will they bother?)  What does it mean if your article was downloaded only 300 times while your colleague (in a larger discipline like genetics) had an article downloaded 3,000 times?  All of the data being collected also opens the door to lots of analysis.  Librarians have traditionally used citation analysis as a way of understanding the literature of a community, and hopefully these new metrics will give them more tools to use.
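
If you want to play with this kind of analysis yourself, the shape of it is simple even if the sources are many: collect whatever counts you can get for each DOI and line the articles up side by side.  Here is a toy sketch – the fetch functions and the numbers in them are placeholders for whatever bookmarking, citation, and usage sources you actually query, not the PLoS metrics API:

```python
# Hypothetical sketch: combine per-article counts from several sources into
# one record per DOI. The fetch_* functions are placeholders standing in for
# whatever bookmarking, citation, and usage sources you actually query.
def fetch_bookmark_count(doi: str) -> int:
    return {"10.1371/example.0001": 12}.get(doi, 0)   # stand-in data

def fetch_citation_count(doi: str) -> int:
    return {"10.1371/example.0001": 4}.get(doi, 0)    # stand-in data

def fetch_download_count(doi: str) -> int:
    return {"10.1371/example.0001": 300}.get(doi, 0)  # stand-in data

def article_metrics(dois):
    """Return one dictionary of metrics per article DOI."""
    return [
        {
            "doi": doi,
            "bookmarks": fetch_bookmark_count(doi),
            "citations": fetch_citation_count(doi),
            "downloads": fetch_download_count(doi),
        }
        for doi in dois
    ]

for record in article_metrics(["10.1371/example.0001"]):
    print(record)
```

The hard part, as the download-count comparison above suggests, is deciding how to normalize those numbers across disciplines of very different sizes.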