The results are nice, but what I really need to know is HOW you did it

For the past few days I’ve been at the 2014 Library Assessment Conference in Seattle, WA. It has been a great conference and I’ve got tons to think about and lots to do when I get back to work.

As I listen to all of the presentations, one of the things that strikes me most is that I wish folks would spend a bit more time on HOW they did the assessment. I like hearing about results and the changes they prompted, but what I really need now are methods.

It is common sense to take a method and try it. If it fails, admit it frankly and try another. But above all, try something. - Franklin D. Roosevelt

And so I began compiling a list of the methods that librarians use when we assess our services (including student learning). There are lots of variations of each method, but I’m trying to think of the fundamental methods that assessment folks should learn about:

Data collection:

  • Surveys and tests
    • How to write good questions? When is a survey appropriate (or inappropriate)? What can we learn (and what can’t we learn) from surveys?
  • Focus groups
    • How can we recruit participants? How to structure the conversation? How to use the results?
  • Observations
    • What to look for? How to record your observations?
  • Structured interviews
    • How to write good interview questions? What are good interview techniques?
  • Automatic capture (ILS, ILL, COUNTER, etc.)
    • What is your library currently collecting? How is it accessed? What are folks currently doing with it?
  • Event capture (not sure what to call this, but I’m thinking of reference stats)
    • What are you currently capturing? How are you using that data?
  • Collecting authentic work (student papers, faculty publications)
    • What to collect? How to encourage faculty and student participation?

Data analysis:

  • Statistical analysis of existing data (ILS, COUNTER, reference stats, etc.)
    • I think this is the area that scares librarians the most
    • What types of statistical analysis do we need to know about? How can this help us? (See the sketch after this list for one small example.)
  • Data visualization
    • When is this most appropriate? What kinds of visualizations are the most helpful? What tools should we use?
  • Rubrics
    • How can we develop a good rubric? When should we use such a time-intensive method?
  • Content analysis
    • How do we develop coding categories? What software do we use? How to interpret and share the results?
  • Citation analysis
    • What can this tell us? Do we look at student papers or faculty publications? What metrics might be the most helpful?
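
For the statistical analysis and visualization items above, here is a minimal sketch of what a first pass might look like in Python with pandas and matplotlib. The file name and the columns it assumes ("date", "format", "duration_minutes") are hypothetical stand-ins for whatever your own system exports, so treat this as a starting point rather than a recipe.

```python
# A minimal sketch, not a prescription: summary statistics and one quick
# chart from a reference desk export. The file name and the columns
# ("date", "format", "duration_minutes") are hypothetical stand-ins for
# whatever your own system produces.
import pandas as pd
import matplotlib.pyplot as plt

stats = pd.read_csv("reference_stats.csv", parse_dates=["date"])

# Descriptive statistics: how long do transactions take?
print(stats["duration_minutes"].describe())

# How do questions break down by format (phone, walk-up, IM)?
print(stats["format"].value_counts())

# A simple visualization: questions per month
per_month = stats.groupby(stats["date"].dt.to_period("M")).size()
per_month.plot(kind="bar", title="Reference questions per month")
plt.tight_layout()
plt.show()
```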

Obviously, each of these categories contains many variations on the theme, and expecting one person (or all library staff) to know about all of these methods is unrealistic. But we should probably be aware of what methods exist, so that when we need one, we can get some help in applying it.

What other methodologies are libraries using?

Assessing Student Learning During Reference Transactions

Last week I had the pleasure of attending and presenting at the annual conference for SUNY librarians at the Fashion Institute of Technology in New York City (yes, FIT is a SUNY school!).

My exceptional colleague Kim Hoffman and I discussed a small project we did in the Fall 2011 semester to try to assess what students learned at the reference desk.

Our abstract for the presentation and our actual slides are below. You can also view the slides on Slideshare.net to see the speaker's notes.

Going Beyond Anecdotes: Assessing Student Learning During Reference Transactions

Reference services offer some of the most important teaching opportunities within academic libraries. While we typically assume that students learn from these interactions, we rarely have evidence to demonstrate what students actually learn. Librarians at many institutions track the skills taught via reference statistics-gathering programs, but we rarely ask students what they find most meaningful.

At SUNY Geneseo, we wanted to know what students were learning via reference transactions, beyond typical counts of reference questions or user satisfaction surveys. These transactions occur in several settings, including at the reference desk, during scheduled reference consultations, and through impromptu questions at various locations. Building on assessment techniques such as the One Minute Paper, traditionally used in library instruction settings, we gave students a survey after each reference transaction that simply asked, “What did you learn today from your meeting with the librarian?”

In order to categorize responses, librarians developed a list of commonly taught concepts, skills, and tasks seen via reference services and library instruction.  Student responses were assigned one or more items from this list of concepts, allowing us to easily evaluate which skills were most frequently reported.
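
As a concrete illustration of the tallying step, here is a small Python sketch. It assumes the hand-assigned codes have already been collected into lists, one per response; the concept labels themselves are invented for illustration and are not our actual coding list.

```python
# A sketch of tallying hand-coded responses. Each inner list holds the
# concepts a librarian assigned to one student's answer; the labels here
# are invented for illustration, not our actual coding list.
from collections import Counter

coded_responses = [
    ["choosing a database", "narrowing a topic"],
    ["citation formatting"],
    ["choosing a database"],
    ["interlibrary loan", "choosing a database"],
]

tally = Counter(code for response in coded_responses for code in response)
for concept, count in tally.most_common():
    print(f"{concept}: {count}")
```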

While this survey explores which concepts students report learning, it does not measure their actual mastery of the skills reported and is therefore an incomplete examination of student learning at the reference desk. Despite these limitations, this study offers a useful improvement to standard reference assessment efforts, typically based on assumptions and anecdotes.

Keeping track of it all

Along with every other department on campus, libraries are under increasing pressure to evaluate their services – everything from student learning outcomes to expenditures.

[Image: tally marks. Are your reference statistics a comedy or a tragedy? Image courtesy of Flickr user aepoc]

Assessing the value of libraries in these areas requires lots of information. Data of all kinds needs to be collected, analyzed, and shared. So what data do we collect, and where do we store it?

We have lots of silos for relevant information here in my library, and none of us are convinced that we are doing things in the best way possible.  Our collection of statistics related to reference and research help services provides one example.

The most obvious place where this happens is the reference desk. To keep statistics about what happens here, we use LibStats to record (see the sketch after this list):

  • the question itself,
  • the format (phone, walk-up, IM),
  • the patron type (student, community member, faculty member), and
  • how long it took to answer.
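
As a rough sketch of the shape of such a record (with hypothetical field names, not LibStats's actual schema), it might look like this:

```python
# A rough sketch of a desk-transaction record; field names are hypothetical
# and are not LibStats's actual schema.
from dataclasses import dataclass

@dataclass
class DeskTransaction:
    question: str           # the question itself
    format: str             # phone, walk-up, IM
    patron_type: str        # student, community member, faculty member
    minutes_to_answer: int  # how long it took to answer
```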

But our research help doesn’t end at the reference desk. One of the big services we provide is research consultations by appointment. Students (and faculty or community members) can request an appointment, and their request will be routed to the most appropriate librarian. (No one wants me answering in-depth research questions about primary sources in 17th-century European history, for example.)

These requests come via an online form that dumps information into a home-grown MS Access database.  For this kind of appointment-based research help we collect the same information recorded for reference desk questions, but also information about the student and the course the project is related to.

But our research help comes in other forms, too.  We have an email-based ask a librarian service, and we all get email questions directly from students and faculty.  At this point we aren’t very good at recording this type of information.  What system should we use?

We also aren’t very good at recording questions that come to us directly from faculty, whether by email, by phone, or in person.

And I haven’t even started to discuss the challenges of assessing the student learning outcomes associated with research and reference help services.

As a result of all this, it is difficult to get one complete picture of our involvement in research across campus.  It’s something we are currently working to resolve.
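
One possible approach, sketched here purely as an illustration and assuming each system can at least export a CSV: tag every record with the channel it came from and stack everything into one table. The file names and columns below are hypothetical.

```python
# A sketch of pulling the silos into one picture, assuming each system can
# export CSV. File names and columns are hypothetical.
import pandas as pd

desk = pd.read_csv("libstats_export.csv")
desk["channel"] = "reference desk"

consults = pd.read_csv("consultation_requests.csv")  # from the Access database
consults["channel"] = "research consultation"

email = pd.read_csv("email_questions.csv")  # if/when we start recording these
email["channel"] = "email"

combined = pd.concat([desk, consults, email], ignore_index=True)

# One complete picture: how much research help, through which channels?
print(combined.groupby("channel").size())
```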

And the biggest question that will influence how we do this is

“What do we want to do with this information?”

Change our services? Change our staffing levels? Merely collecting the data won’t be of use to anyone. The answers to these questions will influence the type of data that we collect and the tools we use to collect it.

And once we figure out all that, then we just need to remember to record everything.

Assessing Undergraduate Research Experiences

As part of some work I’ve been doing this summer for several different projects, I’ve started compiling a list of papers related to the assessment of undergraduate research experiences.  These papers include works that do simple surveys of student attitudes and papers that try to measure student learning outcomes (which is more difficult).

In order to share this list and hopefully get other folks to add to it, I’ve created an open group on Mendeley.

A collection of articles that discuss assessment methods used to describe a variety of undergraduate research experiences, including course-based research and traditional mentored research. Assessment methods include indirect surveys of student attitudes, direct methods of assessing student learning outcomes and many other strategies.

So far, the list has a modest 11 citations, but I’ll probably be adding a few more over the next few days.

So, if you’re using Mendeley, join the group and add resources to the list.

If you’re not using Mendeley, why not?

Asking the right questions

Sometimes, when I am asked to teach a library session for upper-level classes, I will ask the faculty members what their students know about a particular aspect of library research. They, in turn, will ask their students some variation on “Do you know how to use Scopus?” Unfortunately, the answers to this question are often less than useful: almost all students will say yes, because they can all type a few keywords into the search box.

This problem is apparent in the formative assessment I do for an upper-level class on evolution. We want to know what the students know about the primary literature, so we ask them to fill out a survey prior to the library session. Students are asked two open-ended questions:

  1. What is a primary research article?
  2. How can you distinguish a primary research article from other types of scientific articles (websites, conference proceedings, review articles, news articles, etc.)?

At this level, the students can easily give reasonably well-thought-out answers to these questions. But these questions don’t evaluate whether students can apply that knowledge. So we add one more task to the formative survey.

Students are asked to look at six different examples of the scientific literature and indicate which ones would be considered “primary research.” These are the items I used this semester (sorry, many of these are behind a paywall):

  1. Reproductive skew and selection on female ornamentation in social species – A Letter from Nature; the methods are summarized at the end, and complete methods are in the supplemental material
  2. The fickle Y chromosome – A news story from Nature news reporting on a primary research article in the same issue
  3. The Evolution of Early Foraminifera – A primary research article from PNAS
  4. Genetic recombination – A Wikipedia article
  5. Defeating Pathogen Resistance: Guidance from Evolutionary Theory – A commentary
  6. Evolutionary dynamics of a natural population: the large cactus finch of the Galapagos – A book

When I get to class, we start a discussion of the different types of scientific literature using these selections as a guide.

Most students understand that #1 is a primary research article, and a slightly smaller percentage pick #3 as well (the general title seems to throw them off).

None of the students pick #2 (news) or #4 (Wikipedia), but we talk about how resources like these can still be useful to them – for mining their reference lists or for help in understanding complicated topics.

The commentary (#5) is a format that is usually new to them, and we talk a bit about where commentaries can be found and what purposes they serve (which are highly varied).

The book (#6) is the most confusing for the students; about half of them will call it primary literature. The book presents the final results of a “10 year study”, most of which has previously been published in peer-reviewed articles, but probably not all. This particular example sits in a bit of fuzzy territory, and it provides us with an opportunity to discuss the role of books in the scientific literature.

I like teaching this class, and I think it’s useful for the students.  It shows the importance of asking the right questions – asking someone “Can you do this?” might get a very different response than “Show me how you do this.”