The results are nice, but what I really need to know is HOW you did it
For the past few days I’ve been at the 2014 Library Assessment Conference in Seattle, WA. It has been a great conference and I’ve got tons to think about and lots to do when I get back to work.
As I listen to all of the presentations, one of the biggest things that strikes me is that I wish folks would spend a bit more time on HOW they did the assessment. I like hearing about results and the changes that were made as a result, but what I really need now are methods.
And so I began compiling a list of the methods that librarians use when we assess our services (including student learning). There are lots of variations of each method, but I’m trying to think of the fundamental methods that assessment folks should learn about:
- Surveys and tests
  - How to write good questions? When is a survey appropriate (or inappropriate)? What can we learn (and what can’t we learn) from surveys?
- Focus groups
  - How can we recruit participants? How to structure the conversation? How to use the results?
- Observation
  - What to look for? How to record your observations?
- Structured interviews
  - How to write good interview questions? What are good interview techniques?
- Automatic capture (ILS, ILL, COUNTER, etc.)
  - What is your library currently collecting? How is it accessed? What are folks currently doing with it?
- Event capture (not sure what to call this, but I’m thinking of reference stats)
  - What are you currently capturing? How are you using that data?
- Collecting authentic work (student papers, faculty publications)
  - What to collect? How to encourage faculty and student participation?
- Statistical analysis of existing data (ILS, COUNTER, reference stats, etc.)
  - I think this is the area that scares librarians the most.
  - What types of statistical analysis do we need to know about? How can this help us?
- Data visualization
  - When is this most appropriate? What kinds of visualizations are the most helpful? What tools should we use?
- Rubrics
  - How can we develop a good rubric? When should we use such a time-intensive method?
- Content analysis
  - How do we develop coding categories? What software do we use? How to interpret and share the results?
- Citation analysis
  - What can this tell us? Do we look at student papers or faculty publications? What metrics might be the most helpful?
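To make the statistical-analysis item a little less scary, here is a minimal sketch of the kind of descriptive summary that is a sensible first step with data libraries already collect. The numbers are made up for illustration (they are not from any real library), and only Python's standard library is used:

```python
# Minimal sketch: descriptive statistics on hypothetical monthly
# reference-desk transaction counts (made-up numbers, not real data).
from statistics import mean, median, stdev

# One count per month (Sep-Aug) for two academic years
ref_counts = {
    "2012-13": [412, 389, 455, 301, 260, 198, 344, 402, 377, 290, 150, 98],
    "2013-14": [398, 371, 430, 288, 241, 180, 320, 385, 350, 270, 140, 90],
}

# Summarize each year: total, mean, median, and spread
for year, counts in ref_counts.items():
    print(f"{year}: total={sum(counts)}, mean={mean(counts):.1f}, "
          f"median={median(counts)}, stdev={stdev(counts):.1f}")

# A simple year-over-year percentage change is often the single most
# useful number to report to administrators.
totals = {year: sum(counts) for year, counts in ref_counts.items()}
change = (totals["2013-14"] - totals["2012-13"]) / totals["2012-13"] * 100
print(f"Year-over-year change: {change:.1f}%")
```

Nothing here requires special software, which is part of the point: before worrying about which advanced techniques to learn, it helps to get comfortable summarizing and comparing the data we already have.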
Obviously, each of these categories contains many variations on the theme, and expecting one person (or all library staff) to know about all of these methods is unrealistic. But we should probably be aware of what methods exist, so that when we need one, we can get some help in applying it.
What other methodologies are libraries using?