Right now I am neck deep in information literacy assessment survey results.
The instruction librarians and I spent a lot of time devising our instructional goals and objectives and then developing the assessment tool (for a one-shot info. lit. session in a first-year writing course). There were many meetings and a great deal of collaborative work.
Since analyzing the results falls largely to me, however, the survey design now seems like the easy part.
The assessment tool we designed includes many short answer questions, which take real thought to ‘grade’ effectively. We strongly feel that these questions provide a more accurate picture of student understanding, but they can be tricky to analyze.
For example, we asked the students how they can tell the difference between a scholarly and a popular source. I need to figure out how to mark this answer from one of our students:
it will say peer reviewed
Completely correct? Somewhat correct? Not correct?
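For a first pass over hundreds of responses, it can help to rough-sort answers by which cues they mention before reading each one closely. The sketch below is purely hypothetical — the keyword lists, category labels, and thresholds are invented for illustration, not our actual rubric, and no keyword match substitutes for a librarian actually reading the answer.

```python
# Hypothetical first-pass sorter for short-answer responses.
# Keyword lists and scoring thresholds are invented examples,
# not a real rubric -- a human still reads every answer.

def score_answer(answer: str) -> str:
    """Assign a rough rubric category based on which cues appear."""
    text = answer.lower()
    # Mentioning peer review is treated as the strongest single cue.
    strong_cues = ["peer review", "peer-review", "peer reviewed"]
    # Other markers of scholarliness count as weaker supporting cues.
    weak_cues = ["citation", "references", "author credentials", "journal"]

    has_strong = any(cue in text for cue in strong_cues)
    has_weak = any(cue in text for cue in weak_cues)

    if has_strong and has_weak:
        return "completely correct"
    if has_strong or has_weak:
        return "somewhat correct"
    return "not correct"

print(score_answer("it will say peer reviewed"))   # somewhat correct
print(score_answer("scholarly sources are longer"))  # not correct
```

Under this toy rubric, the student answer above lands in "somewhat correct" — it names the right cue but nothing else — which is roughly where my gut puts it too.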
So I will spend the next few days (weeks?) trying to figure out how to condense all of this information into a nice, neat package.
We’ll see how it goes.