Stop saying “print” when that’s not what you really mean

At the reference desk, I occasionally see students who are looking for only “print” resources. Their professors have asked them to find journal articles or books, but are requiring them to use “print” resources. The challenge here is that their professors don’t really mean “print.” Most often, they want their students to find formally published, peer-reviewed, or scholarly sources, not blogs, wikis, or random websites. They use the term “print” because in previous decades, these types of sources would have been found as physical copies. I understand what they are trying to tell students, but students don’t understand this no-longer-relevant distinction.

I would guess that about 97% of all the journal articles that the students at my institution have access to (not including ILL) are only available to them online. Of the print journals available at my institution, the majority of these are older volumes that we haven’t (yet) replaced with online back-files. If students stuck to the terminology used in the assignment, it would mean a vast body of research was unavailable to them.

Of the online items, some journals still publish a print version, but many do not. Some high-quality journals were born digital, and many others have stopped publishing print versions due to decreased demand.

A requirement of “print”-only resources would also exclude eBooks. My institution has access to 35,000 eBooks, and we will soon be getting a collection of 25,000 more. Are these excluded because of the mode of access?

An undergraduate student, particularly a first or second year student, is still trying to figure out the difference between scholarly and not-scholarly sources. It takes some practice for them to understand the different types of sources that are available online. Asking them to figure out if a particular source is the kind of thing that may have once been published in print is not practical.

So instead of asking students to use “print” resources, be more specific.

“For this project, you may use peer-reviewed scholarly sources and published books.”


“For this project, you may use scholarly journals, magazines, newspapers, or books.”

[Rant over.]


John Oliver as an Instruction Librarian

I’ve been a big John Oliver fan since his time on The Daily Show and his weekly podcast with Andy Zaltzman, The Bugle, but I never knew just how effective he could be as an instruction librarian. Just watch the video below as he carefully examines the emerging media form of native advertising and the problems associated with this unholy alliance between the business and editorial sides of media organizations.

Update: Just yesterday, Advertising Age published an article about the New York Times shrinking the labeling of its native advertisements. It is difficult enough for people to recognize the difference between real ads and news stories; this change makes things much harder.

Student learning at the reference desk: bringing home notes

For as long as there have been reference desks in libraries, there has been debate and discussion about the nature of the reference transaction.  In some cases, the reference transaction is simply a question and answer exchange. The patron asks the question, the librarian finds the answer and passes it along:

Q: Who was the 32nd president of the United States?

A: Franklin D. Roosevelt

But in many cases, the reference transaction is about much more than simply providing answers; it’s about teaching the patron how to find the answer themselves.

Q: Who was the 32nd president of the United States?

A: Well, a quick Google search leads us to the Wikipedia list of presidents.  Here it lists Franklin D. Roosevelt as the 32nd president, and provides some citations for that information.  If you are just curious, the Wikipedia list will be perfect, but if you need to cite this in a paper, you might want to refer to the White House website, which would be more authoritative and provides some good biographical information.  Do you need to find more information about Roosevelt?  If so, we may have some biographies of him in our collection….

Just like any other learning opportunity, a big part of the whole experience is retention – do the students/patrons remember what you taught them an hour from now, a week from now, or a month from now?

Taking notes is a useful tool in the learning process. CC image courtesy of Flickr user geekcalendar.

With that in mind, librarians at my library will be working on some new practices for the Spring semester.  Building on the tendency of librarians to jot down search terms or possible databases while working with a patron, we will be making a more concerted effort to write down notes as we answer a question and give those notes to the student when we are done.  The idea here is that students will be better able to retain the knowledge they gained if they can refer back to the notes that were taken.

We’d like to try and capture information on this student learning, so we are going to try two things.  First, we will be using standardized carbonless duplicate note-taking forms.  This way the student gets a copy of the notes, and we can retain a copy for future study.  Second, we hope to combine this with an assessment of student learning at the reference desk (or at least an assessment of what students think they learned at the reference desk) by asking students to fill out a brief survey asking them what they learned.

Hopefully we can be more deliberate in making sure students walk away with a record of the transaction and this will increase the learning that goes on during a reference transaction.

I’ll let you know how it works out.

Guest post on ACRLog

Readers of this blog may be interested in a guest post I wrote for the Association of College and Research Libraries blog, ACRLog.

Last week I taught an information literacy class to a group of senior Chemistry students. We didn’t talk about databases or indexes, we talked about numbers. We talked about impact factors and h-indexes and alternative metrics, and the students loved it. Librarians have used these metrics for years in collection development, and have looked them up to help faculty with tenure and promotion packets. But many librarians don’t know where the numbers come from, or what some of the criticisms are.

Read the rest of the post here.

Where should our information literacy standards come from?

From the ACRL? Or from the disciplinary organizations?

The ACRL Information Literacy standards have often frustrated me.  I struggle to find their usefulness to my day-to-day work, since the content of most of my information literacy sessions comes from conversations with the professor of the class, and are geared directly to student assignments.  As such, use of the standards usually involves fitting what I’m already doing back into the structure of the standards.  And because the standards are meant to apply to all disciplines, they suffer from being both too vague and too specific at the same time.

I also don’t find them very useful when it comes to convincing faculty members that their students need to learn information literacy skills.

On the other hand, the information literacy standards that come from disciplinary organizations like the ACS and the APA might actually be useful.

First, the faculty members might actually care about them.  Let’s be honest, when was the last time a faculty member was concerned about their students meeting the standards set out by the ACRL?  They are busy enough trying to meet their own standards and goals.

Secondly, because the disciplinary standards have been developed by faculty in the disciplines, they are more likely to align with the skills needed in those particular disciplines.  They are more likely to provide practical guidance about what to teach students, how papers and projects can be geared to meet the standards, and how this can be assessed.

Oh, and the disciplinary standards are typically shorter.

Am I abandoning the ACRL standards completely?  Probably not. But I would encourage librarians to make sure they are aware of any education related standards and outcomes set forth by disciplines they work with.  It might be useful.

Teaching students about retractions

Cover of the July 22, 2011 issue of Science

I’ve been kicking around some thoughts lately about why and how to teach students about retractions and rebuttals of scientific papers.  I recently wrote a bit about researcher use (or lack thereof) of rebuttals, and NPR made me aware of this recent high profile retraction from Science of a prominent paper about the genetic component of longevity.  So the concepts have been floating around in my head for a bit, and as I brainstorm, it seems to come down to Who, What, Where, When, Why and How.

Who?  Either a faculty member or a librarian can teach students about retractions.  It might be easier for a faculty member to partner with a librarian instead of developing the class material from scratch.

What?  We teach students about peer review, about how the goal of the process is to make sure published articles contain sound methods and reasonable data analysis.  Perhaps we need to go the extra step and talk about the limitations of peer review.  The NPR story showcases a great example:

In an email, Science editor-in-chief Bruce Alberts points out that research papers are built on a wide variety of new, highly complex technologies. Finding a team of reviewers with all of the needed expertise is tricky. And how many reviewers are enough to be sure nothing slips through? The answer, Alberts says, is not always clear.

Criticisms of peer review are nothing new, and students should learn what peer review is good at, and what it’s bad at.

We can also talk about the less common cases of scientific fraud.

Importantly, we can also teach students strategies to determine if a paper they want to use has been retracted.  While some online systems make this clear, not all of them do.  Citation searching is one strategy that can help, and teaching students to do a complete literature review (not just taking the first three references you find) can be useful.

Where and When?  Students in introductory classes are just starting to figure things out.  What is a journal article?  Why can’t I use Wikipedia?  While it is tempting to introduce everything related to finding and using the scientific literature at once, I’m not sure it’s practical.  Perhaps peer review is introduced to students as sophomores, but we wait to discuss retractions (and science ethics) until junior or senior seminars?  Perhaps it is a good fit for a session in which citation tracking is discussed?

Why? Why take important class time to discuss these issues?  One of the main goals of undergraduate science education is to prepare students to think critically.  Neglecting this topic may give students the impression that peer review is sacrosanct, and discourage them from critically analyzing the methods and data analysis sections of papers they read.  In addition, students are learning to be scientists. Learning about the highs (publication) and lows (retraction) of scientific communication is an important part of their education.

How?  Class discussions about peer review and science ethics can lead naturally to a discussion of retractions and rebuttals.  Hands-on sessions focused on teaching students to search for primary research articles can include exercises focused on citation searching and include examples of retracted papers.  Controversial topics in science may provide term paper and project opportunities that allow students to research scientific disagreements.  Easy to read commentary on sites like Retraction Watch or news articles can provide students with the background needed to understand the issues.

What other strategies can we use to teach students about these concepts?

Update (7/27/2011): This excellent blog post over on Retraction Watch might be a useful reference or reading for students.  It discusses a recent article in the journal Science and Engineering Ethics examining why journal editors retract – or don’t retract – articles.

The really hard part about research isn’t the databases

This isn’t news to librarians, but I find that students (and occasionally faculty) get caught up in equating research with the research databases.

Take two recent examples.

This week I met with some math students who were looking for scholarly and non-scholarly articles and information to help them solve a particular mathematics problem.  They had been using MathSciNet and Google to help them find appropriate information, with mixed results.  I was able to provide them with a few technical tips on improving their searches (+ and – operators in Google, subject classifications in MathSciNet, etc.), but that wasn’t what they really needed.  What was most useful to them was an opportunity to think through all the different aspects of their problem.  In this case, I merely acted as a facilitator – my three semesters of calculus did not give me the required knowledge to help them come up with synonyms or alternative search terms.  But I could ask questions: What are the various aspects of your problem?  What are some alternative terms that would define these aspects?  What are the various approaches you’ve tried to solve the problem?

When their faculty adviser asked me to meet with them, he thought our discussion might focus more on the technical aspects of the databases.  He sat in on the session, and was vital in helping the students brainstorm their additional search terms.  I think he learned a bit about guiding students through the research process (plus a couple of tips about searching Google), and our collaboration really benefited the students.

In another case, I had a student who needed to write a paper describing some aspect of the effect of the hypothalamus on the pituitary gland.  For her, the massive quantity of information on this topic was overwhelming, and I was able to provide her with some guidance on how to focus her topic.  We looked at a Wikipedia article that listed some specific hormones and their specific effects.  We looked at some search results lists and picked out a few topic ideas.  We talked about how she can use the background information she is collecting from textbooks to select a focus.  And only then did we talk about taking that focus back into the databases to find relevant information for her project.

Overall, she came away with a much better strategy for completing her project.  It wasn’t about “click here, then click there”.

At the same time, most of our faculty-initiated requests for library instruction sessions start out with a database name.

I’m not saying that this isn’t important – it is – and there are some tricky technical issues involved in navigating our OpenURL system too.  But I would argue that for many students, it is easier to learn a search interface on their own than it is to develop an overall strategy for completing the information gathering portion of their projects.

How will undergraduates navigate a post peer-review scholarly landscape?

With all of its flaws (and there are many), faculty at the moment can tell students to find articles that pass one crucial credibility test: peer review.

This is pretty easy for students to do, given a bit of instruction.  Many databases will indicate if something is peer reviewed (although they don’t always get it right), and most primary research articles are peer reviewed – you just need to be able to recognize one.

But peer review is changing.  It isn’t going away anytime soon, but through a variety of trials and experiments and advocacy, it is changing.  Cameron Neylon has argued in favor of doing away with the current peer review system altogether.

This may require a more informed readership, readers who understand what various metrics mean, and a greater reliance on understanding the general reputation of a journal (however this is measured).

All of this creates problems for your typical undergraduate.

When they are just starting out, students don’t have the required scientific knowledge of concepts and methods to adequately evaluate the quality of a journal article on their own – that’s what they are in college to learn.

So when their professors ask them to write a paper or complete a project using high quality primary research articles, how will students filter the signal from the noise if the simple “peer-reviewed” credibility test no longer works?

I can think of a few things that may help them out, although it won’t be quite as simple as it’s made to seem now. This may also require a bit more instruction to bring students up to speed on these concepts.

  • Use the databases as a filtering tool.  Databases like Scopus and Web of Science and SciFinder select which journals to include.  Theoretically, they wouldn’t include content from the really poor quality journals.  Of course, this doesn’t stop bad papers from appearing in good journals.  Faculty could limit students to articles found in a particular database.
  • Increased prevalence of article level metrics on publisher websites.  Some journals already make this information prominent (like PLoS ONE) and more are doing so.  This would require more education (for both faculty and students) about what these metrics mean (and don’t mean).  Faculty could ask students to only use articles that meet some minimum threshold.
  • An expansion of rating networks like Faculty of 1000.  We don’t have access to this resource at my institution, but we may see undergraduates relying more on this (and similar networks) to help them get a sense of whether an article is “worthy” or not.  Students could be limited to using articles that had a minimum rating.

All of this is limiting.  Hopefully, by the time students reach their senior year, faculty could stop making arbitrary requirements and simply ask for high quality material, right?

What are some other techniques for evaluating scholarship that undergraduates may have to master as peer review changes?

It isn’t just students: Medical researchers aren’t citing previous work either

One of the things that faculty often complain about is that students don’t adequately track down and cite enough relevant material for their term papers and projects.  This problem isn’t confined to undergraduates.  A study in the January 4, 2011 issue of the Annals of Internal Medicine by Karen Robinson and Steven Goodman finds that medical researchers aren’t doing a very good job of citing previous research either.

Specifically, Robinson and Goodman looked at reports of randomized, controlled trials to determine if the authors cited previous, related trials.  Citing previous trials is an important part of putting the results of the current trial in context, and in the case of medicine, may help save lives.

To conduct this study, the authors used meta-analyses to locate groups of related papers.  They reasoned that if the studies were similar enough to group mathematically, they were similar enough to cite each other.  They allowed for a 1-year gap between an original publication and a citation.

Overall, they found that only 25% of relevant papers were actually cited.

Why might a citation not be included?  I can think of a few reasons.

  • The authors couldn’t find the previous study
  • The authors found the previous study but didn’t think it was relevant enough to cite
  • The authors found the study and purposefully excluded it for some nefarious purpose

Robinson and Goodman seem to favor the first explanation most of all:

The obvious remedy – requiring a systematic review of relevant literature [before an RCT is funded] – is hampered by a lack of necessary skills and resources.

This obviously speaks to the importance of information literacy skills in both undergraduates and medical school students.  One of the most troubling of the article’s results was Robinson and Goodman’s determination that a very simple PubMed search could locate most of the articles on one of the topics assessed.

An interesting recommendation that Robinson and Goodman repeat throughout the article is to suggest that a description of the search strategy for prior results be included in the final published article (and they follow their own advice in an appendix to the article).

Robinson and Goodman's search strategy to find the meta-analyses used to locate the randomized control trials

Of course, it is hard to believe that this problem is limited to just the authors of randomized, controlled trials in biomedicine.  It wouldn’t take much to convince me that this problem exists throughout scholarly work, restricting the speed at which new discoveries are made.  I would bet that the problem can get particularly difficult in interdisciplinary areas.

We need to start with our undergraduates and convince them that it isn’t enough to just find the minimum number of required sources; they need to really get at the heart of previous work on a topic.   This leads naturally into the topic of getting students to pick manageable project topics.  Of course, undergraduates like clear guidelines (and for the most part this is good teaching strategy), but upper-level undergraduates should be able to handle the requirement that they find most of the relevant literature on a topic.

Robinson KA, & Goodman SN (2011). A systematic examination of the citation of prior research in reports of randomized, controlled trials. Annals of internal medicine, 154 (1), 50-5 PMID: 21200038

A librarian in a peer editing session

On Wednesday, I stayed a bit late after work to attend a peer editing session for a class I’ve been working with all semester.  This wasn’t in our original plan, but a few weeks ago it made sense that perhaps I could offer students some assistance with their citations during the same session where they were reviewing their peers’ writing.  We didn’t have a well-thought-out plan for my participation, but decided to give it a try.

It was an incredibly good use of my time.

Wikipedian Protester. From the wonderful web comic xkcd

I had previously reviewed the mechanics of citation with the students, as well as discussed some best practices of in-text citation:

  • When to cite
  • What doesn’t need to be cited (and what the lack of a citation implies)
  • How to use authors’ names as the subject of a sentence (and avoid the passive voice)

In the peer editing session, I simply went from group to group looking at papers, making suggestions and answering questions.  Students asked a lot more questions than they would have if they had to seek me out (via email or in my office), and one student’s question would often help out another student.

Overall, I would do it again.  In classes where I know they do peer editing, I may volunteer to come in and help with citation related questions.