Teaching the Mechanics of Citation Styles

There are many, many parts of my job that I love. Teaching students the mechanics of a citation style is not one of them. I don’t mind teaching about many aspects of citations, including effective use of in-text citation, or even technology sessions on using tools like Mendeley. But teaching the basic “this is what an NLM article citation looks like” session is one of my least favorite parts of my job.

This is partly because I can completely sympathize with students when they complain about the proliferation of citation styles – it doesn’t make much practical sense.

It’s also probably because my style of teaching about citation styles isn’t very exciting.

My basic plan starts with a PowerPoint presentation in which I discuss the following:

  • Why we use specific styles – it isn’t just to annoy undergraduates; it is to facilitate clear communication among scholars.
  • Specific rules for articles, books, or websites in the selected citation style – especially the bits that tend to trip students up
  • Resources to make all of this easier – “bibliography” output buttons in databases, reference managers like Mendeley and Zotero

This is normally followed by an in-class practice session where students are given a sample article, website, book, etc. and asked to create a citation.

I follow up with a homework assignment via the LMS in which I ask them to create a properly formatted citation for a resource they will use in an upcoming assignment. I provide feedback, so they end up with at least one good citation for their project.

I believe that this information is useful to students, and the faculty who ask for such a session believe it is worth giving up class time for, but it isn’t the most interesting material to teach.

So I put the question to the universe – what are some teaching strategies that can make the boring fundamentals of citation styles more engaging (to both me and my students)?

How will undergraduates navigate a post-peer-review scholarly landscape?

With all of its flaws (and there are many), faculty can currently tell students to find articles that pass one crucial credibility test: peer review.

This is pretty easy for students to do, given a bit of instruction.  Many databases will indicate if something is peer reviewed (although they don’t always get it right), and most primary research articles are peer reviewed – you just need to be able to recognize one.

But peer review is changing. It isn’t going away anytime soon, but through a variety of trials, experiments, and advocacy efforts, it is changing. Cameron Neylon has argued in favor of doing away with the current peer review system altogether.

A post-peer-review landscape may require a more informed readership: readers who understand what various metrics mean, and who rely more heavily on the general reputation of a journal (however that is measured).

All of this creates problems for your typical undergraduate.

When they are just starting out, students don’t have the required scientific knowledge of concepts and methods to adequately evaluate the quality of a journal article on their own – that’s what they are in college to learn.

So when their professors ask them to write a paper or complete a project using high quality primary research articles, how will students filter the signal from the noise if the simple “peer-reviewed” credibility test no longer works?

I can think of a few things that may help them out, although none of them will be as simple as the current peer-review check. They may also require a bit more instruction to bring students up to speed on these concepts.

  • Use the databases as a filtering tool. Databases like Scopus, Web of Science, and SciFinder select which journals to include. Theoretically, they wouldn’t include content from really poor-quality journals. Of course, this doesn’t stop bad papers from appearing in good journals. Faculty could limit students to articles found in a particular database.
  • Increased prevalence of article level metrics on publisher websites.  Some journals already make this information prominent (like PLoS ONE) and more are doing so.  This would require more education (for both faculty and students) about what these metrics mean (and don’t mean).  Faculty could ask students to only use articles that meet some minimum threshold.
  • An expansion of rating networks like Faculty of 1000.  We don’t have access to this resource at my institution, but we may see undergraduates relying more on this (and similar networks) to help them get a sense of whether an article is “worthy” or not.  Students could be limited to using articles that had a minimum rating.

All of these approaches are limiting. Hopefully, by the time students reach their senior year, faculty can drop the arbitrary requirements and simply ask for high-quality material, right?

What are some other techniques for evaluating scholarship that undergraduates may have to master as peer review changes?

Miscellaneous things I’ve learned lately

  • Genetics researchers at undergraduate institutions can have a hard time finding projects that are interesting enough to get funding, but not so interesting that a larger lab swoops in to do the work.
  • Our students got more books through ILL last year than they checked out from our own collection. (Special thanks to the IDS project for getting these books to students in 5 days on average)
  • Two of the undergraduates I’ve taught in information literacy sessions are interested in science librarianship.
  • This quote: “Facts are not science – as the dictionary is not literature.” –Martin Fischer, Fischerisms
  • My two-year-old will eat cucumbers, just not the skin
  • My organization is switching from Oracle Calendar to Google Calendar (yay!), but we will have to run the two systems simultaneously for a little while.

Talking with faculty

Over at Confessions of a Science Librarian, John Dupuis has set out a delightful “Stealth Librarianship Manifesto” that echoes many of the comments I have made about how librarians need to get out of the library (physically and virtually) and interact with our users in their spaces, including conferences and publications.

At my library, we are currently working through a big project to help us do just that. We have a relatively new “scholarly communications” team, and our goal over the next 6 months or so is to talk with faculty members across campus to learn about what they are doing. I’ve mentioned this project before, and noted that there are some resources available to help folks understand various disciplines. It is vitally important for us to understand what is going on across our campus. Our faculty are amazing, but they face different pressures than the folks at research universities.

So every week I meet with two or three faculty from the disciplines I serve and chat with them about their research and publication efforts:

  • What are they working on right now?
  • Are they incorporating undergraduates into their research?  Have they co-authored publications with these students? (Quite often)
  • How do they select which journal to publish in?  Do they pay attention to impact factors or not? (Although my faculty pay attention to general reputation, they rarely mention the metrics)
  • Have they posted a copy of one of the publications online?  Do they know if they kept the right to do so? (They have no idea what rights they have to their papers)
  • What kinds of data are they producing?  What do they do with it? (I’ve already learned a lot about the distinctions between the theorists and the applied folks in math and computer science)

The conversations I have had so far have been incredibly interesting and educational. I serve six departments (Biology, Chemistry, Computer Science, Geological Sciences, Mathematics, and Physics & Astronomy). My educational background is in Geology, so I don’t have a native understanding of what the mathematicians or physicists are doing, for example. These conversations have given me remarkable glimpses into our faculty’s values, assumptions, and goals.

One of the important distinctions I’ve noticed is the disconnect between the highly active online science community (bloggers, tweeters, etc.) and your average, run-of-the-mill faculty member. Scholarly communication may be changing, but many of the faculty I’ve talked with (including those who are still publishing actively) are barely aware of some of the fascinating changes and experiments taking place.

So far, I’ve only had a chance to talk with 13% of the faculty I work with, and an upcoming maternity leave will delay my conversations with some, but it has been an incredible experience so far, and I look forward to the rest.

It isn’t just students: Medical researchers aren’t citing previous work either

One of the things that faculty often complain about is that students don’t adequately track down and cite enough relevant material for their term papers and projects.  This problem isn’t confined to undergraduates.  A study in the January 4, 2011 issue of the Annals of Internal Medicine by Karen Robinson and Steven Goodman finds that medical researchers aren’t doing a very good job of citing previous research either.

Specifically, Robinson and Goodman looked at reports of randomized, controlled trials to determine if the authors cited previous, related trials. Citing previous trials is an important part of putting the results of the current trial in context, and in the case of medicine, may help save lives.

To do this study, the authors used meta-analyses to locate groups of related papers. They reasoned that if the studies were similar enough to group mathematically, they were similar enough to cite each other. They allowed for a 1-year gap between an original publication and a citation.

Overall, they found that only 25% of relevant papers were actually cited.
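To make that figure concrete, here is a minimal Python sketch of the proportion calculation as I understand it from the method described above. The function name and the toy data are my own invention, not Robinson and Goodman’s actual code or data.

```python
# Hypothetical sketch of the prior-citation calculation described above:
# a prior trial is "citable" if it was published at least `gap` years
# before the new trial, and we measure what share of those it cites.

def prior_citation_rate(trial_year, trial_refs, related_trials, gap=1):
    """trial_refs: set of PMIDs cited by the new trial.
    related_trials: PMID -> publication year, for trials grouped
    together by the same meta-analysis."""
    eligible = {pmid for pmid, year in related_trials.items()
                if year <= trial_year - gap}
    if not eligible:
        return None  # nothing the authors could reasonably have cited
    return len(eligible & trial_refs) / len(eligible)

# Toy example: a 2010 trial cites only 1 of its 4 eligible predecessors.
rate = prior_citation_rate(
    trial_year=2010,
    trial_refs={"111", "555"},
    related_trials={"111": 2004, "222": 2006, "333": 2008, "444": 2009},
)
print(rate)  # 0.25
```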

Why might a citation not be included?  I can think of a few reasons.

  • The authors couldn’t find the previous study
  • The authors found the previous study but didn’t think it was relevant enough to cite
  • The authors found the study and purposefully excluded it for some nefarious purpose

Robinson and Goodman seem to favor the first explanation most of all:

The obvious remedy – requiring a systematic review of relevant literature [before an RCT is funded] – is hampered by a lack of necessary skills and resources.

This speaks to the importance of information literacy skills for both undergraduates and medical students. One of the most troubling of the article’s findings was Robinson and Goodman’s determination that a very simple PubMed search could locate most of the articles on one of the topics assessed.
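As a rough illustration of what a “very simple PubMed search” might look like when scripted, here is a sketch using Biopython’s Entrez module. The query term and email address are placeholders of my own; Robinson and Goodman’s actual strategies are in the paper’s appendix.

```python
# Illustrative PubMed search via Biopython's Entrez interface.
# The query below is a placeholder, not one of Robinson and
# Goodman's actual search strategies.
from Bio import Entrez

Entrez.email = "librarian@example.edu"  # NCBI asks for a contact address

handle = Entrez.esearch(
    db="pubmed",
    term='aprotinin AND "cardiac surgery" AND randomized controlled trial[pt]',
    retmax=50,
)
result = Entrez.read(handle)
handle.close()

print(result["Count"])   # total number of matching records
print(result["IdList"])  # PMIDs for the first 50 hits
```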

An interesting recommendation that Robinson and Goodman repeat throughout the article is that a description of the search strategy used to find prior results be included in the final published article (and they follow their own advice in an appendix to the paper).

[Figure: Robinson and Goodman’s search strategy for finding the meta-analyses that were used to locate the randomized, controlled trials.]

Of course, it is hard to believe that this problem is limited to the authors of randomized, controlled trials in biomedicine. It wouldn’t take much to convince me that this problem exists throughout scholarly work, slowing the pace at which new discoveries are made. I would bet the problem is particularly difficult in interdisciplinary areas.

We need to start with our undergraduates and convince them that it isn’t enough to find the minimum number of required sources; they need to really get at the heart of previous work on a topic. This leads naturally into helping students pick manageable project topics. Of course, undergraduates like clear guidelines (and for the most part this is a good teaching strategy), but upper-level undergraduates should be able to handle the requirement that they find most of the relevant literature on a topic.

Robinson, K.A., & Goodman, S.N. (2011). A systematic examination of the citation of prior research in reports of randomized, controlled trials. Annals of Internal Medicine, 154(1), 50-55. PMID: 21200038
