One of the things that faculty often complain about is that students don’t adequately track down and cite enough relevant material for their term papers and projects. This problem isn’t confined to undergraduates. A study in the January 4, 2011 issue of the Annals of Internal Medicine by Karen Robinson and Steven Goodman finds that medical researchers aren’t doing a very good job of citing previous research either.
Specifically, Robinson and Goodman looked at reports of randomized, controlled trials to determine if the authors cited previous, related trials. Citing previous trials is an important part of putting the results of the current trial in context, and in the case of medicine, may help save lives.
In order to do this study, the authors used meta-analysis to locate groups of related papers. They reasoned that if the studies were similar enough to group mathematically, they were similar enough to cite each other. They allowed for a 1-year gap between an original publication and a citation.
Overall, they found that only 25% of relevant papers were actually cited.
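As a rough illustration of the kind of check the authors describe (entirely my own sketch, not their code, with invented field names), the coverage calculation boils down to: for each trial, find the earlier trials in its meta-analysis group published at least a year before it, then see what fraction of those show up in its reference list.

```python
# Hypothetical sketch of the coverage check described above; the field names
# (pmid, pub_date, references) are my own invention, not from the study.
from datetime import date, timedelta

def prior_trials(trial, group, gap_days=365):
    """Trials in the same meta-analysis group published at least ~1 year earlier."""
    cutoff = trial["pub_date"] - timedelta(days=gap_days)
    return [t for t in group if t["pmid"] != trial["pmid"] and t["pub_date"] <= cutoff]

def citation_coverage(trial, group):
    """Fraction of citable prior trials that actually appear in the reference list."""
    eligible = prior_trials(trial, group)
    if not eligible:
        return None  # nothing this trial could reasonably have been expected to cite
    cited = sum(1 for t in eligible if t["pmid"] in trial["references"])
    return cited / len(eligible)

# Made-up example: the 2008 trial could have cited both earlier trials but cited only one.
group = [
    {"pmid": "1", "pub_date": date(2001, 3, 1), "references": set()},
    {"pmid": "2", "pub_date": date(2004, 6, 1), "references": {"1"}},
    {"pmid": "3", "pub_date": date(2008, 1, 1), "references": {"1"}},
]
print(citation_coverage(group[2], group))  # 0.5
```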
Why might a citation not be included? I can think of a few reasons.
- The authors couldn’t find the previous study
- The authors found the previous study but didn’t think it was relevant enough to cite
- The authors found the study and deliberately excluded it for some nefarious reason
Robinson and Goodman seem to favor the first explanation most of all:
The obvious remedy – requiring a systematic review of relevant literature [before an RCT is funded] – is hampered by a lack of necessary skills and resources.
This obviously speaks to the importance of information literacy skills for both undergraduates and medical students. One of the most troubling of the article's findings was Robinson and Goodman's determination that a very simple PubMed search could locate most of the articles on one of the topics assessed.
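For readers curious what such a search looks like in practice, here is a minimal sketch (my own, using NCBI's public E-utilities rather than the web interface; the query string is an invented example, not the one from the study):

```python
# Minimal programmatic PubMed search via NCBI E-utilities (esearch).
# The example query below is illustrative only.
import json
import urllib.parse
import urllib.request

def pubmed_search(query, max_results=100):
    """Return PubMed IDs matching a query via the esearch endpoint."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": query,
        "retmax": max_results,
        "retmode": "json",
    })
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["esearchresult"]["idlist"]

# e.g. pubmed_search('aspirin AND "myocardial infarction" AND randomized controlled trial[pt]')
```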
An interesting recommendation that Robinson and Goodman repeat throughout the article is to suggest that a description of the search strategy for prior results be included in the final published article (and they follow their own advice in an appendix to the article).

Of course, it is hard to believe that this problem is limited to just the authors of randomized, controlled trials in biomedicine. It wouldn't take much to convince me that this problem exists throughout scholarly work, restricting the speed at which new discoveries are made. I would bet that the problem can get particularly difficult in interdisciplinary areas.
We need to start with our undergraduates and convince them that it isn't enough just to find the minimum number of required sources; they need to really get at the heart of previous work on a topic. This leads naturally into the topic of getting students to pick manageable project topics. Of course, undergraduates like clear guidelines (and for the most part this is a good teaching strategy), but upper-level undergraduates should be able to handle the requirement that they find most of the relevant literature on a topic.
Robinson KA, & Goodman SN (2011). A systematic examination of the citation of prior research in reports of randomized, controlled trials. Annals of Internal Medicine, 154(1), 50-55. PMID: 21200038
See also:
- Trial in a Vacuum: Study of Studies Shows Few Citations (New York Times)
- Citation-amnesia paper published (ScienceNews)
You missed an obvious possible cause: the later researchers didn’t have access to the earlier papers. I would expect a clinical setting to be even more subscription-impoverished than a university.
Dorothea – you are exactly right. Of course, that opens up a whole new can of worms regarding the importance of access!
I agree with Dorothea. It's not enough simply to find relevant sources; to make an honest citation, one should be able to skim more than the abstract. Coming across the publication, particularly if it is in a more specialized or obscure source, may be difficult enough. Getting the article, if you do not have access to a library with a subscription, can be prohibitively expensive. Finally, even if you have access to recent articles, those written prior to online publication are even more painful to get hold of. This gets at the broader issue of access to science; a pet issue of mine is what to do about the lack of it. Until the recent advent of open-access publications, the scientific literature has been out of the reach of many researchers outside of well-funded institutions, let alone the public. Even now, open access does not have the degree of coverage that traditional publications do.
Thanks Shermin – This gets into an area that I have almost no knowledge of. Academic libraries, with a tradition of ILL (interlibrary loan), can often (almost always?) obtain articles that we don't have subscriptions to.
I have no idea how this works for researchers associated with hospitals or other institutions with small or non-existent libraries. Do those libraries participate in ILL? (Can they afford to?) Do they have the staff to make this efficient?
I suppose the next question would come down to identifying what went wrong – did the researchers fail to identify an article or did they fail to access it (for whatever reason)?
I am a little shocked at the discussion regarding this article. Academic libraries have ILL services that can obtain articles at a reasonable price, if not for free (because of agreements with other libraries). Clinicians who do not have access to a library should look into Loansome Doc (https://docline.gov/loansome/login.cfm), a National Library of Medicine tool which can be used with PubMed (http://www.ncbi.nlm.nih.gov/pubmed/) to order articles. There is also a National Network of Libraries of Medicine (http://www.nlm.nih.gov/network.html) that works with health organizations and offers a system to obtain articles at a reasonable price.
Also, PubMed Central (http://www.ncbi.nlm.nih.gov/pmc/) now has many articles available for free, thanks to the requirement that articles reporting NIH-funded research be submitted to PubMed Central. Here's the NIH public access policy page: http://publicaccess.nih.gov/.
Every researcher should make friends with their local librarian who can help them properly cite articles they use and find articles or relevant information to support their research!
Agree about the very serious problem of only being able to access abstracts for free (in the majority of cases). Basic scientific knowledge must not be locked up behind exorbitant, rent-seeking paywalls.
Also not happy with many abstracts themselves. I have read far too many (including in otherwise reputable mainstream journals) that seriously distort and misrepresent the contents of the full paper, including directly contradicting it. Unbelievable, I know; peer review is supposed to weed this shit out, but it happens frequently and is a grossly underappreciated problem in science. So relying on the abstract is not a good way to do reliable research.
If I could make only one change to the way science is currently done, it would be to require all peer reviewed scientific papers to be published in full, and for free.
(Obviously funding for the review and publishing process has to come from somewhere, but that can be built into the original funding grants, supplemented by one-off government grants to make all older, pre-internet-era papers available too.)
Just my 2 cents.
Another factor you haven’t mentioned is the tendency of many journals to restrict the number of citations that authors are allowed. It saves space but doesn’t encourage scholarship. Would be interesting to check the journals where the low-citation papers appeared.
The authors of this study actually considered this option but rejected it as an unlikely cause associated with the overall trend: “The possibility that journal space limitations are causing this lack of citation seems unlikely. We find it implausible that authors are being forced to limit themselves to 2 or fewer of their most critical citations by page or reference list limitations.”
This is particularly interesting to me: “An interesting recommendation that Robinson and Goodman repeat throughout the article is to suggest that a description of the search strategy for prior results be included in the final published article (and they follow their own advice in an appendix to the article).” Given the PRISMA guidelines for writing systematic reviews and meta-analyses and the AMSTAR tool for reading/assessing them, it doesn’t seem too far a stretch to adapt these to reports of experimental research as well. I teach mostly undergrads, and haven’t gone so far as to have them “chart” their search process, but it seems like a very good idea, and perhaps as valuable for long-term medical literacy as the writing assignment itself. I’ve begun using the basic assessment questions common in the EBM literacy modules to help students analyze research… developing a search-specific reporting instrument would be valuable (ooh, and we could call it “SSRI” and confuse the heck out of everybody!).