With all of its flaws (and there are many), faculty can, for the moment, tell students to find articles that pass one crucial credibility test: peer review.
This is fairly easy for students to do, given a bit of instruction. Many databases will indicate whether something is peer reviewed (although they don’t always get it right), and most primary research articles are peer reviewed – you just need to be able to recognize one.
But peer review is changing. It isn’t going away anytime soon, but through a variety of trials, experiments, and advocacy efforts, it is changing. Cameron Neylon has argued in favor of doing away with the current peer review system altogether.
This may require a more informed readership: readers who understand what various metrics mean, and who rely more heavily on the general reputation of a journal (however that is measured).
All of this creates problems for your typical undergraduate.
Students who are just starting out don’t yet have the scientific knowledge of concepts and methods needed to evaluate the quality of a journal article on their own – that’s what they are in college to learn.
So when their professors ask them to write a paper or complete a project using high quality primary research articles, how will students filter the signal from the noise if the simple “peer-reviewed” credibility test no longer works?
I can think of a few things that may help them out, although none of them will be quite as simple as the current test is made to seem, and they may require a bit more instruction to bring students up to speed on these concepts.
- Use databases as a filtering tool. Databases like Scopus, Web of Science, and SciFinder are selective about which journals they include, so in theory they exclude content from the lowest-quality journals. Of course, this doesn’t stop bad papers from appearing in good journals. Faculty could limit students to articles found in a particular database.
- Increased prevalence of article-level metrics on publisher websites. Some journals (like PLoS ONE) already make this information prominent, and more are doing so. This would require more education (for both faculty and students) about what these metrics mean (and don’t mean). Faculty could ask students to use only articles that meet some minimum threshold (a rough sketch of what such a filter might look like follows this list).
- An expansion of rating networks like Faculty of 1000. We don’t have access to this resource at my institution, but we may see undergraduates relying more on this (and similar networks) to help them get a sense of whether an article is “worthy” or not. Students could be limited to using articles that had a minimum rating.
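To make the threshold idea a bit more concrete, here is a minimal sketch of how a “minimum metric” requirement might be applied to a reading list. Everything in it is hypothetical – the article records, the field names, and the citation floor are invented for illustration, and real article-level metrics would come from a publisher or aggregator and mean different things in different fields:

```python
# Hypothetical illustration: filter a reading list by a minimum article-level metric.
# The records, field names, and threshold below are invented for the example;
# a metric floor like this is an arbitrary course requirement, not a quality guarantee.

articles = [
    {"title": "Example paper A", "citations": 42},
    {"title": "Example paper B", "citations": 3},
    {"title": "Example paper C", "citations": 17},
]

MIN_CITATIONS = 10  # arbitrary floor set by the instructor


def meets_threshold(article, minimum=MIN_CITATIONS):
    """Return True if the article clears the (arbitrary) citation floor."""
    return article["citations"] >= minimum


usable = [a for a in articles if meets_threshold(a)]
for a in usable:
    print(a["title"], "-", a["citations"], "citations")
```

The same pattern would work with any other metric (downloads, bookmarks, ratings from a network like Faculty of 1000), which is exactly why students and faculty would need to understand what each number does and doesn’t signify before leaning on it.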
All of this is limiting. Hopefully, by the time students reach their senior year, faculty can stop imposing arbitrary requirements and simply ask for high-quality material, right?
What are some other techniques for evaluating scholarship that undergraduates may have to master as peer review changes?
I agree that “peer-reviewed” is not necessarily the best criterion for helping undergrads find appropriate resources for research. While I see how it serves as shorthand for a whole range of criteria, it really doesn’t address what the purpose is. This is especially true in transportation, where a lot of credible, high-quality primary research comes from government agencies and professional associations and isn’t considered peer-reviewed.
This reminds me of undergrads being forced to use “paper” resources as well.
Another resource will be “critical thinking” processes native to specific disciplines. I teach mostly undergrad premeds, and have recently begun using a couple of pages from the EBM tutorial module by Duke/NC (cc-licensed, of course!). At this point, I am only interested in the EBM pyramid and 5 questions they recommend for assessing scientific validity — we don’t go into creating clinical questions and all that! But at least it provides an intro to the sort of thinking they’ll need to use as health professionals and another way to assess the literature they encounter.