Most of the students (and some of the faculty) I work with have no idea what a DOI is or why they should care. This is what I tell them.
A DOI – Digital Object Identifier – is like a social security number for a journal article. DOIs can be applied to other digital items as well, but you are most likely to encounter them attached to scholarly articles.
A DOI normally consists of numbers, letters, and punctuation. It will look something like this:
The DOI provides a way to permanently find a particular item. Publishers and scholarly societies change their websites all the time. Recently, a major publisher completely redesigned their website, breaking all existing links into their site. I was quite annoyed. But a DOI could still link you to an article in a way that a URL couldn't.
Incidentally, you can use the DOI to create a nice, neat, compact URL for a journal article (instead of the really long URLs provided by some databases). You just need to add a little bit to the front of the DOI:
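As a minimal sketch, this is all that "adding a little bit to the front" amounts to – prepending the doi.org resolver address. (The DOI shown is a made-up example used for illustration only.)

```python
def doi_to_url(doi):
    """Build a resolver URL from a DOI. The resolver redirects to the
    article's current location, even if the publisher reorganizes its site."""
    return "https://doi.org/" + doi

# Hypothetical DOI, for illustration only
print(doi_to_url("10.1000/xyz123"))
# https://doi.org/10.1000/xyz123
```

Paste the resulting URL into a browser and the resolver forwards you to the article, wherever the publisher currently hosts it.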
This week, the University of California announced a possible boycott of Nature Publishing Group by faculty and researchers. UC felt it had to act after NPG proposed a 400% increase in the cost of UC's site license subscription. The proposal calls for faculty to stop submitting articles to NPG journals, stop reviewing articles, resign from editorial boards, etc., in addition to canceling subscriptions to NPG journals.
"fight club soap" from Bethany Nowviskie at nowviskie.org – a great article by a humanities scholar linking the recent UC/NPG issue, the serials crisis, the decline in monograph purchases by libraries, and the folding of university presses.
Is this the start of something? I emailed the Chronicle of Higher Education article to the faculty listserv at my institution, and one person actually responded saying 'thank you' for sending along the article. The library is just now formulating plans for an Open Access Week event on campus – could this raise faculty awareness of some of these issues?
Is this what was needed to bring the problems with the scholarly research and publication economy to light? Or will all of this be forgotten by the time the fall semester starts?
A new open access journal, Communications in Information Literacy, recently published an article about assessing library instruction for first year students. The paper caught my eye because I’m working on some similar things here at Geneseo.
The study sought to determine whether students' information literacy skills and confidence with research improved more with a greater number of librarian-led information literacy sessions. The author used a pre-test and a post-test to examine students' attitudes and stated behaviors. She used Likert-style questions to assess students' previous use of information sources and their confidence with various information-related tasks. One group of students received the typical one-shot information literacy session in a first year writing and critical thinking class. Another group received two or three information literacy sessions over the course of the semester.
The author is very clear about outlining the challenges we all face in trying to assess information literacy instruction. Most notably, it is almost impossible to control for the wide variety of variables that have an impact on student information literacy skills:
Prior information literacy instruction in high school or other venues
Prior practice doing scholarly research
Student intelligence and creativity
Opportunity to practice skills learned in an information literacy session (and differences in the assignment requirements)
Differences in scholarship between various disciplines
Some information related to the factors listed above is relatively easy to obtain (although perhaps not so easy to quantify). Course faculty can be a source of information about assignment requirements, and their assignments determine how much practice students get with information literacy and research skills.
On the other hand, getting information about prior instruction and practice normally relies on students' self-reporting, which is not always accurate.
In addition to the Likert-style attitudinal questions, the author analyzed student bibliographies. She looked at the different types of sources used and whether they were available through the library or through other sources.
The latter question is challenging. Typically, a student on the campus network doesn't need to go through a library database to access the full text of articles. As a result, students could easily have used one of many search engines without realizing they were using library resources. On the other hand, using library databases to find articles that were then requested through Interlibrary Loan would not count as using library sources. We emphasize ILL at my institution, however, so perhaps it isn't used as much at other institutions.
All of this raises the question – are the information literacy sessions we teach an effective way of teaching students research skills?
The author of this paper concludes that there is some positive benefit to the increased number of information literacy sessions, although the data seem a bit more mixed to me.
I wish that the author had actually tested students' research skills. While skills may be much more difficult to evaluate than attitudes, student confidence does not necessarily correlate with student skill.
One of the challenges of being a small library is that we cannot collect or keep everything. We don't have the space to keep things "for historical purposes." We would like to keep acquiring materials, and that means we have to find space somewhere – by weeding materials from our collection (a.k.a. recycling books and journals).
Outside factors can make these space pressures more acute – in the summer, our library will be kicked out of one of our on-campus storage locations (the building will eventually be demolished). We will be simply moving some of the materials to another location, but other materials will be withdrawn from the collection to make room.
I have been working on several projects to make this possible.
First, I have been looking at our print indexes to see what can be withdrawn. Sometimes this is an easy decision: We can withdraw the print versions of Chemical Abstracts and the Bibliography of North American Geology because we have subscriptions to their electronic versions, and those subscriptions won’t be going away any time soon. I know that some librarians will say “but those print indexes are valuable learning and research tools – it’s easier to use the electronic version when you know how to use the print.” To be honest, I’m not sure that I agree with this statement, especially since the electronic tools offer so many more options for finding information. In any case, we simply can’t afford to hang on to them.
The next items to go are low-use print journal volumes that we have stable electronic access to. This is a bit more complicated, because what does "stable" access mean? Publishers who have made certain journal volumes open access now could always take away that access in the future. What happens when we can no longer subscribe to online access for a journal? These decisions were made on an individual basis.
At the same time, I am looking at our collection of USGS documents with our government documents librarian. Our collection is a bit odd. Much of it was never entered into our OPAC, so we don't have a complete sense of what we have. Much of it is now available online, but access is a bit dodgy for students used to clicking on the OpenURL resolver button in GeoRef (which doesn't work as we'd like for these documents). I'll be meeting with our geology faculty in the next couple of weeks to develop a plan for these documents – I suspect we will withdraw items that are available online.
And finally we have a large collection of books in the storage location that is being closed. These books were moved out of our main library 5 years ago, based on their previous usage. At the time, these books hadn’t been used (checked out or taken off the shelf) in 10 years. Students could still check them out by requesting them from storage. If they were requested they were put back in our main library. So right now, the books in storage haven’t been used in 15 years. Most of them will be withdrawn. I plan to have a quick look through them, however, since they were moved before I arrived here.
Of course, one of the trickiest parts of this is communicating this with the faculty. The most visible part of all this is the big recycling dumpster into which all of these volumes are thrown. It isn’t pretty. And you don’t want that dumpster to be the first clue to faculty about what is going on. So I have been trying to communicate with faculty about what we are doing. Sometimes I am asking for advice, sometimes I am simply informing them of our decisions. And I haven’t always done this in the best way possible. For example, I didn’t give the faculty a lot of notice about the Spring Break withdrawal of indexes. At this point the only thing I can do is make a plan to communicate with faculty about the next phases.
So I will send emails and request face-to-face meetings with our department representatives over the next couple of months in an effort to be open with the faculty about our decisions. Hopefully they will still see me as an advocate for science resources in the library.
Publishing journal articles online opens up a wide variety of options: hyperlinking references, including video and audio, archiving data along with the article, etc. (You can see some ideas about future scientific articles from Elsevier and Cell here). Most of these options are not normally exercised, and most users still view journal articles as online PDFs, which they then either save or print.
Sometimes these PDFs include an often-annoying page at the front or back restating copyright information or indicating that the material was downloaded through a particular institution's subscription.
Just today, I downloaded an article from an August issue of Science and was pleasantly surprised that this ‘cover page’ actually included some useful information. In addition to providing the normal article metadata, the links provided may actually be useful, at least to those with a subscription.
I especially noted the first item in the list of links informing readers that there had been a correction (in this case a relatively minor correction to a figure), and links to articles cited by this paper, including those articles available for free.
I wondered if a similar method was used when a paper was retracted. A brief search turned up the PDF of a retracted paper published in 2006 and retracted in 2007. Across the first page of the article in red letters was printed:
At the end of the PDF of the 2006 article was the text of the “Editorial Expression of Concern” published 7 months later, and the official retraction of the paper published 9 months after that.
So here, in one PDF document, we have the history of this paper.
This is vital for the undergraduate students I serve. Without this, a student would have no idea that an article had been retracted for any reason. This is just one more tool to help novice scientists get into the world of their scientific disciplines.
Like many academic libraries, our library budget has recently been cut. Last year, we dealt with the cuts by severely cutting our book budget and our student employee budget. This year, the additional budget cuts made some journal cancellations necessary. Most of our cuts involve canceling individual print or online subscriptions if we already had access via an aggregator (like ProQuest or EBSCO). We have (thankfully) been able to almost completely make up our budget gap without losing access to any content.
As a result of examining our journal subscriptions, the opportunity arose to subscribe to the full text geology resource GeoScienceWorld. If we canceled individual subscriptions to journals that were available in GeoScienceWorld, we could almost make up the difference. We would need to cancel a couple of other journals in order to make this feasible.
I thought that the increased content and ease of use provided by GeoScienceWorld made this a good move, and I took the question to the Geosciences department.
I created a list of the individual journal subscriptions we would need to cancel (since those titles would be available in GeoScienceWorld), a list of the journals we would gain access to, and a list of suggested cancellations we would have to make in order to make up the difference. In creating the list of suggested cancellations, I worked hard to minimize the loss of content.
In a meeting with the Geosciences library representative and the department chair, I found them very receptive to my plan; they even suggested some cancellations that I had assumed would be 'off limits'.
Overall, they understood the budget pressures – their departmental budget had also been cut – and they appreciated the fact that this decision was being left (mostly) in their hands. The department chair was going to bring the information back to the department for a final decision, making sure that all the faculty were in the loop.
I am pleased with the communication between the library and the faculty on this issue, and I will try to use this example as a model in future decisions regarding departmental resources.
On a weekly basis, a new article or editorial comes out discussing the shifting paradigm of how scientists communicate with one another. According to many, the journal article – the mainstay of scientific communication – is about to undergo a major metamorphosis as blogs and new journal concepts affect how science is done. A recent report from the Science Online London 2009 conference exemplifies this.
I am very excited about these changes, and I spend some of my time checking out real-time science blogs like Useful Chemistry, participating in online science networks like Nature Network, and exploring what PLoS ONE has to offer.
But how relevant are all of these new changes to the average undergraduate? Do they need to know about them? If they don’t need to know now, will they in the near future?
Most of the writing assignments I’m seeing are still asking students to find traditional scholarly articles as the only sources for their papers. Most of the faculty at my small undergraduate institution are still very traditional with regard to scholarly communication. A (very) few faculty still have to be convinced that an online journal is acceptable, and I wrote an email a few months ago explaining that PLOS Medicine is a highly regarded journal.
Until a consensus develops around what is scholarly and what isn't in the online world, how are undergraduate students (who still need help telling a review article apart from a piece of original research) supposed to navigate these ongoing changes?
In the short term, I don’t think that undergraduates need to know a lot about these developments, beyond their own personal interest in science blogs or online science news. For the time being, a science student can successfully navigate his or her undergraduate education without an awareness of the scientific blogosphere or the concept of open science.
As much as I would love to share my excitement about all of these fascinating changes, I don't think students need to know about them. At the moment, I teach students about the basic differences between review articles, primary research articles, and news articles.
In the future I will probably talk about blogs and social networks and how to access primary data sets – I’m looking forward to it.
I recently read a very interesting article in PLoS ONE examining various measures of the scientific importance of particular journals:
Bollen, J., Van de Sompel, H., Hagberg, A., & Chute, R. (2009). A Principal Component Analysis of 39 Scientific Impact Measures PLoS ONE, 4 (6) DOI: 10.1371/journal.pone.0006022
The article isn’t breaking new ground in its criticism of the impact factor, calculated by Thomson Scientific. However, the statistical analysis comparing multiple measures of importance sheds new light on the relationship between the various measures.
The authors analyzed 39 different impact measures that fall into two main groups: those that look at citation counts and those that look at online usage data (page views and downloads). A few additional measures that take account of online social networks were also included.
In general, the usage measurements cluster more closely together than the citation measurements do – the usage measures appear to be capturing approximately the same thing.
As a result of this analysis, they were able to differentiate measures that looked at immediate (“rapid”) use vs. longer term use (“delayed”), and to distinguish measures that look at how popular a resource is vs. how prestigious a resource is.
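The clustering described above can be illustrated with a toy version of the analysis. This is a minimal sketch using synthetic data, not the authors' actual 39 measures: I invent two latent factors standing in for "usage-type" and "citation-type" signals, build a few noisy measures from each, and run a PCA (via SVD of the standardized data). Measures built from the same latent factor load together on the leading components.

```python
import numpy as np

rng = np.random.default_rng(0)
n_journals = 200

# Two synthetic latent factors stand in for usage-type and
# citation-type signals (purely illustrative, not real data).
usage_factor = rng.normal(size=n_journals)
citation_factor = rng.normal(size=n_journals)

# Four hypothetical measures, each a noisy copy of one factor.
measures = np.column_stack([
    usage_factor + 0.1 * rng.normal(size=n_journals),     # e.g. downloads
    usage_factor + 0.1 * rng.normal(size=n_journals),     # e.g. page views
    citation_factor + 0.1 * rng.normal(size=n_journals),  # e.g. citation count
    citation_factor + 0.1 * rng.normal(size=n_journals),  # e.g. impact-factor-like
])

# Standardize each measure, then PCA via SVD.
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)

# Loadings of each measure on the first two principal components:
# measures derived from the same latent factor cluster together.
loadings = vt[:2].T
print(loadings)
```

With only two underlying factors, the first two components capture nearly all of the variance, and the two "usage" measures sit close together in loading space, well separated from the two "citation" measures – a cartoon of the separation the authors found in the real data.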
All of this leads back to the problem posed by the authors: we don't have an accepted definition of what "impact" really is.
Publications, institutions, and tenure committees all have different needs and requirements. For example, the faculty at one large research institution may be more concerned about prestige, while another may need to market their programs and examine their popularity. I think this analysis shows that folks can and should be a bit more choosy when selecting the measure they use to judge their competitors, their research, and their colleagues.