Philosophical Transactions is typically regarded as the first scientific journal and has been in continuous publication since it started in 1665. (A French journal, the Journal des sçavans, started publication three months before the Philosophical Transactions, but since it appealed to a wider audience and included a larger percentage of book reviews, many do not consider it the first true scientific journal.)
We’ve had access to this archive for a while now via JSTOR, and I love having the ability to see the very beginnings of the scientific journal article.
What intrigued me when I started digging into their now-open archive was the delightful juxtaposition of the 1665 publication date and the modern DOI.
Since these historical documents are available online, they are digital objects, and assigning DOIs makes a lot of sense. It also makes each individual article much easier to find.
Tracking down a citation you already have should be a relatively simple task.
A colleague of mine asked for help the other day tracking down a citation. A variety of circumstances made it anything but straightforward and served to remind me about some of the confusing parts of the scholarly communication system (and that I really love my job).
A student had approached the reference desk looking for a citation to this article:
Tan, D. X.; Chen, L. D.; Poeggeler, B.; Manchester, L. C.; Reiter, R. J. (1993). “Melatonin: a potent, endogenous hydroxyl radical scavenger”. Endocrine J 1: 57–60.
The student had found the citation via the Wikipedia entry for Melatonin. My colleague started out with the usual process – look up the journal, find the right volume and go from there. Except when you look up Endocrine Journal, you find that the volume number doesn’t match the year, nor are there any articles with a similar title in the publication. Author searches in the same journal also yield nothing.
Since the citation came from Wikipedia, it seemed probable that there was an error. So she did a search on Google and Google Scholar to try to find a correct citation. Neither search turned up the article, but Google Scholar indicated that the article had been cited over 1000 times! The student found another article by some of the same authors on the topic and was content, but my colleague still wanted the answer. With other students waiting for reference help, she sent the question along to me.
I was checking my email after my kids went to bed and thought I’d poke around a little to see what I could find. I re-did the searches my colleague had done so that I understood the problem. Theoretically, the article had to exist, since it had been cited so many times. So why couldn’t we find it? I tried Google Scholar, PubMed, and Scopus and found nothing (we don’t have Web of Science here). I searched for additional publications by the same authors, but I still didn’t find anything close to this one.
So I started looking for similarly named publications. The journal Endocrine Journal is published by the Japan Endocrine Society and the years don’t match up, so perhaps the abbreviation refers to something different? I located a journal called simply Endocrine (try finding that one in a Google search!) published by Springer. This started to look promising because the first volume of Endocrine was published in 1993, just what we wanted. But this volume isn’t available on the publisher’s website, so I couldn’t confirm my suspicions.
If Endocrine is the journal we want, why can’t I find it indexed in a database? I checked indexing information. PubMed only started indexing it in 1997. Scopus started indexing it in 1993, but only with the fifth issue, and we need issue 1. And Google Scholar won’t have it (other than the citation) because it isn’t on the Springer website or in PubMed.
I started to think that the citation really referred to an article in Endocrine, not Endocrine Journal. But Scopus showed over 1000 citations listing Endocrine Journal. It seemed unlikely that so many people would make the same error.
I stayed up past my bedtime having fun tracking this down. I emailed my thoughts to my colleague and I wondered if perhaps Web of Science indexed this item from issue 1.
The next day, we asked a colleague at another institution to do a quick search for us in Web of Science. No hits on the article title. Perhaps Web of Science didn’t index it from issue 1 either, or perhaps I’m just wrong (it’s been known to happen).
Then I checked Ulrich’s periodicals directory. We have it in print here, and the brief entry revealed the missing piece of our puzzle.
The entry in Ulrich's clearly indicated this journal's former title, a fact that is missing from the publisher's website.
From 1993 to 1994 there were two Endocrine Journals!
For a brief period of time (<2 years), Endocrine called itself Endocrine Journal. Perhaps they discovered the Japan Endocrine Society’s Endocrine Journal as the internet was making international collaboration easier.
Since I found the original ISSN (0969-711X), I submitted an ILL request to confirm my suspicions. Sure enough, the article masthead reads Endocrine Journal, but with Macmillan Press as the publisher, not Springer. The early issues available on the Springer website list Stockton Press as the publisher in 1995; the journal seems to have changed publishers several times.
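As an aside, an ISSN like 0969-711X carries its own built-in error check: the final character is a check digit computed from the first seven. A minimal sketch of the standard ISSN check-digit calculation:

```python
def issn_check_digit(first_seven: str) -> str:
    """Compute the ISSN check digit from the first seven digits.

    Each digit is weighted 8 down to 2 and summed; the check digit is
    whatever brings the total to a multiple of 11 ('X' stands for 10).
    """
    total = sum(int(d) * w for d, w in zip(first_seven, range(8, 1, -1)))
    remainder = total % 11
    if remainder == 0:
        return "0"
    check = 11 - remainder
    return "X" if check == 10 else str(check)


def is_valid_issn(issn: str) -> bool:
    """Validate a full ISSN, with or without the hyphen."""
    digits = issn.replace("-", "")
    return issn_check_digit(digits[:7]) == digits[7].upper()


# The ISSN from the ILL request checks out:
print(is_valid_issn("0969-711X"))  # True
```

A quick validation like this can catch transcription errors in a citation before you send off an interlibrary loan request.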
Screen shot of the PDF file I received via ILL. Note the publisher at the top.
What’s the moral of this story?
Journals really need to select unique names. (Do new journals think about Google-ability of their names?)
I picked the right profession because I had fun chasing this down.
Given my difficulty tracking this down, I have to ask: How many of the 1000 folks that cited this article actually tracked it down? I bet there are some who never laid eyes on it.
More importantly, it can be very easy for valuable information to disappear entirely. We live in an era of information overload. Yes, people have been saying the same thing since the invention of the printing press, but these days it isn’t a matter of finding any information, it is a matter of sorting to find the right information. And even today, an item published just before the explosion of online scholarly information could almost disappear. Although it may seem like it, not everything is available in Google.
I recently posted an entry on my library’s blog about some iPhone apps that might be useful to undergraduates for research and collaboration. I thought some readers here might find the same information useful.
Check out the following free apps to help you search the literature, cite your sources, and organize your work.
iPhone apps for research and collaboration
Ebsco Databases – Ebsco provides access to a large number of databases via one app (ERIC, Georef, American History and Life, MLA International Bibliography, Business Source Complete, Academic Search Complete and lots of others). Because access to these databases is paid for by the library (with your tuition dollars), you need to log in to Academic Search Complete via your library’s website first. At the bottom of the screen you’ll click on a link that will send an email with an activation code. After downloading the app, open your email on your phone and click on the link. You will then have 9 months of access. I’ve found this process to be pretty simple and easy – no need to log in every time. The app will connect you to full text articles within the Ebsco databases, and even Geneseo’s “Get it” service (our version of OpenURL) for articles found elsewhere.
SciVerse Scopus Alerts – A search app for the interdisciplinary database Scopus. I’ve reviewed this app before, and there hasn’t been a major update since then. This app can do keyword searching, citation tracking, and alerts for the science and social science literature. Scopus is an outstanding database, but the app has some issues. The biggest problem is getting it to work. You need to remember your Scopus username and password (not your Geneseo username), and even then there can be trouble. While the tech support is responsive, it just isn’t as easy to get started as the Ebsco app above.
Evernote – I recently started using this piece of software on my computer for note taking during meetings and lectures, lesson planning and writing. I am in love with its simplicity and universal usefulness. Take class notes on your computer, then download the iPhone app to access them anywhere. Record voice notes on your phone and automatically sync them to your laptop. Take pictures with your phone and insert them into the notes you’ve already started, or start a new note. The iPhone app syncs with the desktop application so that you never have to guess where a certain piece of information is. Share notes with others via shared notebooks or simple weblinks. I love this app.
Dropbox – Along with the Dropbox website, this tool allows you to easily share files among friends (with shared folders), or between your computer and phone.
EasyBib – An app from the popular website. This app allows you to scan the barcode of a book and create a formatted citation (which you will, of course, check against the style manuals for accuracy).
Merriam Webster Dictionary – There are lots of dictionary apps out there. This one is free, and has a nifty voice search function.
Mendeley – This app works with Mendeley Desktop and the Mendeley website. It allows you to store and organize your PDF journal articles and book chapters. It’s like iTunes for journal articles: Mendeley will organize your folders for you and you can create folders (playlists) of articles. You can share those folders with others to help you collaborate on group projects. The desktop version integrates with Microsoft Word to help you cite your sources. This mobile app allows you to access the journal PDFs you have synced to the web, and lets you search your personal library. This is one of a few applications that is always open on my laptop, and I love the ability to quickly look things up on my phone when I am away from my computer.
Since I don’t have an Android phone, I can’t comment on the availability or usability of these apps on that platform. Perhaps in another post.
Along with every other department on campus, libraries are under increasing pressure to evaluate their services – everything from student learning outcomes to expenditures.
Are your reference statistics a comedy or a tragedy? Image courtesy of Flickr user aepoc
Assessing the value of libraries in these areas requires the collection of lots of information. Data of all kinds needs to be collected, analyzed and shared. So what data do we collect, and where do we store it?
We have lots of silos for relevant information here in my library, and none of us are convinced that we are doing things in the best way possible. Our collection of statistics related to reference and research help services provides one example.
The most obvious place where this happens is the reference desk. To keep statistics about what happens here we use LibStats to record:
the question itself
the format (phone, walk-up, IM)
the patron (student, community member, faculty member)
how long it took to answer
But our research help doesn’t end at the reference desk. One of the big services we provide is research consultations by appointment. Students (and faculty or community members) can request an appointment and their request will be routed to the most appropriate librarian. (No one wants me answering in-depth research questions about primary sources in 17th century European history, for example.)
These requests come via an online form that dumps information into a home-grown MS Access database. For this kind of appointment-based research help we collect the same information recorded for reference desk questions, but also information about the student and the course the project is related to.
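Conceptually, both the desk statistics and the consultation requests boil down to one record type with a few extra optional fields. A minimal sketch of such a record (the field names here are hypothetical illustrations, not the actual LibStats or Access schema):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResearchHelpRecord:
    # Fields recorded for every reference desk transaction
    question: str
    fmt: str              # phone, walk-up, IM, appointment, email
    patron_type: str      # student, community member, faculty member
    minutes_to_answer: int
    # Extra fields collected only for appointment-based consultations
    course: Optional[str] = None
    librarian: Optional[str] = None


log: list[ResearchHelpRecord] = []
log.append(ResearchHelpRecord(
    question="Primary sources on 17th-century European history",
    fmt="appointment",
    patron_type="student",
    minutes_to_answer=45,
    course="HIST 302",
    librarian="subject specialist",
))
```

Keeping every channel (desk, appointment, email) in one shared structure like this is what makes it possible to roll the silos up into a single picture later.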
But our research help comes in other forms, too. We have an email-based ask a librarian service, and we all get email questions directly from students and faculty. At this point we aren’t very good at recording this type of information. What system should we use?
We also aren’t very good at recording questions that come directly to us from faculty, either via email, phone or in person.
And I haven’t even started to discuss the challenges of assessing the student learning outcomes associated with research and reference help services.
As a result of all this, it is difficult to get one complete picture of our involvement in research across campus. It’s something we are currently working to resolve.
And the biggest question that will influence how we do this is
“What do we want to do with this information?”
Change our services? Change our staffing levels? Merely collecting the data won’t be of use to anyone. The answer to these questions will influence the type of data that we collect and the tools we use to collect it.
And once we figure out all that, then we just need to remember to record everything.
One of the parlor tricks I occasionally do in an information literacy class is to guess the name of a researcher’s PhD adviser, and sometimes their postdoc adviser, simply by looking at a list of their publications. This is most impressive when the researcher in question is the faculty member I’m working with, who can confirm or deny my guess.
Students are usually impressed, but it isn’t difficult: you just need to know a little something about the meaning behind the order of author names.
Scientific publications are rarely authored by just one person. More often, they have 3-6 authors, and sometimes many more, depending on the field. Publications in high energy physics and genetics can sometimes have hundreds of authors: the record (as far as I can tell) is an article related to the installation of the particle accelerator at CERN that lists the group as a lead author and almost 3,000 co-authors.
My colleagues in the humanities sometimes have trouble understanding how so many people could be the author of a paper – they equate authorship with the actual typing and writing of words. But in the sciences, the words aren’t the primary result – it’s the data, discoveries and conclusions that are important. As a result, scientific publications encourage contributors to list as authors anyone who made a significant contribution to the work.
The definition of “significant contribution” can vary by field, however, and it isn’t unheard of to see authors who made only a nominal contribution. In some places it was customary to add the department chair or lab PI as an author, even if he or she knew nothing about the work (see this 2006 article in Nature). Some journals are attempting to get a better handle on this by asking contributors for a list of credits specifying who did what (see this example). And the medical community has outlined specific criteria for inclusion as an author.
Because of the number of authors, some thought has to go into how they will be ordered on the publication. The first author is typically the person who contributed the most to the publication, including carrying out the research and writing up the report. After that, it can get a bit tricky.
In order to combat the trickiness, various disciplines have evolved strategies to keep the peace. In some disciplines, additional authors are listed alphabetically. In others, authorship goes in order of who made the biggest contribution. Sometimes, the person who contributed the most (after the lead author) will go in second place, sometimes in last place.
I sometimes discuss author order in upper level classes. If a researcher understands how this works, their ability to search for additional relevant publications by author goes up.
Readers of this blog may be interested in a guest post I wrote for the Association of College and Research Libraries blog, ACRLog.
Last week I taught an information literacy class to a group of senior Chemistry students. We didn’t talk about databases or indexes, we talked about numbers. We talked about impact factors and h-indexes and alternative metrics, and the students loved it. Librarians have used these metrics for years in collection development, and have looked them up to help faculty with tenure and promotion packets. But many librarians don’t know where the numbers come from, or what some of the criticisms are.
From the ACRL? Or from the disciplinary organizations?
The ACRL Information Literacy standards have often frustrated me. I struggle to find their usefulness to my day-to-day work, since the content of most of my information literacy sessions comes from conversations with the professor of the class, and are geared directly to student assignments. As such, use of the standards usually involves fitting what I’m already doing back into the structure of the standards. And because the standards are meant to apply to all disciplines, they suffer from being both too vague and too specific at the same time.
I also don’t find them very useful when it comes to convincing faculty members that their students need to learn information literacy skills.
On the other hand, the information literacy standards that come from disciplinary organizations like the ACS and the APA might actually be useful.
First, the faculty members might actually care about them. Let’s be honest, when was the last time a faculty member was concerned about their students meeting the standards set out by the ACRL? They are busy enough trying to meet their own standards and goals.
Secondly, because the disciplinary standards have been developed by faculty in the disciplines, they are more likely to align with the skills needed in those particular disciplines. They are more likely to provide practical guidance about what to teach students, how papers and projects can be geared to meet the standards, and how this can be assessed.
Oh, and the disciplinary standards are typically shorter.
Am I abandoning the ACRL standards completely? Probably not. But I would encourage librarians to make sure they are aware of any education related standards and outcomes set forth by disciplines they work with. It might be useful.