To many, the internationalisation of academic publishing may mean a strong focus on global issues, written exclusively in English. However, many academic books are written in languages other than English. We tend to link non-English publications to regional issues, which creates a tension between English as the ‘lingua franca’ enabling a global reach and local languages that provide a better cultural ‘fit’.
Now from theory to practice: if you give a global audience free access to nearly 20,000 books and chapters in several languages, spanning many subjects, will they all choose books in English?
In a newly published paper, we have systematically researched the preferences of readers originating from one hundred countries. By looking at the ten most downloaded books from each country, we can measure the focus on regional topics by counting the books written in languages other than English.
Books popular in multiple countries
The outcomes of this study do not fit a story in which English-language publications are the only, or even the main, channel of scholarly communication. There is clear demand for regionally focused titles, countering the narrative of English dominance in scholarly communication. Instead, this study supports the value of bibliodiversity.
Read the paper here:
Snijder, Ronald. 2022. “Big in Japan, Zimbabwe or Brazil – Global Reach and National Preferences for Open Access Books”. Insights 35: 11. DOI: http://doi.org/10.1629/uksg.580
On a regular basis, we look at the download data of the OAPEN Library and where it comes from. While examining the data from January to August 2021, we focused on the usage originating from libraries and academic institutions. Happily, we found that more than 1,100 academic institutions and libraries have used the OAPEN Library.
Of course, we do not actively track individual users. Instead we use a more general approach: we look at the website from which the download from the OAPEN Library originated. How does that work? For instance, when someone in the library of the University of Leipzig clicks on the download link of a book in the OAPEN Library, two things happen: first, the book is directly available on the computer that person is working on, and second, the OAPEN server notes the ‘return address’: https://katalog.ub.uni-leipzig.de/. We have no way of knowing who started the download; we just know the request originated from the Leipzig University Library. Furthermore, some organisations choose to suppress sending their ‘return address’, making them anonymous.
What helps us is that aggregators such as ExLibris, EBSCO or SerialSolutions use a specific return address. Examples are “west-sydney-primo.hosted.exlibrisgroup.com” – pointing to the library of the Western Sydney University – or “sfx.unibo.it” – coming from the library of the Università di Bologna. In this way, many academic libraries can also be identified from their web address, although some academic institutions only display their ‘general’ address.
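For readers who like to see the mechanics: the snippet below is a minimal sketch of this kind of referrer classification, not OAPEN's actual processing code. The domain patterns, labels and helper names are illustrative only.

```python
from urllib.parse import urlparse

# Illustrative mapping from referrer domain endings to known aggregators;
# these patterns are examples, not OAPEN's actual rule set.
AGGREGATOR_PATTERNS = {
    "exlibrisgroup.com": "ExLibris",
    "ebscohost.com": "EBSCO",
    "serialssolutions.com": "SerialSolutions",
}

def classify_referrer(referrer_url: str) -> str:
    """Return a rough label for the organisation behind a download request."""
    if not referrer_url:
        return "anonymous"  # the organisation suppresses its 'return address'
    host = urlparse(referrer_url).netloc.lower()
    for pattern, aggregator in AGGREGATOR_PATTERNS.items():
        if host.endswith(pattern):
            return f"aggregator: {aggregator} ({host})"
    return f"direct: {host}"

print(classify_referrer("https://katalog.ub.uni-leipzig.de/"))
# direct: katalog.ub.uni-leipzig.de
print(classify_referrer("https://west-sydney-primo.hosted.exlibrisgroup.com/"))
# aggregator: ExLibris (west-sydney-primo.hosted.exlibrisgroup.com)
```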
Academic libraries and institutions
As mentioned before, our analysis delivered over 1,100 – 1,121 to be exact – different addresses. The chart displays those addresses divided by type, and we see that many academic libraries do not just rely on aggregators such as ExLibris, but also give direct access to the OAPEN Library through their own catalogues. The metadata of the OAPEN Library is freely available under a CC0 license, and can be downloaded as a MARCXML file to ensure easy library integration.
Which libraries and institutions are the biggest users of the OAPEN Library according to this data? The most downloads come from MediaLibraryOnLine, the first Italian network of public, academic and scholastic libraries for digital lending; the Bodleian Library of the University of Oxford; and the Universidad Peruana de Ciencias Aplicadas.
We are happy to see that our collection is finding its way to libraries and academic institutions all over the world!
Web retailers such as Amazon.com are able to find just the right book for you. This is a great feature, but it comes at a cost: its recommendations work because it is storing information about you. The better it knows you, the better its recommendations.
At OAPEN, we do not track people. Instead, we used the full text of the open access books and chapters in our collection. In an experiment – based on over 10,000 titles – we took the complete text of a book, cut it up into blocks of three consecutive words (called trigrams) and filtered out all the common phrases. This leaves a small group of terms that are unique to that particular book. The next phase is finding other titles that share the same terms: the more terms they share, the more they are connected.
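To make the idea concrete, here is a minimal sketch of the approach, not the actual Words Algorithm implementation; the frequency threshold used to filter out common phrases is purely illustrative.

```python
from collections import Counter

def trigrams(text: str) -> set[str]:
    """Cut a text into blocks of three consecutive words."""
    words = text.lower().split()
    return {" ".join(words[i:i + 3]) for i in range(len(words) - 2)}

def distinctive_terms(book_text: str, all_books: list[str]) -> set[str]:
    """Keep only trigrams that are rare across the whole collection."""
    counts = Counter(t for text in all_books for t in trigrams(text))
    return {t for t in trigrams(book_text) if counts[t] <= 2}  # illustrative threshold

def shared_terms(book_a: str, book_b: str, all_books: list[str]) -> int:
    """The more distinctive trigrams two books share, the more related they are."""
    return len(distinctive_terms(book_a, all_books) & distinctive_terms(book_b, all_books))
```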
Using this algorithm helps to find books that are very similar: if you are interested in a certain book, you will probably want to download those titles as well. However, it can also find books that are a little less similar: you might use this to expand your research, or to create a collection of books. Surprisingly enough, the algorithm even works across languages – it can, for instance, find translations of a book.
Finding related titles in this way does not have to be confined to the OAPEN Library. The same method can be applied to other collections of open access books or even open access journal articles.
More information can be found in this article:
Snijder, R. (2021). Words Algorithm Collection—Finding closely related open access books using text mining techniques. LIBER Quarterly: The Journal of the Association of European Research Libraries, 31(1). https://liberquarterly.eu/article/view/10938
You would expect it to be simple: when somebody downloads a book from the OAPEN Library, the system adds one to the total number of downloads. After a while you put the numbers in a report, and share it with the world. Sadly, the reality is more complex. All the books and chapters can be downloaded by everybody, including automated processes (bots). Also, if you think of downloads as a measure of impact, it becomes tempting to inflate the numbers by downloading a certain book again and again.
So, the raw download numbers need to be filtered, in order to give a more realistic indication of the true impact. Many libraries use the COUNTER Code of Practice as standard, which enables them to compare the data from different sources. However, many online platforms measure their visitors using Google Analytics. The OAPEN Library uses both (but we only report the COUNTER data). Together with the migration to a new platform, a new version of the COUNTER reporting (Release 5) was introduced. A good moment to compare Google Analytics (GA) with COUNTER Release 5 (R5).
Comparing the monthly download totals is simple: where GA reports over 1 million downloads per month, R5's stricter filtering results in around 400,000 downloads. Again, when we look at the details, the reality is more complex. For instance, comparing the number of downloads per country shows large differences for the USA, France, China and Russia. In contrast, the numbers for Australia, Canada and Austria are virtually the same. When we compare the usage data of each title, the differences are even less simple to explain. You would expect that both GA and R5 more or less agree about the order of books: which book was downloaded the most, which one comes after that, and so on. But that is very much not the case.
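As an illustration of how such an ordering comparison could be done, the sketch below computes a Spearman rank correlation between two sets of per-title counts; the numbers are invented, not taken from the article.

```python
from scipy.stats import spearmanr

# Hypothetical per-title download counts (the same five books, two measurement systems).
ga_downloads = [1200, 950, 870, 400, 150]   # Google Analytics
r5_downloads = [300, 620, 180, 390, 140]    # COUNTER Release 5

rho, _ = spearmanr(ga_downloads, r5_downloads)
print(f"Rank correlation between GA and R5 orderings: {rho:.2f}")
# A value close to 1 would mean both systems rank the books the same way;
# in practice the orderings diverge considerably.
```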
GA and R5 have made their own choices on what is reported and what not. One metric is not better than the other, but we should be open about the choices made. After all, open access book metrics are complicated and we can only benefit from clarity.
More details about usage data and the two systems can be found in:
Ronald Snijder, “Open access book usage data – how close is COUNTER to the other kind?,” Insights 34 (1): 9. (2021), https://doi.org/10.1629/uksg.539. Submitted on 11 November 2020 and published by UKSG in association with Ubiquity Press on 28 April 2021
You might also be interested in the OAeBU DataTrust Pilot or this OBP blog. Things get even more complex when you try to compare different platforms…
You are probably familiar with USB cables, the type of standard cable that allows me to connect a printer and a phone to my laptop using the same port. However, the cable I use for my phone does not fit my printer. If I wanted to use a single cable for both, I would either have to buy a new printer or use some kind of connector.
This is similar to ONIX: an XML standard designed to exchange metadata about books between all the different actors in the book supply chain. Here, too, the standard comes in different sizes. For instance, an author might be coded as <b037>Snijder, Ronald</b037> or as <PersonNameInverted>Snijder, Ronald</PersonNameInverted>. Moreover, the current version of ONIX is 3, but publishers who have invested quite a lot in version 2.1 might be reluctant to give that up. These differences in standards and approaches can result in a complex situation for actors in the ebook supply chain; more details can be found in this draft report by the Book Industry Group.
So, at OAPEN we need to make sure that all those different flavours of ONIX fit our import system and therefore we built a connector. In fact, we built two: one that converts different types of ONIX XML to Excel, and one that converts Excel to ‘our’ ONIX standard.
Why two conversions? Because we need to do a manual check on the metadata we received: does it contain titles that are already part of the OAPEN Library collection? Is any metadata still missing? Does any metadata need to be amended? Once the metadata has been checked and updated, we convert the resulting Excel file into ONIX XML and import it into the OAPEN Library.
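As a rough idea of the first conversion step, the sketch below reads contributor names from ONIX XML regardless of whether the file uses reference tags or short tags. It is a simplified illustration – real ONIX files come with namespaces, releases and many more elements – and it is not the converter we actually run.

```python
import xml.etree.ElementTree as ET

# The same contributor element comes in a 'reference' and a 'short' spelling,
# as in the two examples above.
CONTRIBUTOR_TAGS = ("PersonNameInverted", "b037")

def contributors(onix_xml: str) -> list[str]:
    """Collect contributor names regardless of which tag flavour the file uses."""
    root = ET.fromstring(onix_xml)
    names = []
    for tag in CONTRIBUTOR_TAGS:
        names.extend(el.text for el in root.iter(tag) if el.text)
    return names

sample = "<Product><Contributor><b037>Snijder, Ronald</b037></Contributor></Product>"
print(contributors(sample))  # ['Snijder, Ronald']
```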
If you are interested in trying this out yourself, please mail me at r.snijder[@]oapen.org. But if you want to wait for a more user-friendly and better developed tool, you should head over to the COPIM project, where development is under way on Thoth, a tool to manage open access book metadata: https://copim.pubpub.org/pub/open-metadata-thoth/release/1.
Connecting different things – whether appliances or systems – requires work and is never easy. But we are trying to create a better fitting solution.
It all started with a preprint. Back in February 2020, I came across the article “Increasing Visibility of Open Access Materials in a Library Catalog: Case Study at a Large Academic Research Library”. In it, Jeff Edmunds and Ana Enriquez describe their work at Penn State University Libraries to highlight existing open access titles in the library catalogue and to add more. Simply put, they investigated ways to make it easier for library patrons to discover open access books, by making them more visible in the catalogue. Library catalogues ingest metadata based on the MARC standard, and the authors discussed recent developments regarding MARC and open access.
Their timing was perfect. The OAPEN Library was to be migrated to the DSpace 6 platform, and this was the right moment to revise our metadata offerings. Thus, I contacted Jeff Edmunds and asked how we could improve our MARC metadata feeds. This was the start of an involved conversation about the general quality of our metadata records and the presence of OA markers to make clear that the title described is open access.
Based on the recommendations, we updated our “MARC exports” in several ways. Firstly, each MARC record now contains both a link to directly download the book – or chapter – and a link to the landing page in the OAPEN Library. In this way, readers can first assess the title in the OAPEN Library or download it directly. Another improvement was to ‘move’ the license URL to MARC field 540, which helps libraries automatically process how the books may be used. Furthermore, to make sure that cataloguing software automatically flags the OAPEN Library content as open access, we added specific markup to the 506 and 856$7 fields.
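For illustration, in MarcEdit's mnemonic notation such open access markers might look roughly like this (the subfield values below are illustrative, not copied from an actual OAPEN record):

=506  0\$aOpen access$fUnrestricted online access$2star
=540  \\$aCreative Commons Attribution 4.0 International$uhttps://creativecommons.org/licenses/by/4.0/
=856  40$uhttps://library.oapen.org/handle/20.500.12657/25287$70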
The result
The result of all this? In August, Penn State University Libraries added close to 10,000 new open access records to their catalogue – based on the contents of the OAPEN Library. The import was based on this MARCXML feed, and converted to MARC21 using MarcEdit. To allow for the character limit of the identification field used by the cataloguing software, the HANDLE identifier used by OAPEN – for instance https://library.oapen.org/handle/20.500.12657/25287 – was shortened to “oapen25287”.
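The shortening itself is trivial; here is a sketch, assuming every OAPEN handle ends in a unique number:

```python
def shorten_handle(handle_url: str) -> str:
    """Turn an OAPEN handle URL into a short catalogue identifier, e.g. 'oapen25287'."""
    return "oapen" + handle_url.rstrip("/").rsplit("/", 1)[-1]

print(shorten_handle("https://library.oapen.org/handle/20.500.12657/25287"))  # oapen25287
```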
At Penn State University Libraries, this is how it looks in “The CAT”, searching on “oapen library”:
Penn State University Libraries are transitioning from their old ‘classic’ catalogue to a new layer built using Blacklight. The plan is to add an Open Access facet or search limit in the new catalogue, which will allow users to find all OA content as a group. In the updated catalogue, the results look like this:
When we look at a single book record, the open license is clearly visible. There are also multiple links – in this case a link to the landing page in the OAPEN Library, a link to directly download the PDF version, and a separate link that opens the EPUB version of this book:
The catalogue can be searched online, using this link.
Starting September, Penn State Libraries will update their “OAPEN collection” monthly.
Future wish list
While we are quite happy with the results, there are still some things that can be improved. To start with a technicality, the MARC Leader field is lacking a few characters at the end:
“=LDR 02817naaaa 00349uu” should be “=LDR 02817naaaa 00349uu 4500”. More important is the planned conversion of the Directory of Open Access Books (DOAB) to DSpace. The new DOAB platform will have the same export features as the OAPEN Library, including the MARC and MARC XML feeds.
Ideally, the descriptions in the OAPEN Library and DOAB would have name headings in an authorised form and WorldCat subject headings. These features will remain on our future wish list, at least for now.
Add the OAPEN Library collection to your catalogue?
Our cooperation with Penn State University Libraries was successful, especially because of the guidance we received regarding our MARC metadata output. We hope that other libraries will follow their example and add the OAPEN Library collection to their catalogue. If you have any questions, please contact me via r.snijder[@]oapen.org, or drop me a line on Twitter @ronaldsnijder.
The OAPEN Library has become a highly connected resource for full-text open access books and chapters, used by the Directory of Open Access Books (DOAB), libraries, data aggregators and search engines such as Google Scholar.
Consequently, the quality of the metadata is important to us and when it came to our attention that not all authors and editors were listed in the correct order, we started looking for a solution. This is not a trivial matter: at this moment the OAPEN Library contains over 12,000 books and chapters.
To complicate matters, the metadata describing these publications – collected over a decade – comes from a variety of sources. A manual check was not an option, so we searched for a resource that would enable us to verify the authors and editors listed.
We decided to use the freely accessible CrossRef API. OAPEN is a member of CrossRef – and has deposited hundreds of DOIs – but more importantly, the CrossRef metadata database lists over 1.4 million books.
First, we made a selection of over 2,200 books and chapters with more than one author or editor. Using the DOI connected to these publications, we retrieved the ORCID metadata. Then we checked the metadata found against our own records, resulting in a correction of over 1,000 title descriptions.
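As an indication of how such a check could work, the sketch below queries the public Crossref REST API for a DOI and compares the order of surnames with our own records. The helper names and the comparison logic are simplified illustrations, not the script we actually ran.

```python
import requests

def crossref_contributors(doi: str) -> list[str]:
    """Fetch the ordered contributor list ('author' field) for a DOI from the Crossref REST API."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    response.raise_for_status()
    message = response.json()["message"]
    return [f"{a.get('family', '')}, {a.get('given', '')}".strip(", ")
            for a in message.get("author", [])]

def same_order(our_names: list[str], crossref_names: list[str]) -> bool:
    """Very rough check: do the surnames appear in the same order in both lists?"""
    ours = [n.split(",")[0].strip().lower() for n in our_names]
    theirs = [n.split(",")[0].strip().lower() for n in crossref_names]
    return ours == theirs
```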
Every now and then we will perform this and other metadata checks, to make sure that everything remains… in order.
Since October 2015, the contents of the OAPEN Library have been indexed by Google Scholar, which was proudly announced by Frances Pinter. Today, Google Scholar lists over 36,000 books and chapters that can be found in the OAPEN Library. In the last year alone, over 2,800 titles were added. This is a huge success.
What have we done to optimise the OAPEN Library for Google Scholar? Starting in November 2019 we had several discussions with colleagues at Google Scholar, resulting in an updated specification for the metadata to be used in our new DSpace environment. Each landing page – which describes a book or a chapter – also contains machine-readable metadata, which is read by the Google Scholar crawler.
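Google Scholar's inclusion guidelines rely on machine-readable tags of this kind – for example Highwire Press-style citation_* meta tags. The values below are placeholders, not copied from an actual OAPEN landing page:

<meta name="citation_title" content="(book title)">
<meta name="citation_author" content="(author name)">
<meta name="citation_pdf_url" content="(direct link to the PDF)">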
It’s nice to see how this has worked out. For example, in the last 7 days (8 to 14 May) almost 17,500 people visited the OAPEN Library through Google Scholar. A result we are very happy with, and we plan to continue working with Google Scholar to optimise indexing.
Next, we will focus on optimizing Google Scholar setups for monographs versus edited volumes, which are handled differently by the indexing system.
New resource for books added to Think. Check. Submit.
Further to their announcement in October, the Steering Committee of Think. Check. Submit. is delighted to announce a new addition to its resources: a checklist for authors wishing to verify the reliability and trustworthiness of a book or monograph publisher.
Drawing on existing expertise from within the group and from experiences of their newest partner, OAPEN, the checklist for books offers sound advice along the lines of the recommendations already offered by the journal checklist.
The rest of the Think. Check. Submit. website has also been updated to make it more relevant for both books and journals.
Eelco Ferwerda, Director of OAPEN, said: “It is clear that the same issues confronted by authors looking to publish in journals, also confront authors seeking to publish a book or a chapter. What should an author look for when considering the submission of a manuscript of a book or chapter? It is very common for a journal to have its own homepage whereas, for chapter publishing, more careful scrutiny is needed of the publishing entity producing the books. I am sure that the addition of this new checklist will provide a much needed and welcome resource to authors publishing this way.”
Sofie Wennström of LIBER said: “Librarians are often asked for advice about trustworthy publishing outlets. This extension of the Think. Check. Submit. checklist includes solid recommendations for how to choose a publisher for books. It is a welcome addition to the resources a librarian can use when giving advice as it is evident that the book publishing market may offer as many, if not more, pitfalls as journal publishing. We are happy that Think. Check. Submit. offers an opportunity to avoid some of those obstacles.”
About Think. Check. Submit.
Think. Check. Submit. helps researchers identify trusted journals and publishers for their research. Through a range of tools and practical resources, this international, cross-sector initiative aims to educate researchers, promote integrity, and build trust in credible research and publications.
Think. Check. Submit. provides a checklist that guides researchers through the process of deciding which journals and now books are best for their research. The process is intended to go beyond individual journal decisions to help researchers build up their journal evaluation skills. The checklist is now available in nearly 40 languages.
Think. Check. Submit. is run, and funded, by a coalition from across scholarly communications in response to discussions about deceptive publishing. Details of the organizations contributing can be found at https://thinkchecksubmit.org/about/. The current Think. Check. Submit. committee can be found at http://thinkchecksubmit.org/faq/committee/