
Open access books – measured in a context

Open access book platforms have existed for over a decade. Each of those platforms shares usage data, so if you are the author of an open access book, you will find that it has been downloaded a certain number of times. But how should you interpret that number? Unfortunately, the answer is not straightforward. Usage is influenced by the language of the title and its subject, but also by the platform: not all platforms reach the same audiences. Furthermore, there may be seasonal differences. For instance, usage of the OAPEN Library is lower in the months of June to August, compared to September to November.

So, it would be helpful to have some clarity. A possible solution is a new metric: the Transparent Open Access Normalized Index (TOANI) score. It is designed to give a simple answer to the question of how well an individual open access book or chapter is performing. The transparency comes from clear rules and from making all of the underlying data visible. The data is normalized using a common scale for the complete collection of an open access book platform, and – to keep the level of complexity as low as possible – the score is based on a simple classification: usage is either average, below average or above average.

How does it work? As a proof of concept, we analysed the usage data of over 18,000 books in the OAPEN Library. Each book was assigned one high-level subject, and its language was categorized as English, German or other languages. Each book was placed in a group combining one subject and one language. Within those groups, we examined the usage data and determined whether a book's downloads were average, above average or below average.
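As an illustration, the following Python sketch shows the gist of this grouping: books are grouped by subject and language, and each book is classified against its group's average. The column names, the example data and the ±25% "average" band are assumptions made for illustration only; the article defines the actual thresholds.

```python
# A minimal sketch of the TOANI grouping logic, assuming a pandas DataFrame
# with hypothetical columns 'subject', 'language' and 'downloads'.
# The +/-25% band around the group mean is an invented placeholder for
# whatever threshold the article actually defines.
import pandas as pd

books = pd.DataFrame({
    "subject":   ["Humanities", "Humanities", "Language", "Language"],
    "language":  ["German", "English", "German", "Other"],
    "downloads": [300, 652, 251, 385],
})

band = 0.25  # hypothetical tolerance for "average"
group_mean = books.groupby(["subject", "language"])["downloads"].transform("mean")

books["toani"] = "average"
books.loc[books["downloads"] < group_mean * (1 - band), "toani"] = "below average"
books.loc[books["downloads"] > group_mean * (1 + band), "toani"] = "above average"
print(books)
```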

Between groups, there are large differences. For instance, a German-language book on Humanities with 300 downloads is doing better than average, while an English-language book on Humanities would need at least 652 downloads to reach the same level. Another example is the difference between titles on Language in German versus other languages: German-language books downloaded more than 250 times score better than average, while for books in other languages the bar is much higher, at 385 downloads.

In this way, we can see how well a book is performing compared to similar titles. In other words: when we consider the context of a book, we can actually say whether its usage is better than expected.

Read more in the newly published article by Ronald Snijder, “Measured in a context: making sense of open access book data,” Insights, 2023, 36: 20, 1–10; DOI: https://doi.org/10.1629/uksg.627

Books in a bubble

Nowadays, we use "bubbles" to refer to online spaces where no information from outside is allowed in. But in this instance, the opposite is true: the bubbles are a tool to help visualise how well one set of books is performing compared to other sets of books. The OAPEN Library is an open online platform, and recently we audited ourselves based on the POSI principles. However, apart from being an infrastructure, it is also a library.

When our collection passed the milestone of 20,000 titles, we felt it was time to assess it: how well does it perform? That is not a simple question to answer: assessments of libraries and their collections take place within a certain context. OAPEN is not a 'traditional' library with a mixed collection of physical and digital publications, and our collection criteria are perhaps a bit different: books should be peer reviewed and have an open license, but we welcome all languages and subjects. We are not linked to one 'parent organisation', but try to serve everybody.

Three types of stakeholders support the OAPEN Library: publishers, funders and libraries. Both publishers and funders contribute to the collection by making publications available; they will be interested in the dissemination of the books and chapters. For libraries, the composition of the collection will be paramount: how do the titles on offer fit the information needs of their patrons?

The evaluation of the OAPEN collection should consider both aspects. The dissemination of books and chapters is measured through the number of downloads, based on COUNTER R5 conformant data. The composition of the collection is measured along two axes: subject and language. Both dissemination and the content-related aspects are set against the number of publications. So, we have to take into account three dimensions: number of titles, number of downloads and average downloads per title. On top of that, we need to look at the differences between languages and subjects. All in all, a complex mix.

Our solution was to use three-dimensional pictures: the bubbles.

[Figure: Social sciences in the OAPEN Library collection, with usage depicted as three-dimensional bubbles]

The bubbles display the composition of the collection and how its readers make use of it. Visualisations like this help to tell a complicated story in a simple way; a powerful instrument to guide the further development of the OAPEN Library.
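As a rough illustration of the idea (not the actual charts from the article), the Python sketch below plots collection figures as a bubble chart: each bubble's position encodes the number of titles and total downloads of a group, and its size encodes the average downloads per title. All numbers are invented for the example.

```python
# A minimal bubble-chart sketch over hypothetical collection data:
# x = number of titles, y = total downloads, bubble size = average
# downloads per title. The subject groups and figures are invented,
# not actual OAPEN numbers.
import matplotlib.pyplot as plt

groups = {
    # subject: (titles, total_downloads) -- illustrative values only
    "Social sciences": (5200, 2_600_000),
    "Humanities": (6800, 2_100_000),
    "Language": (1400, 520_000),
}

fig, ax = plt.subplots()
for label, (titles, downloads) in groups.items():
    avg = downloads / titles               # the third dimension
    ax.scatter(titles, downloads, s=avg,   # bubble area encodes avg downloads
               alpha=0.5, label=f"{label} ({avg:.0f}/title)")
ax.set_xlabel("Number of titles")
ax.set_ylabel("Total downloads")
ax.legend()
plt.show()
```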

More details can be found in this open access article:

Ronald Snijder, "Books in a Bubble: Assessing the OAPEN Library Collection," JLIS.it, 2023, 14(2): 75–92; DOI: https://doi.org/10.36253/jlis.it-498

It’s the system that counts

You would expect it to be simple: when somebody downloads a book from the OAPEN Library, the system adds one to the total number of downloads. After a while you put the numbers in a report and share it with the world. Sadly, the reality is more complex. All the books and chapters can be downloaded by everybody, including automated processes (bots). Also, if you think of downloads as a measure of impact, it becomes tempting to inflate the count by downloading a certain book again and again.

So, the raw download numbers need to be filtered to give a more realistic indication of the true impact. Many libraries use the COUNTER Code of Practice as a standard, which enables them to compare data from different sources. However, many online platforms measure their visitors using Google Analytics. The OAPEN Library uses both, but we only report the COUNTER data. Together with the migration to a new platform, a new version of COUNTER reporting (Release 5) was introduced – a good moment to compare Google Analytics (GA) with COUNTER Release 5 (R5).
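To give a flavour of what such filtering involves, here is a minimal Python sketch of two COUNTER-style rules: excluding known bots by user agent, and "double-click" filtering, where repeat requests for the same item by the same user within 30 seconds count only once. The field names and the bot list are simplified stand-ins; the actual COUNTER Code of Practice defines these rules in far more detail, and this is not OAPEN's implementation.

```python
# A sketch of two COUNTER R5-style filters over raw download logs:
# bot exclusion and 30-second double-click filtering. All names are
# hypothetical placeholders.
from datetime import datetime, timedelta

BOT_AGENTS = {"Googlebot", "bingbot"}          # stand-in for the full bot list
WINDOW = timedelta(seconds=30)

def counter_filter(events):
    """events: iterable of (timestamp, user_id, book_id, user_agent),
    sorted by timestamp. Yields the events that survive filtering."""
    last_seen = {}                             # (user_id, book_id) -> timestamp
    for ts, user, book, agent in events:
        if agent in BOT_AGENTS:
            continue                           # drop bot traffic
        key = (user, book)
        if key in last_seen and ts - last_seen[key] < WINDOW:
            last_seen[key] = ts
            continue                           # drop double-clicks
        last_seen[key] = ts
        yield ts, user, book, agent

raw = [
    (datetime(2021, 1, 1, 12, 0, 0), "u1", "book-42", "Mozilla/5.0"),
    (datetime(2021, 1, 1, 12, 0, 10), "u1", "book-42", "Mozilla/5.0"),  # filtered
    (datetime(2021, 1, 1, 12, 5, 0), "u2", "book-42", "Googlebot"),     # filtered
]
print(len(list(counter_filter(raw))))  # -> 1
```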

Comparing monthly download totals is simple: where GA reports over 1 million downloads per month, R5's stricter filtering results in around 400,000 downloads. But again, when we look at the details, the reality is more complex. For instance, comparing the number of downloads per country shows large differences for the USA, France, China and Russia, while the numbers for Australia, Canada and Austria are virtually the same. When we compare the usage data of each title, the differences are even harder to explain. You would expect GA and R5 to more or less agree about the order of books – which book was downloaded the most, which one comes after that, and so on – but that is very much not the case.
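One way to put a number on such rank disagreement (not something the article itself does) is a rank correlation such as Spearman's rho, sketched below with invented per-title figures:

```python
# Quantifying how much two systems disagree about the ordering of the
# same books. The download counts are invented for illustration.
from scipy.stats import spearmanr

ga_downloads = [980, 750, 430, 390, 120]   # same five books, two systems
r5_downloads = [310, 400, 260, 90, 150]

rho, p = spearmanr(ga_downloads, r5_downloads)
print(f"Spearman rho = {rho:.2f}")  # 1.0 = identical ranking, 0 = unrelated
```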

GA and R5 have each made their own choices about what is reported and what is not. One metric is not better than the other, but we should be open about the choices made. After all, open access book metrics are complicated, and we can only benefit from clarity.

More details about usage data and the two systems can be found in:

Ronald Snijder, "Open access book usage data – how close is COUNTER to the other kind?," Insights, 2021, 34: 9; DOI: https://doi.org/10.1629/uksg.539

You might also be interested in the OAeBU DataTrust Pilot or this OBP blog. Things get even more complex when you try to compare different platforms…