Barricading an open access website – reflections on the attack on DOAB

As many of you might have noticed, last week the Directory of Open Access Books (DOAB) website was unavailable for several days. Sadly, the reason for this was not a technical glitch as we first suspected, but an actual attack on DOAB.

During the weekend of 21 January, someone decided to flood the Domain Name System (DNS) servers of our registrar with requests for the DOAB record – a distributed denial-of-service (DDoS) attack.

A short explanation: when you type in www.doabooks.org, your browser looks up this address on a DNS server, which translates it to an IP address (which looks like, e.g., 123.123.123.123). However, when an attacker overloads the DNS server with these lookups, the translation no longer works. The result is that the website itself is working fine, but it can’t be reached.
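For the technically curious, the sketch below shows in Python what a browser does behind the scenes: it asks the configured DNS resolver for the IP address that belongs to a hostname. The snippet is purely illustrative and not part of our infrastructure; if the DNS servers are overwhelmed, this is the step that fails.

```python
# Minimal sketch of a DNS lookup: ask the configured resolver which IP address
# belongs to a hostname. When the DNS servers are overloaded, this step fails
# even though the web server itself is running fine.
import socket

hostname = "www.doabooks.org"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```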

Sadly, we were not immediately able to understand what was happening, as we had no previous experience with this kind of situation. Once the cause was clear, we could take measures. We moved our domain name registration to Cloudflare, a company that specialises in protection against this kind of attack. At the same time, we did the same for the OAPEN and OA Books Toolkit websites.

Thank you for your patience with us as we navigated these new circumstances and please accept our apologies for the inconvenience caused. We hope that our explanation was clear, but of course, please contact us if you have any questions or remarks.

Open access books – measured in a context

For over a decade, there have been open access book platforms. Each of those platforms shares usage data, and as the author of an open access book you would find that it has been downloaded a certain number of times. But how should you interpret that number? Unfortunately, the answer is not straightforward. The usage is influenced by the language of the title and its subject, but also by the platform: not all platforms reach the same audiences. Furthermore, there might be seasonal differences. For instance, usage of the OAPEN Library is lower in the months of June to August, compared to September to November.

So, it would be helpful to have some clarity. A possible solution is a new metric – the Transparent Open Access Normalized Index (TOANI) score. It is designed to provide a simple answer to the question of how well an individual open access book or chapter is performing. The transparency is based on clear rules and on making all of the data used visible. The data is normalized, using a common scale for the complete collection of an open access book platform, and – to keep the level of complexity as low as possible – the score is based on a simple metric: the usage is either average, below average or above average.

How does it work? As a proof of concept, we analysed the usage data of over 18,000 books in the OAPEN Library. Each book was assigned one high-level subject, and the language was categorized as either English, German or Other languages. Each book was placed in a group that combined one subject and one language. Within those groups, we looked at the usage data and determined whether a book had average, above-average or below-average downloads.
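As a rough illustration of this grouping step, the sketch below uses Python with pandas and invented figures. The column names, the example data and the ±25% “average” band are assumptions made for the example only; the exact TOANI rules are defined in the article cited below.

```python
# Sketch: group books by subject and language, then label each book's usage as
# below average, average or above average within its own group.
import numpy as np
import pandas as pd

books = pd.DataFrame({
    "title":     ["A", "B", "C", "D", "E", "F"],
    "subject":   ["Humanities", "Humanities", "Humanities",
                  "Language", "Language", "Language"],
    "language":  ["German", "English", "English", "German", "Other", "Other"],
    "downloads": [300, 700, 120, 260, 400, 90],   # invented figures
})

# Average downloads within each (subject, language) group
group_mean = books.groupby(["subject", "language"])["downloads"].transform("mean")

# "Average" band: within 25% of the group mean (an assumed threshold)
books["usage_band"] = np.select(
    [books["downloads"] < 0.75 * group_mean,
     books["downloads"] > 1.25 * group_mean],
    ["below average", "above average"],
    default="average",
)
print(books)
```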

Between groups, there are large differences: for instance, a German-language book on Humanities with 300 downloads is doing better than average, while an English-language book on Humanities would need at least 652 downloads to reach the same level. Another example is the difference between titles on Language in German versus other languages. Here, German-language books downloaded more than 250 times score better than average. For books in other languages the bar is much higher: 385.

In this way, we can see how well a book is performing, compared to similar titles. In other words: when we consider the context of a book, we can actually say if its usage is better than expected.

Read more in the newly published article by Ronald Snijder, “Measured in a context: making sense of open access book data,” Insights, 2023, 36: 20, 1–10; DOI: https://doi.org/10.1629/uksg.627

Books in a bubble

Nowadays, we refer to “bubbles” as online places where no information from outside is allowed in. But in this instance, the opposite is true: the bubbles are a tool to help visualise how well one set of books is performing compared to other sets of books. The OAPEN Library is an open online platform, and recently we audited ourselves against the POSI principles. However, apart from being an infrastructure, it is also a library.

When our collection passed the 20,000-title milestone, we felt it was time to assess our collection: how well does it perform? That is not a simple question to answer: assessments of libraries and their collections take place within a certain context. OAPEN is not a ‘traditional’ library with a mixed collection of physical and digital publications, and our collection criteria are perhaps a bit different: books should be peer reviewed and have an open license, but we welcome all languages and subjects. We are not linked to one ‘parent organisation’, but try to serve everybody.

Three types of stakeholders support the OAPEN Library: publishers, funders and libraries. Both publishers and funders contribute to the collection by making publications available. They will be interested in the dissemination of the books and chapters. For libraries, the composition of the collection will be paramount. How do the titles on offer fit within the information needs of their patrons?

The evaluation of the OAPEN collection should consider these two aspects. The dissemination of books and chapters is measured through the number of downloads – based on COUNTER R5 conformant data. The composition of the collection is measured along two axes: subject and language. Both dissemination and the content-related aspects are set against the number of publications. So, we have to take into account three dimensions: number of titles, number of downloads and average downloads per title. On top of that, we need to look at the differences between languages and subjects. All in all, a complex mix.

Our solution was to use three-dimensional pictures: the bubbles.
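A minimal sketch of how such a bubble view can be produced is shown below, using Python and matplotlib with invented figures; the real charts are based on the OAPEN usage data described above.

```python
# Sketch of a bubble view for one subject: each bubble is a language group,
# placed by number of titles (x) and total downloads (y), with the bubble size
# reflecting average downloads per title. All numbers are invented.
import matplotlib.pyplot as plt

languages = ["English", "German", "Other"]
titles = [5200, 1800, 900]                    # number of titles
downloads = [3_100_000, 750_000, 320_000]     # total downloads
avg_per_title = [d / t for d, t in zip(downloads, titles)]

plt.scatter(titles, downloads, s=[a / 2 for a in avg_per_title], alpha=0.5)
for x, y, label in zip(titles, downloads, languages):
    plt.annotate(label, (x, y))
plt.xlabel("Number of titles")
plt.ylabel("Total downloads")
plt.title("Illustrative bubble chart (invented data)")
plt.show()
```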

Figure: Social sciences in the OAPEN Library collection – usage depicted as three-dimensional bubbles

The bubbles display the composition of the collection and how its readers make use of it. Visualisations like this help to tell a complicated story in a simple way; a powerful instrument to guide the further development of the OAPEN Library.

More details can be found in this open access article:

Snijder, Ronald. ‘Books in a Bubble: Assessing the OAPEN Library Collection’. JLIS.it 14, no. 2 (15 May 2023): 75–92. https://doi.org/10.36253/jlis.it-498.

Big in Japan, Zimbabwe or Brazil – global reach and national preferences for open access books

To many, the internationalisation of academic publishing may mean a strong focus on global issues, written about in English only. However, many academic books are written in languages other than English. We tend to link non-English publications to regional issues, so there is a tension between English as the ‘lingua franca’ enabling a global reach, versus local languages that provide a better cultural ‘fit’.

Image: M. Adiputra, CC BY-SA 3.0 via Wikimedia Commons

Now from theory to practice: if you give a global audience free access to (nearly) 20,000 books and chapters in several languages, spanning many subjects, will they all choose books in English?

In a newly published paper, we systematically researched the preferences of readers from one hundred countries. By looking at the ten most downloaded books in each country, we can measure the focus on regional topics by counting the books written in languages other than English.
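The counting step itself is straightforward; the sketch below shows the idea in Python with pandas, using invented data and assumed column names rather than the actual dataset from the paper.

```python
# Sketch: per country, take the ten most downloaded titles and count how many
# of them are not written in English.
import pandas as pd

usage = pd.DataFrame({
    "country":   ["BR", "BR", "BR", "JP", "JP", "ZW"],
    "title":     ["t1", "t2", "t3", "t4", "t5", "t6"],
    "language":  ["Portuguese", "English", "Portuguese",
                  "Japanese", "English", "English"],
    "downloads": [900, 850, 400, 700, 650, 300],  # invented figures
})

top10 = (
    usage.sort_values("downloads", ascending=False)
         .groupby("country")
         .head(10)          # ten most downloaded titles per country
)
non_english = (
    top10.assign(non_english=top10["language"].ne("English"))
         .groupby("country")["non_english"]
         .sum()
)
print(non_english)
```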

Figure: Books popular in multiple countries

The outcomes of this study do not fit the story of English-language publications as the only or main source of scholarly communication. There is a demand for regionally focused titles, countering the narrative of the dominance of English as the language of scholarly communication. Instead, this study supports the value of bibliodiversity.

Read the paper here:

Snijder, Ronald. 2022. “Big in Japan, Zimbabwe or Brazil – Global Reach and National Preferences for Open Access Books”. Insights 35: 11. DOI: http://doi.org/10.1629/uksg.580

The OAPEN Library and the origin of downloads – libraries & academic institutions

On a regular basis, we look at the download data of the OAPEN Library and where it comes from. While examining the data from January to August 2021, we focused on the usage originating from libraries and academic institutions. Happily, we found that more than 1,100 academic institutions and libraries have used the OAPEN Library.

Of course, we do not actively track individual users. Instead we use a more general approach: we look at the website from which the download from the OAPEN Library originated. How does that work? For instance, when someone in the library of the University of Leipzig clicks on the download link of a book in the OAPEN Library, two things happen: first, the book is directly available on the computer that person is working on, and second, the OAPEN server notes the ‘return address’: https://katalog.ub.uni-leipzig.de/. We have no way of knowing who started the download; we just know the request originated from the Leipzig University Library. Furthermore, some organisations choose to suppress sending their ‘return address’, making them anonymous.

What is helpful to us is the fact that aggregators such as ExLibris, EBSCO or SerialSolutions use a specific return address. Examples are “west-sydney-primo.hosted.exlibrisgroup.com” – pointing to the library of Western Sydney University – or “sfx.unibo.it” – coming from the library of the Università di Bologna. In this way, many academic libraries can also be identified from their web address. Some academic institutions only display their ‘general’ address.
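To give an idea of how such return addresses can be mapped to institutions, here is a small Python sketch; the pattern list contains only the examples mentioned above and is not the actual mapping we use.

```python
# Sketch: map a referrer ('return address') to a library or aggregator.
from urllib.parse import urlparse

KNOWN_REFERRERS = {
    "katalog.ub.uni-leipzig.de": "Leipzig University Library",
    "west-sydney-primo.hosted.exlibrisgroup.com": "Western Sydney University (via ExLibris)",
    "sfx.unibo.it": "Università di Bologna (via SFX)",
}

def classify_referrer(referrer: str) -> str:
    host = urlparse(referrer).netloc or referrer
    return KNOWN_REFERRERS.get(host, "unknown or suppressed")

print(classify_referrer("https://katalog.ub.uni-leipzig.de/Record/123"))
print(classify_referrer(""))   # organisation suppresses its return address
```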

Figure: Academic libraries and institutions, sorted by type

As mentioned before, our analysis delivered over 1,100 – 1,121 to be exact – different addresses. The chart displays those addresses divided by type, and we see that many academic libraries do not just rely on aggregators such as ExLibris, but also give direct access to the OAPEN Library through their catalogues. The metadata of the OAPEN Library is freely available under a CC0 license, and can be downloaded as a MARCXML file to ensure easy library integration.
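As a small illustration of that integration route, the sketch below reads a MARCXML export with the pymarc library; the filename is a placeholder and pymarc is just one of several MARC parsers a library could use.

```python
# Sketch: load a MARCXML metadata export and print the title field (245)
# of the first few records. "oapen_metadata.xml" is a placeholder filename.
from pymarc import parse_xml_to_array

records = parse_xml_to_array("oapen_metadata.xml")
for record in records[:5]:
    print(record["245"])   # MARC field 245: title statement
```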

Which libraries and institutions are the biggest users of the OAPEN Library according to this data? The most downloads come from MediaLibraryOnLine, the first Italian network of public, academic and scholastic libraries for digital lending; the Bodleian Library of the University of Oxford; and the Universidad Peruana de Ciencias Aplicadas.

We are happy to see that our collection is finding its way to libraries and academic institutions all over the world!

Finding relevant books without sacrificing your privacy

Web retailers such as Amazon.com are able to find just the right book for you. This is a great feature, but it comes at a cost: its recommendations work because it is storing information about you. The better it knows you, the better its recommendations.

At OAPEN, we do not track people. Instead, we used the full text of the open access books and chapters in our collection. In an experiment – based on over 10,000 titles – we took the complete text of a book, cut it up into blocks of three consecutive words (called trigrams) and filtered out all the common phrases. This leaves you with a small group of terms that are unique to that particular book. The next phase is finding other titles that share the same terms. The more terms they share, the more they are connected.
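The sketch below shows the idea in simplified form in Python: extract trigrams, drop a (tiny, illustrative) list of common phrases, and see which distinctive trigrams two texts share. The real procedure, including how the common phrases are determined, is described in the article cited below.

```python
# Sketch: compare two texts by the three-word phrases (trigrams) they share,
# after removing common phrases.
import re
from collections import Counter

def trigrams(text):
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(" ".join(words[i:i + 3]) for i in range(len(words) - 2))

def distinctive_terms(text, common_phrases):
    return {t for t in trigrams(text) if t not in common_phrases}

def shared_terms(text_a, text_b, common_phrases):
    return distinctive_terms(text_a, common_phrases) & distinctive_terms(text_b, common_phrases)

common = {"in this chapter", "on the other"}       # illustrative stop-list
book_a = "open access monographs in this chapter on digital scholarship"
book_b = "a study of open access monographs and digital scholarship networks"
print(shared_terms(book_a, book_b, common))        # -> {'open access monographs'}
```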

Using this algorithm helps to find books that are very similar: if you are interested in a certain book, you might want to download these related books as well. However, it can also find books that are a little less similar: you might use this to expand your research, or to create a collection of books. Surprisingly enough, this algorithm can also find translations; it even works across languages.

Finding related titles in this way does not have to be confined to the OAPEN Library. The same method can be applied to other collections of open access books or even open access journal articles.

More information can be found in this article:

Snijder, R. (2021). Words Algorithm Collection—Finding closely related open access books using text mining techniques. LIBER Quarterly: The Journal of the Association of European Research Libraries, 31(1). https://liberquarterly.eu/article/view/10938

It’s the system that counts

You would expect it to be simple: when somebody downloads a book from the OAPEN Library, the system adds one to the total number of downloads. After a while you put the numbers in a report, and share it with the world. Sadly, the reality is more complex. All the books and chapters can be downloaded by everybody, including automated processes (bots). Also, if you think of downloads as a measure of impact, it becomes tempting to inflate the numbers by downloading a certain book again and again.

So, the raw download numbers need to be filtered, in order to give a more realistic indication of the true impact. Many libraries use the COUNTER Code of Practice as a standard, which enables them to compare the data from different sources. However, many online platforms measure their visitors using Google Analytics. The OAPEN Library uses both (but we only report the COUNTER data). Together with the migration to a new platform, a new version of the COUNTER reporting (Release 5) was introduced – a good moment to compare Google Analytics (GA) with COUNTER Release 5 (R5).
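To make one kind of filtering concrete, here is a small Python sketch of double-click filtering: rapid repeat downloads of the same title by the same client are collapsed into a single count. The 30-second window and the event format are assumptions for the example; the COUNTER Code of Practice defines the actual rules.

```python
# Sketch: count downloads, ignoring repeats of the same (client, title) pair
# that arrive within a short window.
from datetime import datetime, timedelta

def filtered_count(events, window=timedelta(seconds=30)):
    """events: iterable of (client_id, title_id, timestamp), sorted by time."""
    last_seen = {}
    count = 0
    for client, title, ts in events:
        key = (client, title)
        if key not in last_seen or ts - last_seen[key] > window:
            count += 1
        last_seen[key] = ts
    return count

events = [
    ("client-1", "book-1", datetime(2021, 3, 1, 12, 0, 0)),
    ("client-1", "book-1", datetime(2021, 3, 1, 12, 0, 10)),  # repeat within 30 s
    ("client-2", "book-1", datetime(2021, 3, 1, 12, 5, 0)),
]
print(filtered_count(events))   # -> 2
```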

Comparing the number of monthly downloads is simple: where GA reports over 1 million downloads per month, R5’s stricter filtering results in around 400,000 downloads. Again, when we look at the details, the reality is more complex. For instance, comparing the number of downloads per country shows large differences for the USA, France, China and Russia. In contrast, the numbers for Australia, Canada and Austria are virtually the same. When we compare the usage data of each title, the differences are even less simple to explain. You would expect that GA and R5 more or less agree about the order of books: which book was downloaded the most, which one comes after that, and so on. But that is very much not the case.
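One way to quantify how much the two rankings disagree is a rank correlation; the sketch below computes Spearman’s correlation over a handful of invented per-title figures in Python with pandas.

```python
# Sketch: compare the per-title ranking of GA and R5 download counts.
import pandas as pd

per_title = pd.DataFrame({
    "title":        ["A", "B", "C", "D"],
    "ga_downloads": [5000, 3200, 800, 750],   # invented figures
    "r5_downloads": [900, 1400, 600, 100],    # invented figures
})

# Spearman correlation = Pearson correlation of the ranks
rho = per_title["ga_downloads"].rank().corr(per_title["r5_downloads"].rank())
print(f"Spearman rank correlation: {rho:.2f}")
```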

GA and R5 have made their own choices about what is reported and what is not. One metric is not better than the other, but we should be open about the choices made. After all, open access book metrics are complicated and we can only benefit from clarity.

More details about usage data and the two systems can be found in:

Ronald Snijder, “Open access book usage data – how close is COUNTER to the other kind?,” Insights 34 (1): 9. (2021), https://doi.org/10.1629/uksg.539.
Submitted on 11 November 2020 and published by UKSG in association with Ubiquity Press on 28 April 2021

You might also be interested in the OAeBU DataTrust Pilot or this OBP blog. Things get even more complex when you try to compare different platforms…

Google Scholar and the OAPEN Library

Since October 2015, the contents of the OAPEN Library have been indexed by Google Scholar, as was proudly announced by Frances Pinter. Today, Google Scholar lists over 36,000 books and chapters that can be found in the OAPEN Library. In the last year alone, over 2,800 titles were added. This is a huge success.

This number is so large because Google Scholar does not just index books; it also identifies separate chapters published in edited volumes in the OAPEN Library. For instance, the chapters “De-globalisation, value chains and reshoring” and “Transformative paths, multi-scalarity of knowledge bases and Industry 4.0” both link to the same book in the OAPEN Library: http://library.oapen.org/handle/20.500.12657/37355.

What have we done to optimize the OAPEN Library for Google Scholar? Starting in November 2019 we had several discussions with colleagues at Google Scholar, resulting in an updated specification for the metadata to be used in our new DSpace environment. Each landing page – which describes a book or a chapter – also contains machine-readable metadata. This metadata is read by the Google Scholar crawler.
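As an illustration of what that machine-readable metadata looks like in practice, the sketch below fetches a landing page and prints its bibliographic meta tags. It assumes the requests and BeautifulSoup libraries; the “citation_” prefix is the common convention for scholarly meta tags, used here as an assumption rather than a full description of every tag OAPEN emits.

```python
# Sketch: print the bibliographic <meta> tags of an OAPEN landing page.
import requests
from bs4 import BeautifulSoup

url = "http://library.oapen.org/handle/20.500.12657/37355"
page = requests.get(url, timeout=30)
soup = BeautifulSoup(page.text, "html.parser")

for tag in soup.find_all("meta"):
    name = tag.get("name", "")
    if name.startswith("citation_"):
        print(name, "=", tag.get("content"))
```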

It’s nice to see how this has worked out. For example, in the last 7 days (8 to 14 May) almost 17,500 people visited the OAPEN Library through Google Scholar. A result we are very happy with, and we plan to continue working with Google Scholar to optimise indexing.

Next, we will focus on optimizing Google Scholar setups for monographs versus edited volumes, which are handled differently by the indexing system.

Think.Check.Submit – books

New resource for books added to Think. Check. Submit.

Further to its announcement in October, the Steering Committee of Think. Check. Submit. is delighted to announce a new addition to its resources: a checklist for authors wishing to verify the reliability and trustworthiness of a book or monograph publisher.

Drawing on existing expertise from within the group and from the experience of its newest partner, OAPEN, the checklist for books offers sound advice along the lines of the recommendations already offered by the journal checklist.

The rest of the Think. Check. Submit. website has also been updated to make it more relevant for both books and journals.

Eelco Ferwerda, Director of OAPEN, said: “It is clear that the same issues confronted by authors looking to publish in journals, also confront authors seeking to publish a book or a chapter. What should an author look for when considering the submission of a manuscript of a book or chapter? It is very common for a journal to have its own homepage whereas, for chapter publishing, more careful scrutiny is needed of the publishing entity producing the books. I am sure that the addition of this new checklist will provide a much needed and welcome resource to authors publishing this way.”

Sofie Wennström of LIBER said: “Librarians are often asked for advice about trustworthy publishing outlets. This extension of the Think. Check. Submit. checklist includes solid recommendations for how to choose a publisher for books. It is a welcome addition to the resources a librarian can use when giving advice as it is evident that the book publishing market may offer as many, if not more, pitfalls as journal publishing. We are happy that Think. Check. Submit. offers an opportunity to avoid some of those obstacles.”

About Think. Check. Submit.

Think. Check. Submit. helps researchers identify trusted journals and publishers for their research. Through a range of tools and practical resources, this international, cross-sector initiative aims to educate researchers, promote integrity, and build trust in credible research and publications.

Think. Check. Submit. provides a checklist that guides researchers through the process of deciding which journals – and now books – are best for their research. The process is intended to go beyond individual journal decisions to help researchers build up their journal evaluation skills. The checklist is now available in nearly 40 languages.

Think. Check. Submit. is run, and funded, by a coalition from across scholarly communications in response to discussions about deceptive publishing. Details of the organizations contributing can be found at https://thinkchecksubmit.org/about/. The current Think. Check. Submit. committee can be found at http://thinkchecksubmit.org/faq/committee/
