“I fight for the users.”
Tron: Legacy

As much as I hate library vendors because of how their profit model relies on exploiting our contrived and cultish reverence for tradition, my heart goes out to them for having to deal with a customer base such as theirs. Given many librarians’ blind opposition to revised methods for just about anything, getting us adjusted to new systems requires more time and effort than it should.

The degree to which the developers of these products elicit and act upon feedback from their subscribers varies considerably. For example, the way that ScienceDirect filters lists of recommended articles from the bX service to show only records from Elsevier-owned titles is not the sort of behavior I suspect any librarian has ever asked for. One person’s defect is another’s feature.

Our main Library Services Platform, Ex Libris’ Alma, is very much a “perpetual beta.” Frequent updates come with bug fixes and new features. However, any revision to software can also break stuff that used to work. Truth be told, I’m starting to question whether things are really getting any better. Maybe it’s fatigue from (among other things) receiving an increasing number of linking error messages, but for several aspects of the system, the question of whether they’re actually improving with each new release doesn’t have a clear answer.

Case in point: our latest re-index job took over four days to complete. Granted, we’re now dealing with over 13.5 million records, and it ate up a lot of extra server memory because we recently had to remove most of the COVID-19 trials—haven’t you heard? The pandemic, at least according to those content providers, is apparently over—yet it also feels as if this sort of thing shouldn’t happen on a production system that’s been in place for the better part of a decade.

I’ve gone back and forth on whether we should be accepting trial access to products we’re almost certainly never going to buy in the first place. When the lockdown first hit, quite a few publishers opened up their paywalls, and it seemed relatively harmless to accept temporary rights to view a slew of normally expensive content, regardless of the message it would send, especially once access was shut off.

Besides, if you believe that linking to unrestricted yet shady aggregators is acceptable, where are your entries for Sci-Hub and Libgen on the library’s list of databases? Until we can successfully mandate that our constituents only publish their research in open access and open data platforms, who are we to forbid them from using any means necessary to quickly obtain desired scholarship? After all, while certain ways of sharing copyrighted material may cross the line (library stacks excepted), simply downloading it isn’t illegal, at least according to some interpretations of the law.

Moreover, the likes of arXiv and SPARC are decades old. Why do we still need to beseech faculty members, asking whether we can, pretty please with sugar on top, stop subscribing to this admittedly prestigious title with an accompanying cost per use of $578.83? And no, I didn’t just make that number up. In light of everything else going on nowadays, what kind of a library would choose to continue such expenditures when faced with threats to its ability to pay its workers?

Throughout the coronavirus outbreak, along with the need for technical troubleshooting, our overall question intake has remained steady, although most other usage metrics (e.g., gate counts and web traffic) have decreased. Irrespective of the pandemic, I wish these types of numbers were lower.

When I started my job, I taught classes on how to use Boolean operators, helped people at the Reference Desk who didn’t know to omit initial articles when searching by title, and regularly had to explain why you couldn’t use your social security number to sign in. Now I deal with a steady stream of defect reports generated because our linked data system is rather unintelligent.

In each of these cases, successful efforts to reduce the number of searches and questions patrons must perform to complete their research will likewise decrease our usage statistics. We should therefore avoid the broken window fallacy when associating our measured value and usefulness with this kind of transactional data. It’s also worth noting Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure.

With regard to determining the best ways of improving the systems we use, aside from their internal development process, Ex Libris offers multiple channels for its customer community to submit requests for how future versions of the software should operate. There is a formal balloting process for enhancements, which literally requires vote buying, in that only paying members of a user group are allowed to submit and select proposals, and a newer, more open polling forum, where anyone with an e-mail address may propose changes and upvote a limited number of them.

One problem with the enhancement process is that, as a consortial customer, we have desired modifications to various specialized features which, in spite of their usefulness, will never receive enough votes because too few customers would benefit from them. Conversely, there have been other top vote-getters that Ex Libris has responded to with an essentially dismissive “yeah, but it would be hard to program this,” bringing to mind the kind of lip service many “We the People” petitions have elicited from the White House.

Furthermore, a lot of the ideas put forth by my fellow librarians appear to be less focused on giving our users what they want than on ensuring that we can keep teaching the same procedures for conducting research, year after year. “Make discovery layers function like OPACs,” in other words, and halt improvements that make them behave like Google, especially when it comes to presenting fuzzy search results. The contorted reasoning behind why end users would need to view the MARC record is a classic example, while “Remove CDI constant expansion of results” is a more recent one.

Consider how so many of our practices continue in the digital age for no apparent logical reason. We can’t be making decisions about user interfaces in a vacuum, nor should we be so brazen about dogmatically seeking only evidence, particularly usability data, that supports our preconceived and immovable design opinions. Another danger is the type of survivorship bias evident when we make design choices based on the perceptions of librarians who only listen to patrons who criticize existing systems.

All too often, I’ve seen this process play out:

  1. A library vendor conducts user testing.
  2. The vendor develops new or revised features based on their results.
  3. Librarians voice complaints about the changes.
  4. Not wanting to lose business, the vendor rolls back their updates, possibly before they even see the light of day.

The problem, ultimately, is that the true customers of these library vendors, at least as far as the front-end display is concerned, are library users, even though we’re the ones paying the bills and making the purchasing decisions.

Just as a college freshman is ill-suited to design a library research tool, calling on librarians to dictate how these interfaces should operate is, in a way, equally inappropriate. When it comes to shaping how such products are developed, many choices are unfortunately based largely upon how librarians want them to function. We just don’t stay in our lane. On top of all that, the fact that these services are often marketed primarily to library directors is perhaps the root cause of their greatest failings of all.

Further Reading

Librarian at the University of Wisconsin–Milwaukee