Bento-style Results, Reconsidered
The video game company Blizzard Entertainment recently announced that its bungled launch of the updated version of Diablo II was due in part to a failure to anticipate how much “modern player behavior” would increase demand on its servers. Because time spent on character advancement can be optimized by rapidly creating new game instances and grinding through repetitive, not particularly meaningful play, rather than by progressing through the story mode as it was originally intended to be experienced, the bulk of the active player base commits to this kind of min-maxing.
Like it or not, meticulously planning a character build, investigating how to complete contrived achievements, and researching similar tactics, sometimes to the point of hunting for exploitable flaws in the game’s design via emergent gameplay, are all now commonplace forms of metagaming. Blizzard, it should be noted, is no stranger to harsh criticism of its product development decisions. It has been mired in a slew of other controversies, involving everything from widespread sexual harassment to stifling free speech about Hong Kong. These include a lawsuit it filed against the makers of a bot program that automated some of the aforementioned repetitive gameplay.
Learning how to learn, studying how to study, and researching how to do research are the analogous practices of metagaming in the field of education. When it comes to how people use library interfaces, bibliographic instruction certainly has its place, but can only do so much when we’re dealing with platforms that aren’t exactly designed for modern researcher behavior.
In their efforts to provide a “one-stop shopping” experience, library discovery layers pull together records from many different kinds of sources, giving rise to potential interoperability problems, among other things. There are tools available to eliminate duplicate records and group similar works, yet they are far from perfect.
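To make concrete how fragile that grouping can be, here is a minimal sketch in Python. The record schema (`title`, `author`, `format`) is an invention of mine for illustration; real MARC and vendor records are far messier, which is exactly why these tools remain imperfect.

```python
from collections import defaultdict

def normalize(text: str) -> str:
    """Crude normalization: lowercase, drop punctuation, collapse spaces."""
    kept = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return " ".join(kept.split())

def group_records(records):
    """Group records whose normalized title and author match.

    Anything that slips past the normalization (variant author forms,
    subtitles, typos) lands in its own group -- the "duplicates" our
    patrons still see.
    """
    groups = defaultdict(list)
    for rec in records:
        key = (normalize(rec["title"]), normalize(rec["author"]))
        groups[key].append(rec)
    return list(groups.values())

records = [
    {"title": "The Hobbit",  "author": "J. R. R. Tolkien", "format": "Print Book"},
    {"title": "The hobbit.", "author": "J. R. R. Tolkien", "format": "eBook"},
    {"title": "The Hobbit",  "author": "Jackson, Peter",   "format": "DVD"},
]

# The two Tolkien records merge into one group; the film stays separate.
groups = group_records(records)
```

Even this toy version hints at the trouble: change the ebook’s author string to “J.R.R. Tolkien” and the merge silently fails.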
The introduction of bad controlled vocabulary is one particular concern. It’s not just that people no longer type “microcomputers” into a search box; it’s that many current subject headings contain otherwise problematic terms. The official LCSH for the forced imprisonment of Japanese-American citizens during WWII, for instance, was “Japanese Americans—Evacuation and relocation, 1942–1945” until it was finally changed this year.
Another regularly occurring issue with our discovery layer, and this is fast becoming something of a running joke in the profession, is how book reviews are represented in results. Because items in different formats are often blended together, any search for a book title will invariably bring up reviews of it as well. We can rig the display a bit by boosting local materials in the rankings. However, there is still common confusion among our clientele about what sort of results they’re looking at when using our default “Everything” search scope.
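The boosting itself is conceptually simple. Here is a toy sketch, with an invented `is_local` flag and boost factor standing in for whatever knobs a given vendor actually exposes; I am not describing any particular product’s ranking formula.

```python
def boosted_score(record: dict, base_score: float, local_boost: float = 1.5) -> float:
    """Multiply the relevance score of locally held items.

    'is_local' and the 1.5x factor are assumptions for illustration;
    commercial discovery layers expose similar levers under other names.
    """
    return base_score * local_boost if record.get("is_local") else base_score

results = [
    {"title": "Introduction to Statistics",             "is_local": True,  "score": 8.0},
    {"title": "Review of 'Introduction to Statistics'", "is_local": False, "score": 9.0},
]

# Without the boost the review outranks the book; with it, the local
# print copy rises to the top.
ranked = sorted(results, key=lambda r: boosted_score(r, r["score"]), reverse=True)
```

Note that this only rigs the order of results; it does nothing to help a patron understand what each result actually is.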
Put yourself in the shoes of a student looking for a way out of buying a course textbook. You plug the title into the library’s search box and get this back:
Those “Online Access” and “Download PDF” links are rather tempting, never mind that on closer inspection they only lead to a one-page book review. Once this has been pointed out, it’s easier to know what to look for, but solving this relatively frequent user error one frustrated patron at a time doesn’t feel like the best approach to the situation.
A key part of our problem is that the discovery layer’s algorithm doesn’t usually know these are book reviews, only that it has a bunch of records for short articles whose metadata exactly matches the character strings it was given. This failure should be seen as a red flag, indicative of how the AI behind discovery layers is not yet capable of delivering results in the most user-friendly arrangement.
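To see why, consider a deliberately naive matcher. The field names are illustrative, and real engines use far more sophisticated scoring (TF-IDF, BM25, and so on), but the underlying blindness is the same: genre never enters the calculation, because the source records often don’t carry it reliably.

```python
def title_match_score(query: str, record: dict) -> int:
    """Score 1 for an exact (case-insensitive) title match, else 0.

    A stand-in for real relevance scoring: nothing here looks at what
    kind of thing the record describes.
    """
    return int(query.lower() == record["title"].lower())

book   = {"title": "Calculus", "genre": "monograph", "pages": 1200}
review = {"title": "Calculus", "genre": "review",    "pages": 1}

# The 1,200-page monograph and the 1-page review of it match the
# query identically, so both surface side by side in results.
```

From the matcher’s point of view, the book and its review are the same string.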
We’ve been at this for over a decade. In 2010, an industry report declared, “Library systems need to look and function more like search engines, i.e., Google and Yahoo, and Web services, i.e., Amazon.com, since these are familiar to users who are comfortable and confident in using them.” Given how propaganda spreads like wildfire through social media, maybe it’s time we stopped trying to mimic commercial platforms entirely. Algorithmic bias in discovery layers has already been documented. And whereas search engines present results for web pages, library databases contain records for books, articles, media, and much more.
Just as web searchers often have difficulty discerning the tiny gray-on-white disclaimers that say “sponsored results” (see also many modern users’ inability to save files anywhere outside the cloud, let alone recognize what a save icon actually symbolizes), some library searchers, no matter how many “What am I looking at?” iterations of instructional materials we put out there, don’t natively understand what’s being shown to them. Another problem on the rise, for example, is patrons reporting purported errors when trying to watch online movies for which we only hold records for the DVDs (or worse).
I have long viewed web-scale discovery layers as a necessary “Googlization” of the tools used in the library research process, and any segregation of our results as an unnecessary throwback to the technological silos that previously existed. My fundamental error with this approach is that although integrated results aren’t a problem for me, they obviously remain one, judging by our virtual reference traffic, for others.
I recently read a Brandon Sanderson character proclaim, “We all see the world by some kind of light personal to us, and that light changes our perception.” Perhaps this is true. My discovery layer’s vendor has a site where librarian customers can vote on product enhancements. An entry proposing an option for a divided-plates style of results currently has 5 votes. Another proposal, which demands reverting to a pre-web model of calculating results by removing query enhancement rules, is up to 614 votes.
There are messy problems with discovery right now, both inside and outside the library. We should be worried about the future, and doing more to combat bias and improve usability. If these difficulties persist, and we fail to distinguish ourselves, the future of libraries could very well be at stake. After all, as the somewhat troubled Leah warns us in Diablo III, “Even in the heart of heaven… angels can still feel fear.”