User Testing Library Search: What’s the Point?

“We must have strong minds, ready to accept the facts as they are, and to make bold, new plans based on those facts.”
Harry S. Truman

Alchemy, Phlogiston, and Library Science

Unjustified belief in traditional Chinese medicine is one reason tigers are endangered and albinos have been murdered in Africa; similar credulity is why my own health insurance premiums cover the reimbursement costs of snake oil sold under the illogical pretense that water has memory, and that the more you dilute something, the stronger it gets.

Human culture is rife with traditions like throwing salt over your shoulder, pointless sacrifices, and old wives’ tales that have no rational basis. Maintaining the ceremonial symbolism of such practices can be meaningful for tradition’s sake, but when the equivalent of divining rods gets used in a work environment, be it a hospital or a library, that’s a problem.

My favorite college professor once told me that if you really want to get to know someone, you shouldn’t look at what they like; instead, find out what really pisses them off. For me, as an information professional, it’s how people distort facts to support their ideology. When my mom forwards me the latest “Obama’s a Muslim” chain e-mail, it bothers me, not just as the black sheep Democrat of the family, but because it’s so obviously untrue.

Instead of warping reality to match our preconceptions, we need to take a more scientific approach and act based upon observed phenomena. This observe-orient-decide-act cycle is the essence of evidence-based practice and the purpose behind user testing.

Companies do this all the time; think of marketing focus groups, and of those pop-up surveys you see on websites. There’s a commercial interest in knowing consumer tastes: a company will either succeed or go out of business depending on how well it meets those needs. Libraries don’t often deal with spending behaviors, but to fulfill our mission, we need to learn how best to serve our members.

The affordances of library research tools have changed substantially since the days of early OPACs and federated search engines. As we move toward discovery interfaces and linked data, what were once purely technological impediments are giving way to commercial interests, filter bubbles, and other delivery models that diminish the value of accessible content. To give library users the best search experience, it remains important to make improvements based on what doesn’t work for them.

Know Your Limits

There is a skewed user base in all of this; it can’t be helped. We have to extrapolate, as best we can, from a self-selected group of participants that is hopefully somewhat representative of the entire population we serve. This is especially problematic when dealing with squeaky wheels. I don’t just mean that single faculty member who has the director’s ear, such that if they don’t like something then nobody does, but also the part of your population that frequents a service desk to ask for help. This is the (thankfully, in our case, dwindling) subset of customers who can’t figure things out on their own. It is safe to assume that the majority of people, who now use the library autonomously, have less difficulty navigating its interfaces.

In usability studies, subjects are placed in an artificial environment and know they are being watched by people whose products they are asked to critique. Subjects might deliberately or unconsciously adjust their behavior because of this. Those with an axe to grind may similarly be disingenuous about how difficult a website is to use.

We’re also not designing and maintaining a search engine from scratch. There are limited customization options available in vendor-supplied interfaces, which are nowadays rapidly developed and frequently updated. Customer groups even vote on feature enhancements that they would like to see programmed into future versions. This is certainly the preferred way to have adjustments made to licensed products that we pay for. Unless you have in-house developers with spare time, installing add-on scripts that rewrite a system’s core features may create an unsustainable workload.
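
To make the tradeoff concrete, below is a minimal sketch of what such an add-on script typically looks like, assuming a hypothetical vendor search page; the selectors and wording are invented for illustration and belong to no actual product.

    // Hypothetical add-on script for a vendor search page. The selectors
    // ("#search-submit", ".facet-source-type") are invented for this sketch;
    // a real product ships its own markup, which can change in any release
    // and silently break scripts like this one.
    window.addEventListener("DOMContentLoaded", () => {
      // Local wording choice: relabel the vendor's search button.
      const searchButton = document.querySelector<HTMLButtonElement>("#search-submit");
      if (searchButton) {
        searchButton.textContent = "Search the Library";
      }

      // Hide (rather than rewrite) a facet that confuses our users.
      const confusingFacet = document.querySelector<HTMLElement>(".facet-source-type");
      if (confusingFacet) {
        confusingFacet.style.display = "none";
      }
    });

Even a script this small is a standing maintenance commitment: someone has to retest it after every vendor release.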

We don’t, for the most part, customize our ProQuest, EBSCOhost, and other search platforms. A new discovery system is more akin to these hosted services than to a local catalog needing custom branding. Just as many libraries fit within a larger organizational mission (one that may even mandate a certain website template for the greater good), so we should start looking beyond our own individual library needs and promote the development of more usable interfaces from content providers, without the need for further tinkering on our end.

All of this isn’t reason enough to pack it in and not bother, but it is reason to consider the limitations of any results from user testing. It is best to focus efforts on the basics: wording choices, simple styling, and existing configuration options.

Check Your Prescriptivism

Given the puzzling jargon and vendor names that we have to work with, it’s obvious that you can’t make anything totally idiot-proof. Our fire alarm goes off about once a week because someone doesn’t notice the big red “Emergency Exit Only” sign on the door. When a significant portion of your user base is using a website in what you call the “wrong” way, however, it’s an indication that the design itself is what’s faulty.

Working with designers to alter their original artwork or wording choices can be a delicate matter. Librarians in particular can be overzealous about the details of a website’s content and presentation. That is the very reason to gather data from user testing: it serves as objective supporting evidence for such requested revisions.

What People (Should) Want

Some patrons ask where to execute an advanced search query based on Boolean logic. One of the benefits of a discovery layer, compared to an OPAC, is that in addition to merging articles and books in a single search engine, there are ways to refine your results after the fact. These post-search delimiters are a more efficient way of searching: they’re not always needed, and you can adjust results without having to redo a search completely. Using a new system like this makes for a better search experience, but only once you learn to do it right.
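
As a concrete sketch of what makes post-search delimiters efficient, consider how a discovery layer might layer facet refinements onto an existing query. The endpoint and parameter names below are hypothetical, invented for illustration rather than taken from any particular vendor’s API.

    // Hypothetical request builder for a discovery-layer search API. The base
    // URL and the "q"/"facet" parameter names are invented for this sketch;
    // real products (Primo, Summon, EDS) each define their own APIs.
    const baseUrl = "https://discovery.example.edu/api/search";

    function buildSearchUrl(query: string, facets: Record<string, string> = {}): string {
      const params = new URLSearchParams({ q: query });
      // Each facet is a post-search delimiter: it narrows the existing result
      // set without the user having to rewrite the query itself.
      for (const [field, value] of Object.entries(facets)) {
        params.append("facet", `${field}:${value}`);
      }
      return `${baseUrl}?${params.toString()}`;
    }

    // One broad search, then refinements layered on afterward:
    console.log(buildSearchUrl("climate policy"));
    console.log(buildSearchUrl("climate policy", {
      format: "peer-reviewed-articles",
      date: "2010-2015",
    }));

The query string stays put while the facets accumulate, which is exactly what lets users adjust results without starting a search over.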

This is an example of how much of library research is not intuitive; it only makes sense once you learn it, or at least once you jettison research methods from the card catalog era. A library search interface needs to strike a pragmatic balance between responding to user needs and preferences and molding efficient behavior, given the available technology.

Logistics

To recruit a good sample population, it helps to give people something meaningful (e.g., free food and/or gift certificates) to compensate them for their time. At UWM, the procurement process involved a fair amount of red tape, but we were eventually able to offer gift cards to volunteers. We then advertised the study via signs in the library and social media.

We were fortunate to have a campus Usability Testing Laboratory available to conduct the sessions. We worked with them to prepare a list of tasks for users to perform, scripts for introducing the study’s purpose and debriefing subjects, and qualitative follow-up questions.

Moving Forward

One college doesn’t even call it Search@UW, for example, while our flagship campus has sunk significant costs into maintaining a separate system in parallel. It is hoped that the usability testing results will help promote more consistent and efficient efforts to streamline work on our common system.

Any experiment can be subjected to nitpicking and biased interpretations. Rather than being predisposed to discount data that you might not agree with, it’s important to accept results and take positive action based on them, even if they don’t support your design philosophy. Otherwise you’re just practicing quackery.
