The Discovery Dark Ages: How Filter Bubbles, Dark Patterns, and Algorithms Propagating Bias Impede the Spread of Knowledge

John Hubbard
23 min read · Sep 13, 2019

§ Sherpas and Chimpanzees

“It is common sense to take a method and try it: If it fails, admit it frankly and try another. But above all, try something.”
—Franklin D. Roosevelt

Consider the following thought experiment: Pretend you put five monkeys in a room, along with a ladder and bananas that are tied to the ceiling. Our fellow primates are smart enough to use tools, and so soon enough, one of them starts climbing the ladder to reach the bananas. At this time, however, an alarm sounds and the entire room is sprayed with ice cold water.

Operant conditioning being what it is, the five monkeys quickly learn not to climb the ladder. You then take one of these monkeys out of the room, and introduce a new one. The rookie monkey will, of course, attempt to climb the ladder, but will also, just as certainly, be stopped in various ways from doing so by the four veteran monkeys.

This substitution process can be repeated until you have all rookie subjects in the room, and by now you can also disconnect the alarm and water hose, because none of the monkeys will try to climb the ladder, although they don’t really know why. Which leads me to the seven most annoying words I routinely hear at my workplace: “But we’ve always done it that way.”

Twenty years ago, it was in the context of my removing the link to the telnet version of the catalog from the library homepage. And many times since, whether it was about giving federated searching a fair shake, converting paper forms to their online equivalents, providing a chat reference service, ditching an approval plan in favor of patron-driven acquisitions, or, especially lately, promoting the different research methodologies necessitated by adopting new discovery systems, those seven dirty words—and all of the contentiousness that comes with them—keep resurfacing.

We should continually ask ourselves what we’re doing for the profession to break the cycle of unwarranted complacency typified by those poor apocryphal monkeys. Change can be a good thing. That said, search engines are now being used to deliver results, whether people want them that way or not, that are not the best ones in terms of relevance, quality, and accuracy, and that sometimes come from sources which shouldn’t be desirable in the first place.

“Often, the less there is to justify a traditional custom, the harder it is to get rid of it.”
—Mark Twain

Before I get into that, I do want to touch on our resistance to new ideas. Many of the criticisms directed at them boil down to little more than, “that’s not how it used to work,” and a resulting unwillingness to take proper advantage of available technology. The mentality behind the claim that further changes to the methods employed for obtaining information are too risky, presumably because everything works well enough as it is, is the same line of thinking that gave us the original Dark Ages, after all.

Discovery systems are a means to an end, and in this case, when a superior means emerges, there’s simply no reason not to jettison the old way of doing things. This requires some adaptability on our part, and a willingness to unlearn habitual yet inefficient procedures in favor of better ones.

Post-search delimiters, for example, have largely replaced the need for Boolean operators and other archaic search syntaxes. Meanwhile, the three hundred different databases we at times need to rely upon for conducting systematic research still necessitate our playing a role as instructors in their varied and often cumbersome native interfaces. As for the ongoing efforts to provide easier ways of searching, however, we have a tendency to protest too much against those changes which would render the value we provide in that process obsolete.
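To make the contrast concrete, here’s a minimal sketch, with invented records and field names, of the difference between encoding constraints in the query itself and narrowing afterwards with post-search delimiters:

```python
# A contrived illustration of post-search delimiters (facets) versus Boolean
# query syntax. The records and fields are invented for this example.

results = [
    {"title": "Climate change and coral reefs", "year": 2018, "peer_reviewed": True},
    {"title": "Climate blog roundup",           "year": 2019, "peer_reviewed": False},
    {"title": "Coral bleaching field study",    "year": 2009, "peer_reviewed": True},
]

# The archaic way: users compose the constraints up front, in query syntax,
# e.g.  (climate AND coral) AND PY>2015 AND peer_reviewed:yes
# The modern way: run a broad keyword search, then let users click facets.
facets = {"peer_reviewed": True, "year_from": 2015}

narrowed = [r for r in results
            if r["peer_reviewed"] == facets["peer_reviewed"]
            and r["year"] >= facets["year_from"]]

print([r["title"] for r in narrowed])  # ['Climate change and coral reefs']
```

Either path ends at the same result set; the difference is that the facets require no syntax training up front.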

There is consequently a growing disconnect between how non-librarians prefer to find information and the ways that some of us insist are still the best. Yet we cannot retreat to the reference desk and simply shake our heads at students who are off using Wikipedia. It’s proven much more productive, in this case, to educate people about the benefits and vulnerabilities of relying upon Wikipedia for research.

“Helping people sail through misinformation rather than being the only provider of information is the job of the Library.”
—Kathleen Smeaton

Imparting the procedural knowledge required to use a library gave us not only job security but also a sense of purpose. Today, however, we are much less gatekeepers, focusing instead on promoting critical thinking skills and the like. It’s no wonder that user-friendly platforms which provide unmediated access to information, along with the removal of those impediments we built our careers working around, can be viewed as threatening by some of us accustomed to traditional practices.

This is not to say that the criticisms made against emerging technology aren’t at times justified. We should be wary of anything presented as a solution in search of a problem, because in the long run it will likely do more harm than good. Federated search, in retrospect, and especially compared to today’s interfaces, was a horrible tool for searching, even though it laid the groundwork for the kind of integrated discovery layers we have today.

As someone who works with relatively rapidly changing information technology, I experience a fair number of frustrations myself. I like having things discrete and nailed down, yet due to the nature of my job, something invariably comes out of left field in the middle of a project, pulls the rug out from under me, and sends me back to the drawing board to reconsider what I had hoped was finalized.

Nothing in web librarianship is set in stone. Nor should it be, because you need to continually gather new evidence and reassess what you’re providing for patrons. Realizing that we don’t need to be so tied to the past, as we make genuine efforts to provide better means of conducting research, can be as liberating as it is heartbreaking.

There’s also an awkwardness to new technologies that can make things a little too polished. Think of the rampant use of Auto-Tune when it first came on the scene; those oversaturated HDR images that are certainly eye-catching but also look incredibly fake; and the uncanny valley that animation and video game graphics are only now emerging from the other side of, as the advent of deepfakes has demonstrated. Could some library search tools, similarly, be excessively slick?

“There’s another dumbass on the mountain”
—Those Darn Accordions

Whether or not we are simplifying search too much, sacrificing quality for convenience, is a valid concern. Our desire to create elegant, idiot-proof interfaces that adhere to the adage of “don’t make me think,” and thereby avoid confusing novice users, may also cause us to provide resources which inordinately frustrate advanced researchers. Powerful technology can be dangerous without the proper training. Think of the latest celebrity to wrap their high-powered race car around a telephone pole, likely out of a failure to understand that a Lamborghini will not accelerate like a normal vehicle.

There are times, moreover, when offering too much assistance can even be a net hindrance. The hundreds of corpses on Mount Everest, many expert climbers have argued, are there because, aside from the storms, icefalls, and overcrowding, out-of-shape aristocrats are using porters and other aids to ascend to where they obviously don’t belong, as made apparent by the fact that they freeze to death. “Never use supplementary oxygen tanks in the Himalayas” is a maxim that could be restated within our field as, “Don’t use a simple search box for scholarly research,” at least without first understanding what exactly it does.

We’ve all seen people mindlessly clicking on links without reading the accompanying instructions. The same respect and attention that an implement such as a table saw demands, along with the requisite knowledge of how the device you’re using operates, should also apply to those seeking, for example, medical expertise, lest they stumble upon and take as gospel the assertions found in an anti-vax Facebook group.

“There are more powerful and complicated search tools that are fiddly to work with but if you master them, they can do amazingly.”
—Aaron Tay

There are certainly times when a discovery layer is not the best or only tool for the job, such as when performing comprehensive research, which must still be supplemented by subject databases, citation chasing, browsing, trawling through social media, tracking down grey literature, and locating other unpublished data. Human labor and expertise are not yet entirely disintermediated from those methods.

My discovery process at our local public library involves me looking at their new books shelf and finding items somewhat serendipitously. The old-fashioned ways of doing things are at times, believe it or not, perfectly fine. Certain skeuomorphic designs may also actually be more user-friendly, as evidenced by the United States Navy’s recent decision to replace their new touchscreen controls with older mechanical ones.

In some cases, modern search interfaces are the equivalent of those warship touchscreens. While evolving discovery platforms continue to modify the ways in which they accommodate, and thereby mold, a variety of information-seeking behaviors, efforts to improve upon the search process face a growing threat in the form of innovations with unforeseen negative consequences, namely those which deliberately curtail a user’s ability to be provided with optimal results.

Our situation is analogous to having Icarus, in the library, with the discovery tool. Yet as the predominance of library-less research platforms and their impact demonstrates, we can no longer mandate that only smart patrons be allowed to make use of dumbed-down search engines. We simply need safer tools.

§ The problem with capitalism…

“The dose makes the poison.”
—Paracelsus

Custom-tailored search engines essentially gerrymander your results, making them not properly representative of what’s out there. Presenting such a skewed version of reality can be dangerous, such as by reinforcing confirmation biases, so it seems prudent to deliver only the minimum effective dose of any such features, and to err more often on the side of giving people the most authoritative answers to their questions rather than just telling them what they want to hear.
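As a thought experiment, here’s a contrived sketch of that “minimum effective dose” idea, with made-up scores and weights: rank primarily by authoritativeness, and let personal affinity supply only a small nudge.

```python
# A contrived sketch of dosing personalization: blend a small amount of user
# affinity into an authority-based score. All numbers here are invented.

def blended_score(authority, affinity, dose=0.1):
    """Rank mostly by authoritativeness, nudged slightly by user affinity."""
    return (1 - dose) * authority + dose * affinity

results = [
    ("peer-reviewed meta-analysis", 0.95, 0.20),  # authoritative, unappealing
    ("viral blog post",             0.30, 0.95),  # appealing, unauthoritative
]

for name, authority, affinity in results:
    print(name, round(blended_score(authority, affinity), 3))
# With dose=0.1 the meta-analysis still wins (0.875 vs. 0.365); crank the
# dose toward 1.0 and the system starts telling people what they want to hear.
```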

Commercial products built to maximize repeat use, regardless of the accuracy of what is retrieved, are not bound by such considerations. There are times, furthermore, when even an unpersonalized interface does not necessarily produce results that are objective or fair. Pitted against our efforts to provide better relevancy rankings is the business of search engine optimization, especially when carried out by those seeking to game the system so that it presents skewed results in their favor and unduly increases their exposure.

But before we get more into that co-evolutionary arms race, let me back up a minute and review some of the underlying assumptions that we need to make if we’re going to be discussing “good” or “bad” results and what those value judgements entail. This is complicated because judging relevance can be different for identical queries based on the viewpoint of who’s doing the searching.

“PHILOSOPHY: Basically, this involves sitting in a room and deciding there is no such thing as reality and then going to lunch.”
—Dave Barry

We can at least agree, hopefully, that there is such a thing as objective truth. Chairs exist and trees exist, for instance, and a book published in 2015 was released after another book first printed in 1978. When it comes to more inherently subjective determinations, however, such as whether and how much that 2015 book should be weighted as more desirable, due to its recency, when ranking results by relevance, computers are at times obviously dumber than us.

The fact that a half-dozen book reviews can show up in a title search using an integrated discovery layer, ahead of the record for the book itself, illustrates how much fine-tuning is still needed in calibrating how results are ranked. The algorithms in this case aren’t doing anything wrong, per se; they’re behaving exactly as they’re programmed to, by matching queries with results, and a book review is obviously going to do that.
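Here’s a toy demonstration of how that happens, with invented records and weights: a scorer that rewards term overlap and recency, as naive relevance calculations do, will happily put the reviews first.

```python
# A toy scorer showing why naive relevance ranking favors book reviews over
# the book itself. Records, fields, and weights are invented for illustration.

from datetime import date

records = [
    {"title": "The Example Monograph",           "type": "book",   "year": 1998},
    {"title": "Review of The Example Monograph", "type": "review", "year": 2018},
    {"title": "The Example Monograph (review)",  "type": "review", "year": 2019},
]

def score(record, query, today=date(2019, 9, 13)):
    """Term overlap plus a recency boost; no notion of what the user meant."""
    query_terms = set(query.lower().split())
    title_terms = set(record["title"].lower().split())
    term_overlap = len(query_terms & title_terms) / len(query_terms)
    recency_boost = 1 / (1 + today.year - record["year"])  # newer scores higher
    return term_overlap + recency_boost

query = "the example monograph"
for r in sorted(records, key=lambda r: score(r, query), reverse=True):
    print(round(score(r, query), 2), r["type"], "-", r["title"])
# The 2019 review ranks first: it matches every query term and is newest,
# even though the searcher almost certainly wanted the book record.
```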

Our choice of language can also create tricky situations. There have literally been congressional debates over how libraries should describe materials about unlawful immigration. The body of water commonly known as the “Sea of Japan” is certainly not what Korea would have you call it. And maps of a country labeled “Palestine” are part of my library’s “Holy Land collection.”

Even quantifiable information can be called into question, depending on its source. Examples of this include controversies involving purported celebrity heights, supposedly of-age gymnasts who were in fact missing their baby teeth, and people alleged to be the oldest in the world who also just so happen to not have any supporting documentation. On the other hand, should we always trust peer-reviewed sources? Catering to that sort of publication bias is not the best idea, either. Any well can be tainted.

“And it won’t make one bit of difference, if I answer right or wrong…”
—Fiddler on the Roof

Some degree of uncertainty is inevitable, and one shortcut we take in lieu of verifying everything ourselves is to rely on reports from others. Anyone who’s ever read an online review can attest to this, never mind that the review could have been a fake. Especially on the Internet, where things like ballot stuffing are possible, it’s evident that we need to move away from a consensus-based model of truth toward a more empirical one.

Safiya Noble describes the problem in her renowned book Algorithms of Oppression: “What is most popular is not necessarily what is most true.” Unfortunately, Google has no way, when you think about it, of making observations and determining validity. It can only calculate what’s popular, and also, based on your viewing history, which links you’re likely to click on. That may be good enough for relevance, but for people seeking substantiated information, we need to look for genuine impartiality elsewhere.

Consider how, aside from contributing to multiple epidemics of preventable diseases, Andrew Wakefield’s article suggesting a link between vaccines and autism has had a tremendous citation impact. Newt Gingrich, likewise, once implied that crime was going up, not based on any statistical evidence, but only because people believed it was increasing. This is a pretty loony way to view the world, and it doesn’t help that we have a multitude of enterprises actively trying to manufacture such distortions.

This may come as a shock to some of you, but there are individuals out there spreading information whose motives are somewhat ulterior to the betterment of society. For example, my name is listed in our online directory as a contact person for questions about the website. As a result, I have received, over the years, hundreds of e-mails asking me to add a link to a page from one of our resource guides.

A few messages have purportedly come from other educators, including one which went so far as to make a website for their fake school and claim that, “Two girls in my class found your page […] They wanted to share another page that they really loved.” Pages such as these might nominally contain some pertinent academic information, but they’re on average of dubious value to our user base and often contain information that’s duplicated elsewhere. One site, for example, featured a text dump of an old edition of the CIA World Factbook.

People don’t set up these kinds of sites out of the goodness of their hearts. It’s a common SEO tactic to create such a page, boost its PageRank by advertising it on link farms and, ideally, reputable sources such as library sites, and then fill it with ads so that it can be monetized. Not that commercial scholarly publishers aren’t also in it for the money, ultimately; we subsidize their parasitic business model in many cases quite willingly, condoning a system that shares knowledge only to maximize profit.
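For the curious, here’s a toy version of the underlying mechanics: a PageRank-style power iteration over an invented link graph, showing how manufactured inbound links float a page to the top. (The graph, damping factor, and iteration count are all made up for illustration; Google’s actual ranking is vastly more involved.)

```python
# A toy PageRank power iteration over a fabricated link graph, illustrating
# how a link farm inflates a target page's score.

links = {
    "library.edu": ["target.com"],  # the coveted "reputable" inbound link
    "farm1.biz":   ["target.com"],
    "farm2.biz":   ["target.com"],
    "farm3.biz":   ["target.com"],
    "target.com":  ["farm1.biz"],   # farm and target cross-link
    "honest.org":  [],              # a page nobody bothers to link to
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages  # dangling pages spread rank evenly
            for out in targets:
                new[out] += damping * rank[page] / len(targets)
        rank = new
    return rank

for page, r in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page:12s} {r:.3f}")
# target.com floats to the top on the strength of manufactured inbound links.
```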

“The best minds of my generation are thinking about how to make people click ads. That sucks.”
—Jeff Hammerbacher

The Amazon.com website has long been compared to library search tools, with some chagrin, having been identified as an easier way for even library users to locate what they’re after than relying on catalog searching. Some things are certainly made super easy at this site. Amazon, after all, owns the patent on 1-click ordering. It is similarly a quick process to enroll in a trial of Amazon Prime (so you can have your shampoo delivered overnight, without having to worry about the ridiculous human and environmental costs involved in such a transaction).

By default, new trials roll into a regular subscription costing $119 per year. To stop your trial from doing this, you must navigate and click through four different nag screens before finally obtaining confirmation that your trial is set to expire rather than automatically become a paid membership. The relative difficulty of doing this, variations of which I’m sure we’ve all dealt with, is obviously intentional.

Another famous example is an old recording of a half-hour phone call someone made to America Online, which at the time paid its customer retention representatives bonuses for talking people out of closing their accounts. Companies use similar tricks with sales pricing, gamification, deceptive packaging, and other stuff that’s now so commonplace we don’t really notice it anymore, much like the rampant sexism of yesteryear through today.

As smooth as technology can make certain processes, when applied by a capitalist enterprise, it’s not exactly used evenly and consistently to make everything so effortless. Anyone who’s dealt with e-book DRM headaches, or one of those titles still out there licensed with simultaneous-user limits, knows there are now contrived impediments to our doing our job of facilitating the dissemination of ideas, not because of limitations like bandwidth constraints, but simply because content providers are trying to make money.

Happening in parallel with the manipulation of transactional customers is the growing and concerning use of mass surveillance, such as facial and voice recognition technology, to erode our privacy rights. Just as “cloud computing” is actually just a server in someone else’s basement, it’s coming out that the supposedly artificial-intelligence agents processing many commercial voice recordings may be little more than a Mechanical Turk, with humans listening in on people talking to Siri, Alexa, Cortana, Google Assistant, Facebook Messenger, and Nest.

This is something to keep in mind as voice commands make an entrance into the library marketplace. Will we put up with all of these limitations? After all, nobody ever went broke underestimating the intelligence of the American people, and it seems impossible to incentivize a capitalistic enterprise to stop doing what makes its owners the most money, at least within our current economy. It falls to us as libraries, then, to present unfettered search results. This is a challenge, as we increasingly employ automated commercial products built upon defective foundations.

§ Bias, in my discovery layer?

“If machine-learning models simply replicate the world as it is now, we won’t move toward a more just society.”
—Meredith Broussard

I’ve been a librarian long enough to learn that blind faith in a vendor is rarely warranted, and this is certainly the case when it comes to the proprietary code used by discovery products to decide how results are ranked. The recent book Masked by Trust: Bias in Library Discovery by Matthew Reidsma documents the failings of several major integrated library systems, including their reliance on outdated and in some cases just plain bigoted sources of information.

The problem is that, even with library service platforms, relevancy ranking occasionally presents biased results because the program is learning (e.g., by examining the co-occurrence frequency of certain words) from prejudiced texts. Although not necessarily malicious, because again, such a piece of software is doing exactly what it was written to do, whatever the root cause, it doesn’t exactly inspire confidence in these products.
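A contrived example makes the mechanism plain. Count word co-occurrences in a tiny, skewed “corpus” (invented here for illustration), and the counts dutifully encode its prejudices:

```python
# A contrived sketch of how co-occurrence statistics absorb bias from a
# corpus. The sentences are invented; real systems train on millions of
# documents, with the same basic effect at scale.

from collections import Counter
from itertools import combinations

corpus = [
    "the doctor said he would review the chart",
    "the nurse said she would update the chart",
    "he thanked the doctor for the diagnosis",
    "she thanked the nurse for the care",
]

# Count how often each pair of words appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        pair_counts[frozenset((a, b))] += 1

for pair in [("doctor", "he"), ("doctor", "she"),
             ("nurse", "she"), ("nurse", "he")]:
    print(pair, pair_counts[frozenset(pair)])
# "doctor" co-occurs only with "he" and "nurse" only with "she" here, so any
# ranking or embedding model trained on this text reproduces the skew.
```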

Discovery systems rely upon applications to tackle a workload involving renormalization, deduplication, FRBRization, and lots of other behind-the-scenes calculations (plus perhaps even some degree of descriptive cataloging) that is simply not possible, at the scale of the number of records involved, to accomplish with human labor. Unleashing robots on human discourse as a way for them to learn our language, however, is how you end up with a slew of racist AI agents, including a psychopathic bot that had ingested the comments from a macabre-themed subreddit.
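To give a sense of the scale problem, here’s a hypothetical sketch of just one of those tasks, deduplication, done the crude way: collapse records onto a normalized match key. (The records and normalization rules are invented; production systems use far more sophisticated matching.)

```python
# A crude, hypothetical deduplication pass: normalize title, author, and year
# into a match key and collapse records that share one.

import re

records = [
    {"title": "The Example Monograph",  "author": "Smith, Jane", "year": "1998"},
    {"title": "Example monograph, The", "author": "SMITH, J.",   "year": "1998"},
    {"title": "A Different Work",       "author": "Doe, John",   "year": "2005"},
]

def match_key(record):
    """Normalize title, author surname, and year into a dedup key."""
    title = record["title"].lower()
    title = re.sub(r",\s*(the|a|an)$", "", title)  # drop inverted article
    title = re.sub(r"^(the|a|an)\s+", "", title)   # drop leading article
    title = re.sub(r"[^a-z0-9 ]", "", title).strip()
    surname = record["author"].split(",")[0].strip().lower()
    return (title, surname, record["year"])

deduped = {match_key(r): r for r in records}
print(len(records), "records collapse to", len(deduped))  # 3 records -> 2
```

Multiply the edge cases in those three records by tens of millions, and it’s clear why this work gets surrendered to machines.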

Why our vendors make certain choices is not always entirely clear. For all of their talk about improving discoverability and “glancability,” for example, Ex Libris recently and inexplicably reverted the past decade or so’s worth of progress towards a truly integrated discovery layer. After signing content licenses with newspaper publishers (or at least I assume there is a contractual reason for this change, since they are customarily tight-lipped about why such development decisions are made), they now provide those records only if clients first break Primo’s core functionality of offering a single search box, removing all newspaper results from the main index in favor of a segregated interface wherein there are “newspaper collections that are only available to Newspaper Search.”

Anyway, surrendering to machines the work that used to be completed by human catalogers, although arguably necessary, appears in some cases to be done rather haphazardly. Giving such power to computers can be extremely dangerous. I’m not saying the singularity is near, but it’s not outside the realm of possibility that a powerful computer will someday misinterpret a well-intentioned command (“eliminate human suffering”) in such a pedantic way that the desired outcome is achieved through an undesired means (“kill all humans”).

“This is not the sort of thing that is one engineer’s fault. When something goes this drastically wrong there have been many poor decisions made over a long period of time. A single human error is never the root cause.”
—Tom Scott

Lest you think this sounds like something out of Star Trek, concerns about Roko’s basilisk, the Daleks, and grey goo aside, the god complex exhibited by software engineers today is an all too real phenomenon. The existence of algorithmic bias, and by that I mean the demonstrated tendency for software agents to reiterate the stereotypical patterns of the past—whether it be in the tools used for predictive policing, providing healthcare, or assisting library research—is not the kind of thing that happens because of a single missing semicolon in your programming code. It’s the result of a massive and systemic failure to recognize the dangers of automating what’s not ready to be automated, at least without the proper failsafes.

Consider how the naming and marketing of Tesla’s “autopilot” feature may be partially responsible for at least one death, that of a driver who felt confident enough to watch a movie rather than monitor what the car was doing. Anyone with allergies knows the frustration of not being able to override a dysfunctional system. The crews of the doomed 737 MAX flights likely felt similarly, as their anti-stall safety feature crashed the planes into the ground.

Some people, on the other hand, don’t even realize that websites are presenting them with customized content, based on their meticulously constructed profiles and tracking histories, or the degree to which results may be filtered. Google deliberately shows misleading ads and censored listings, is subject to takedowns and “right to be forgotten” laws, and has flat-out delisted other sites with little due process.

We used to worry about the “Digital Dark Ages” resulting from non-interoperable file formats or government censorship. Those concerns will always be legitimate. Moreover, we are now inundated with technology which, while necessary to manage the breadth of information being produced, is fast moving beyond our ability to keep up. As this technology plays a greater role in our lives, and while we play catch-up with the ramifications, it has also in a way failed upwards, becoming more prevalent while doing a worse job. So what are we humans to do?

“I honestly think you ought to calm down, take a stress pill, and think things over.”
—HAL 9000

Looping back to my admittedly roundabout introduction, I included that bit about the monkeys to make it clear that I’m about as pro-tech as they come, especially in the library world, where our unwillingness to accept reasonable risks in such a change-averse environment often stifles innovation when initiating new services. Yet recent developments have caused me to stop and wonder whether we are now embracing technological innovations which no longer actually improve the research process.

We live in an era of influencers and other shills, clickbait, push polls, malware, troll farms, armies of bot followers, Google bombs, and so many other things perpetuating misinformation and bigotry that it can seem overwhelming enough to accept it all as the new normal. Times are tough. This makes the imperative to admit to our own logical fallacies and inconsistent beliefs, along with the ability to identify propaganda when we see it, all the more important.

Much as the whole “fake news” thing exploded a few years ago, the topic of our reliance on potentially bad data sources has garnered significant media coverage recently, at least within my own little echo chamber, to the point where it’s made finishing this essay an almost Sisyphean task. As something of a techno-triumphalist myself, I’ve found it a hard concept to come around to. Of course, admitting you’re the beneficiary of a rigged system is also a little uncomfortable. There’s sadly something to that saying about how, “When you’re accustomed to privilege, equality feels like oppression.”

As easy as it is to resort to scapegoating and self-victimization while disregarding our reliance on others, we shouldn’t feel the need to defend practices that are inherently wrong. There’s got to be a more nuanced and pragmatic balance between the unrealistic idealism of “everything should be free and verified as truthful” and the cynical fatalism of “we have to work with all of this crap because it’s how capitalism functions” than blithely perpetuating the status quo by abdicating our responsibility to provide unbiased sources of information.

Or maybe I’m just getting old. With respect to that proverb about how, “If you want to go fast, go alone. If you want to go far, go together,” I’ve found it also true that if you want to go farther, such as with pacing a longer running distance, you have to go slower. And so while I still accept that no new system is without flaws, my ambitions to “move fast and break things” (to quote someone who’s certainly broken plenty of them) are waning, in favor of a desire to take more time for quality control, not to mention self-care.

§ The Last Upgrade

“An algorithm’s only as perfect as the person designing it.”
—Arkady Martine, A Memory Called Empire

Well-intentioned efforts may at times backfire, perhaps even to the point of actually making matters worse. The increased availability of Narcan could be a contributing cause of the corresponding rise in drug overdose deaths, for instance, while it’s also of note that several trees in one of the biosphere projects failed to grow properly because, lacking stress from any wind, they never developed strong foundations.

As librarians, we need to teach people how to properly recognize and cope with the massive amount of propaganda in their lives, but do so in a way that emphasizes the need to not just tolerate it, and definitely not just accept it, but rather challenge the reasons for its continued existence. Anything less is inconsistent with our mission.

It’s appealing to hide out in our dusty stacks, fetishize the printed word, and write off the advances surrounding us as imperfect and therefore automatically inferior. Although we have an innate tendency to derive a sense of accomplishment from doing something that’s difficult, this shouldn’t entail pursuing tasks which are needlessly cumbersome. Making excuses for hard-to-use interfaces, along the lines of rationalizing how bad-tasting medicine must therefore be more effective, ignores the reality that they’re not necessarily the best path towards efficiently obtaining quality results.

It used to be said that computers clearly cannot think for themselves because they can’t beat a human at chess. Those goalposts have been moved many times since, and machines keep getting better at mimicking a range of human behaviors. For now, using computers in tandem with superior human abilities, when applicable, à la “centaur chess,” appears to be the preferred approach, as the resurgence of human recommendation agents illustrates.

Greater machine intelligence is certainly needed when it comes to vetting objectionable content, especially given the lack of editorial control on the web. In just a few generations, we’ve gone from only white landowners being allowed to vote, to standards and practices prohibiting the showing of married couples’ beds, to people having the freedom to live-stream mass shootings, rapes, suicides, and the like. While that expansion of freedom is largely a good thing, reliably filtering out broadcasts of the latter sort currently requires humans to review such content, which is not the best job for them to have.

“The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.”
—Warren Bennis

We will eventually meet our match. Strong AI is coming. I think there’s some conceit involved in admitting that robots can surpass our physical abilities and computational speeds while somehow cordoning off our higher mental faculties as impossible for a computer to ever surpass. Call it phrenology as applied to silicon. At some point, all of the human error rates, ego defense mechanisms, faulty deductive reasoning, and other pitfalls we bring to the process of seeking the unfettered truth and perceiving objective reality will become a larger impediment to accurately representing the facts than are the abilities of our fledgling artificial counterparts, false matches and all.

Machines learning from humans today will inherit our ugly histories. We have in a way replaced their veil of ignorance with more of a “garbage in, garbage out” sort of situation. Can this be changed, so that they learn to identify and filter out such imperfections? Consider how graphics software can now sharpen images and even remove unwanted elements. Could an analogous mechanism be used to someday eliminate bias? I don’t mean merely airbrushing the past, but halting the perpetuation of prejudice. Perhaps that’s a hubristic techbro desire of my own, yet given the harsh realities of human nature, how else would we achieve a world with less injustice?

This is of course a prediction, with a fair amount of hand-waving to boot, so it’s obviously an open question whether computers will ever be able to really think for themselves. With enough processing power and the proper programming, a computer that can replicate all of our creativity, wisdom, empathy, spirituality, ethics, and then some may yet emerge. We should at least be prepared for the day when that hackneyed phrase about how Google will bring you a billion hits but only a librarian can deliver the right one is no longer remotely true, even for the most skilled librarians.

And what an interesting time that will be. Until then, we have to realize that we’re not there yet, and that there are significant dangers in entrusting our tools, while they’re still being developed, with too much power. There’s no reason, though, not to be optimistic, as long as we’re working to ensure the process goes smoothly. In the words of Sarah Connor, “The luxury of hope was given to me by the Terminator, because if a machine can learn the value of human life, maybe we can too.”
