The Uncanny Valley of Search

“Never attribute to malice that which is adequately explained by stupidity.”
—Robert J. Hanlon

There’s a point at which special effects, robotics, and the like advance beyond obviously clumsy mimicry of, say, the appearance and mannerisms of a live human, and become a good enough approximation that the overall effect is, oddly, all the more off-putting: the artificial version is now close to reality, yet still not quite on the mark. This is called the “uncanny valley.” The advent of hyper-realistic graphics brings us out of that dip, to the point of accepting, if not failing to notice, that something is still slightly off about the CGI characters woven into Rogue One and The Irishman.

Artificial intelligence, or what is at least called that, remains amazingly stupid in many ways. Software has been written that can beat any human alive at board games such as chess and Go, yet I can search a shopping site for an exercise kettlebell only to be offered kettles for my kitchen amidst the results:
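How might a mistake like that happen? One plausible culprit is naive keyword matching: if a ranking function treats “kettle” and “kettlebell” as overlapping tokens, kitchen kettles score as well as exercise equipment. The sketch below is purely illustrative; the product names and scoring function are invented, not how any real shopping site works:

```python
# Toy illustration of naive substring-based keyword matching.
# All product names and the scoring heuristic are made up.

products = [
    "20 kg cast-iron kettlebell",
    "stainless steel tea kettle",
    "1.7 L electric kettle",
    "adjustable dumbbell set",
]

def score(query: str, title: str) -> int:
    # Count query tokens that overlap (as substrings) with any title token.
    # "kettle" is a prefix of "kettlebell", so tea kettles match the
    # query "kettlebell" just as well as actual kettlebells do.
    return sum(
        any(q in t or t in q for t in title.lower().split())
        for q in query.lower().split()
    )

# Rank products for the query "kettlebell" — the kettles tie with
# the kettlebell, so they surface right alongside it.
results = sorted(products, key=lambda p: score("kettlebell", p), reverse=True)
```

A real engine layers stemming, synonyms, and behavioral signals on top of matching like this, but the failure mode is the same: lexical similarity standing in for intent.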

And look at how Google Translate, which has apparently been trained a bit too heavily on source texts steeped in sexist gender norms, renders a string of non-gendered pronouns into English:

Whoever programmed Google Translate to produce these results cut some corners, I’d say. They could have left the default pronoun translation as “he/she” but instead jumped the gun in entrusting the AI to make a fair choice. And so fundamental mistakes keep being made by what are in many regards very advanced computing processes, search-ranking algorithms in particular, and those mistakes stick out like a sore thumb.

Thus the uncanny valley of search we have today. It remains an open question whether we’ll ever climb out of it. Despite recent advances, relying on humans to interpret and filter results seems necessary for the foreseeable future. This is just as well, since computers and humans have, in some ways, complementary strengths and blind spots.

Another big impediment is the existence of forces that are not at all motivated to provide pertinent or high-quality results. It’s not just blatant scam sites, either. The fundamental purpose of a platform such as YouTube, as far as the parties who drive its revenue are concerned, is to maximize engagement, not the spread of truth. This isn’t a coding problem. For someone dedicated to promoting a more objective perception of reality, it certainly feels like a David and Goliath situation.

It’s not just technology that is revealing its limitations; a related flaw in human nature is on display as well. Terry Goodkind explains it in Wizard’s First Rule:

People are stupid; given proper motivation, almost anyone will believe almost anything. Because people are stupid, they will believe a lie because they want to believe it’s true, or because they are afraid it might be true. People’s heads are full of knowledge, facts, and beliefs, and most of it is false, yet they think it all true. People are stupid; they can only rarely tell the difference between a lie and the truth, and yet they are confident they can, and so are all the easier to fool.

The blissful ignorance that comes with residing in an echo chamber is a vicious circle that modern commercial Internet platforms seem to promote. False narratives have always been around. The trial of Galileo for asserting that the Earth goes around the Sun should be the kind of story left in distant history, until you realize how we’ve somehow looped back to the firing of a teacher for assigning an essay that acknowledged racism exists, or the staggering fact that most Republicans think Trump won the 2020 election.

I’d probably be a lot happier if I didn’t think global climate change was happening. However, I also have an overriding desire for humans not to go extinct. As much of a challenge as it was to find the right results back in the days of telnet catalogs and command-line databases, today the larger difficulty appears to be getting people to accept hard facts rather than willfully pull the wool over their own eyes, a practice which contemporary search engines, as they’ve been engineered, are more than happy to accommodate.



John Hubbard

Librarian at Washington University in St. Louis