The Perpetuation of Stereotypes in Google Autocomplete: Should Google Be Held Accountable?

Over the summer, Google has received renewed flak for the words its Autocomplete feature fills into users’ search boxes. While Autocomplete’s predictions are derived from the volume of searches conducted with that exact language, as well as your own relevant search history if you are signed in, they are perpetuating some disturbing stereotypes.
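
To illustrate the volume-driven part of that mechanism, here is a toy sketch in Python with made-up queries and counts. Google’s real system is far more sophisticated (personalization, freshness, language models), so treat this only as an illustration of how ranking purely by query volume surfaces whatever phrasing people search most often.

```python
from collections import Counter

# Toy illustration only: rank suggestions for a prefix by raw query volume.
# The query log and counts below are invented; Google's actual system also
# factors in personalization, freshness, language, and location.
query_log = Counter({
    "why is the sky blue": 120,
    "why is the ocean salty": 95,
    "why is my cat sneezing": 40,
})

def suggest(prefix, log, k=5):
    """Return up to k logged queries starting with the prefix, most-searched first."""
    matches = Counter({q: n for q, n in log.items() if q.startswith(prefix.lower())})
    return [q for q, _ in matches.most_common(k)]

print(suggest("why is", query_log))
# -> ['why is the sky blue', 'why is the ocean salty', 'why is my cat sneezing']
```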

In May, Google lost its first case in Germany to a plaintiff who sued the search engine for appending the words “Scientology” and “fraud” to her name. Germany’s Federal Court of Justice ruled that if Google detects libelous terms being added to a name, it must block those terms from showing up. Another German suit, still being heard, concerns former President Christian Wulff’s wife, Bettina Wulff, whose name brings up Autocomplete predictions referencing escort services and the red-light district. Google also lost similar cases in France and Italy a few years back (read more here).

Every time Google loses another case, the topic receives renewed attention, prompting others to keep examining exactly which prejudices the Google Autocomplete feature perpetuates and how language shapes what gets filled into the box. While the cases highlighted here all concern specific individuals, there are also countless instances of stereotyping across a wide variety of searches related to social groups.

Lancaster University recently conducted a study of about 2,600 Google search questions and found that the results strongly reinforce negative stereotypes, particularly about black people, gay people, and men in general. BuzzFeed did a much less scientific study on a smaller scale and found that searches with questions and statements about women returned mostly negative stereotypes as well.

Given the amount of press lately around gay marriage, I thought it would be timely to try my own experiment and see what is revealed by searches related to the LGBT population. I accessed Google while not signed in to ensure that my personal search history was not influencing the terms being predicted for my search.

I started out basic…

[Screenshot: autocomplete]

I was impressed with this one, but not that surprised given the amount of mainstream news coverage of gay marriage lately. Bravo, searchers who want real facts!

I continued on to more general queries…

[Screenshot: autocomplete-1]

Alright, clearly Google still knows I’m in Illinois despite my not being signed in. Overall, this isn’t that bad of a prediction list. Two red-flag Autocomplete predictions came up for this one: “gay test” and “gay jokes.” I clicked through to find out what “gay test” was all about (I know I am making the problem of predicted stereotypes worse by doing this), and it is a page of results with quizzes that score “how gay you are” based on questions about stereotypically gay behaviors and interests. Gay of Thrones is a Funny or Die video series recapping Game of Thrones that makes the rounds after each episode airs, so I’m less surprised to see that one.

I then started to ask some questions in the query box and the stereotypes exploded…

[Screenshot: autocomplete-2]

Aside from “why are gay rights important,” the rest of these predictions reinforce some interesting (and contradictory) stereotypes about gay people.

Even straight people aren’t immune…

[Screenshot: autocomplete-3]

Obviously, these results are troubling (and these aren’t half as bad as some related to race and ethnicity), but ultimately, who is at fault?

If Google Autocomplete is filling in words strictly based on the volume of searches using that language, should Google be responsible for monitoring when those searches are offensive or politically incorrect? Some European courts have ruled that the answer is yes, but I am not convinced a court in the United States would say the same. Deciding what is and is not appropriate would be a large undertaking; lines would have to be drawn by the government, which could very easily be interpreted as government censorship.

But still, should hateful stereotypes be automatically populated into the search box, potentially confirming the very prejudice the searcher was inquiring about? Do search engines have a public responsibility to educate, or merely to capture the real-time curiosities of searchers?

I think it’s in Google’s best interest to preemptively clean up its Autocomplete predictions. Google has already outlined categories of people protected from hateful and violent concepts in AdWords (race or ethnic origin, color, national origin, religion, disability, sex, age, veteran status, sexual orientation, or gender identity); these seem like a logical place to start. Doing so would show users that Google promotes social responsibility, while also allowing Google to control the changes it makes before a court forces more drastic changes later.
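
As a purely hypothetical sketch of what such a cleanup could look like mechanically, the snippet below filters a list of predictions against placeholder word lists built around those protected categories. The term lists and pairing rule are my own invented illustration, not anything Google has published; a real system would need carefully reviewed lists and genuine context awareness.

```python
# Hypothetical post-filter: suppress predictions that pair a protected-group
# term with a blocked term before they are shown. Both lists are placeholders
# for illustration only.
PROTECTED_TERMS = {"gay", "black", "women", "muslim"}   # illustrative only
BLOCKED_TERMS = {"jokes", "test", "fraud"}              # illustrative only

def filter_predictions(predictions):
    """Drop predictions that combine a protected-group term with a blocked term."""
    kept = []
    for prediction in predictions:
        words = set(prediction.lower().split())
        if words & PROTECTED_TERMS and words & BLOCKED_TERMS:
            continue  # suppress this prediction
        kept.append(prediction)
    return kept

print(filter_predictions(["gay rights", "gay jokes", "gay marriage timeline"]))
# -> ['gay rights', 'gay marriage timeline']
```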

What do you think?

[Screenshot: autocomplete-4]