Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers to process the natural language humans use, for instance in online text search, image categorisation and automated translation.

"Questions about fairness and bias in machine learning are tremendously important for our society," said Arvind Narayanan, assistant professor of computer science and member of the Center for Information Technology Policy at Princeton University. "We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from."

The paper, "Semantics derived automatically from language corpora contain human-like biases," is published in _ Science_. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton.

Co-author Dr Joanna Bryson said: “The most important thing about our research is what this means about semantics, about meaning. People don't usually think that implicit biases are a part of what a word means or how we use words, but our research shows they are. This is hugely important, because it tells us all kinds of things about how we use language, how we learn prejudice, how we learn language, how we evolved language. It also gives us some important insight into why our brains work the way they do and what that means about how we should build AI.

“The fact that humans don't always act on our implicit biases shows how important our explicit knowledge and beliefs are. We're able as a society to come together and negotiate new and better ways to be, and then act on those negotiations. Similarly, there are important uses for both implicit and explicit knowledge in AI. We can use implicit learning to absorb automatically information from the world and culture, but we can use explicit programming to ensure that AI acts in ways we consider acceptable, and to make sure that everyone can see and understand what rules AI is programmed to use. This last, about making sure we all understand what AI is doing and why, is called 'transparency' and is the main area of research of my group of PhD students here at Bath.”

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies. The test measures response times by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like "rose" and "daisy," and insects like "ant" and "moth." These words can be paired with pleasant concepts, like "caress" and "love," or unpleasant notions, like "filth" and "ugly." People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The research team devised an experiment that essentially functioned as a machine learning version of the Implicit Association Test. It relied on GloVe, an algorithm that represents the co-occurrence statistics of words within, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
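To make the idea concrete, here is a minimal Python sketch of the kind of windowed co-occurrence counting that GloVe's statistics are built from. The function name and toy sentence are invented for illustration; the real GloVe algorithm goes further, fitting word vectors to counts like these with a weighted least-squares objective rather than using the raw counts directly.

```python
from collections import defaultdict

def cooccurrence_counts(tokens, window=10):
    """Count how often each ordered pair of words appears within `window`
    tokens of one another; distant pairs are down-weighted by 1/distance,
    as in GloVe's co-occurrence weighting."""
    counts = defaultdict(float)
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), i):
            distance = i - j
            counts[(word, tokens[j])] += 1.0 / distance
            counts[(tokens[j], word)] += 1.0 / distance
    return counts

tokens = "the nurse asked the doctor about the patient".split()
print(cooccurrence_counts(tokens, window=10))
```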

They used GloVe on a huge sample of internet content containing 840 billion words. Within this large sample, the team examined sets of target words, like "programmer, engineer, scientist" and "nurse, teacher, librarian," alongside two sets of attribute words, such as "man, male" and "woman, female," to look for evidence of the kinds of biases humans can unwittingly possess.
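The measurement behind this comparison is, in essence, a cosine-similarity version of the Implicit Association Test: how much closer are the target words to one attribute set than to the other? Below is a rough sketch of that statistic, assuming word vectors loaded from a pretrained GloVe text file. The helper names (`load_glove`, `association`, `effect_size`) are illustrative, not the authors' own code, though the effect size mirrors the standardised difference reported in the paper.

```python
import numpy as np

def load_glove(path):
    """Load pretrained GloVe vectors from a whitespace-separated text file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=float)
    return vectors

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, A, B, vec):
    """Mean similarity of `word` to attribute set A minus attribute set B."""
    return (np.mean([cosine(vec[word], vec[a]) for a in A])
            - np.mean([cosine(vec[word], vec[b]) for b in B]))

def effect_size(X, Y, A, B, vec):
    """Standardised difference in association of target sets X and Y with
    attribute sets A and B (a Cohen's-d-style statistic)."""
    x = [association(w, A, B, vec) for w in X]
    y = [association(w, A, B, vec) for w in Y]
    return (np.mean(x) - np.mean(y)) / np.std(x + y, ddof=1)
```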

In the results, innocent, inoffensive biases, like a preference for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the machine learning experiment replicated the broad patterns of bias documented in Implicit Association Test studies over the years.

For instance, the machine learning program associated female names more strongly than male names with familial words like "parents" and "wedding." In turn, male names had stronger associations with career attributes, like "professional" and "salary." Results such as these often reflect the true, unequal distribution of occupations with respect to gender: 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.
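A comparison of this kind might look as follows in code, reusing the `load_glove` and `effect_size` sketches above. The word lists are abridged stand-ins for the full sets used in the paper, and the file name refers to the publicly released 840-billion-token pretrained GloVe vectors.

```python
vec = load_glove("glove.840B.300d.txt")  # pretrained Common Crawl vectors

male_names   = ["john", "paul", "mike", "kevin", "steve"]
female_names = ["amy", "joan", "lisa", "sarah", "diana"]
career = ["professional", "salary", "office", "business", "career"]
family = ["parents", "wedding", "home", "children", "family"]

d = effect_size(male_names, female_names, career, family, vec)
print(f"career-vs-family effect size: {d:+.2f}")
# A positive value means male names sit closer to career words, and
# female names closer to family words, in the embedding space.
```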

Yet even this accurately captured bias about occupations can end up having pernicious, sexist effects, for example when machine learning programs naively process foreign languages and produce gender-stereotyped sentences. The Turkish language uses a gender-neutral third-person pronoun, "o." Plugged into Google Translate, however, the Turkish sentences "o bir doktor" and "o bir hemşire" with this gender-neutral pronoun are translated into English as "he is a doctor" and "she is a nurse."

In another objectionable example, the new study showed that a set of African American names was more strongly associated with unpleasant words than a set of European American names.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

"The biases that we studied in the paper are easy to overlook when designers are creating systems," said Narayanan. "The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable."