An Analysis of Memo Akten’s @WordOfMathBias
In 2016, as part of Google’s Artists & Machine Intelligence program, Memo Akten built two Twitter bots [5]. The first bot, @WordOfMath, picks random words from a pre-trained Word2Vec model [4] and explores their relationships to one another; the second bot, @WordOfMathBias, does the same thing with a tweak [6] [7]. The second bot focuses on uncovering social biases ingrained in the model’s training data, particularly those related to gender [1] [2]. It generates random word analogies featuring the words “man” and “woman”, then tests each analogy in both directions, seeking to expose any biases that may have seeped into the model’s understanding of these terms.
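The mechanic both bots share, word arithmetic in an embedding space, can be sketched in a few lines. The snippet below is a minimal reconstruction rather than Akten’s actual code: it assumes the gensim library and the publicly available Google News Word2Vec vectors as a stand-in for whatever model the bots used, samples three random words, and prints the nearest neighbours of their combination.

```python
import random
import gensim.downloader as api

# Assumption: the Google News vectors stand in for the model Akten used.
model = api.load("word2vec-google-news-300")

# Sample from the most frequent words so the output stays readable.
vocab = model.index_to_key[:20000]
a, b, c = random.sample(vocab, 3)

# "a - b + c": the kind of random word arithmetic the bots tweet.
result = model.most_similar(positive=[a, c], negative=[b], topn=5)
print(f"{a} - {b} + {c} =")
for word, score in result:
    print(f"  {word}  ({score:.2f})")
```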
In the second half of the article Akten wrote about these works, before going deeper into the bot’s logic, he highlights the difference between learnt bias and human bias [3]. Around the time he did this research, two papers about the societal biases learnt by such models were made public [9] [10]. One motivating example from that work is the analogy query “doctor - father + mother”: the model’s answer, surprisingly, is “nurse”. This was read as a sign that the model has picked up old-fashioned ideas from its training data, namely that men are doctors and women are nurses.
“Unfortunately however, this is not entirely accurate,” says Akten [3]. When the model is asked a question like “king - man + woman”, the resulting vector is usually closest not to “queen” but to one of the words in the question itself (such as “king” or “man”). Standard implementations therefore discard the query words and report only the remaining nearest neighbours. For “doctor - father + mother”, the model does not simply say “nurse”: the closest words are “doctor”, “nurse”, “doctors”, “physician”, and “dentist”. Because the implementation filters out the query word “doctor”, the second-closest word, “nurse”, is what comes out as the result. Akten continues: “It seems the human bias in interpreting results might be stronger than any bias that might be embedded in the experiment or model.” [3]
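The pitfall Akten points out is easy to reproduce. Below is a minimal sketch, again assuming gensim and the Google News vectors rather than Akten’s own setup: it compares the raw nearest neighbours of the analogy vector with the output of the standard most_similar call, which silently filters out the query words.

```python
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # assumed stand-in model

# Build the analogy vector by hand: doctor - father + mother.
vec = model["doctor"] - model["father"] + model["mother"]

# Unfiltered: rank every word in the vocabulary by cosine similarity.
# The query word "doctor" itself typically comes out on top.
print("raw nearest neighbours:")
print(model.similar_by_vector(vec, topn=5))

# Filtered: most_similar excludes the query words from the ranking,
# which is why "nurse" appears to be "the answer".
# (It also averages unit-normalised inputs, so scores differ slightly.)
print("with query words filtered out:")
print(model.most_similar(positive=["doctor", "mother"],
                         negative=["father"], topn=5))
```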
To probe the biases in the training data without falling into the pitfall described above, Akten created the second bot, @WordOfMathBias. It explores societal biases, particularly gender bias, by generating random word analogies featuring “man” and “woman” and testing both the masculine and feminine directions to identify associations learned from the training data. For each analogy the bot tweets the top five words that come up, and in doing so it reveals that societal bias is indeed embedded in the training data. In one analysis, the second-top result in the “woman” direction is “nurse”, while in the “man” direction it is “physician”, suggesting that the training data contains subtle but telling associations between professions and gender.
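A rough sketch of this bias probe might look like the following. It is a reconstruction under the same assumptions as above (gensim, Google News vectors), not Akten’s implementation: for a random word it runs the analogy in both directions, towards “woman” and towards “man”, and reports the top five results for each.

```python
import random
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # assumed stand-in model
vocab = model.index_to_key[:20000]            # sample from frequent words

def both_directions(word, topn=5):
    """Run `word - man + woman` and `word - woman + man`, as the bot does."""
    towards_woman = model.most_similar(positive=[word, "woman"],
                                       negative=["man"], topn=topn)
    towards_man = model.most_similar(positive=[word, "man"],
                                     negative=["woman"], topn=topn)
    return towards_woman, towards_man

word = random.choice(vocab)
woman_side, man_side = both_directions(word)
print(f"{word} - man + woman =", [w for w, _ in woman_side])
print(f"{word} - woman + man =", [w for w, _ in man_side])
# The article reports results such as "nurse" ranking high in the "woman"
# direction while "physician" ranks high in the "man" direction.
```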
Akten wraps up by suggesting that we use machine learning models and the latent spaces they learn to interrogate our own perceptions and biases, shaping uniform distributions into structured noise, akin to parametric Rorschach inkblot generators for various domains. He concludes: “And then we can use the produced artifacts as starting points, as seeds that flower in our imagination, that we see things in, project meaning onto, create stories and invent narratives around, as we have done for millions of years.” [3]
References
[1] Akten, M. (2016). @WordOfMath (2016).
[2] Akten, M. (2016). @WordOfMathBias (2016).
[3] Akten, M. (2016). AMI Residency Part 1: Exploring (word) space, projecting meaning onto noise, learnt vs human bias.
[4] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space.
[5] ami.withgoogle.com
[6] twitter.com/wordofmath
[7] twitter.com/wordofmathbias
[8] Pennington, J., Socher, R., & Manning, C. (2014). GloVe: Global Vectors for Word Representation.
[9] Bolukbasi, T., Chang, K., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings.
[10] Bolukbasi, T., Chang, K., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Quantifying and Reducing Stereotypes in Word Embeddings.