An Analysis of Memo Akten’s @WordOfMath

In 2016, as part of Google’s Artists & Machine Intelligence program, Memo Akten built two Twitter bots [5]. The first bot, @WordOfMath, picks random words from a pre-trained word2vec model and explores their relationships to each other [6] [1]. Alongside the bots, he also published a three-part article series giving an in-depth explanation of the model’s inner workings, prior research, and his thought process [2]. The series opens a window into the artist’s head, walks the reader through the steps of sound artistic research, and prompts reflection on several concepts surrounding AI and large language models.

The second of the three articles focuses mainly on the bots. Akten writes about word embeddings, the building blocks of large language models. Word embeddings are numerical representations of words, learned from large datasets of text, in which each word is assigned a vector that captures its semantic meaning and its relationships to other words [3] [4]. Embedding models derive these vectors by analyzing patterns in language, which makes them useful for a variety of natural language processing tasks. He then gives this example: “We can write this as the famous word2vec example: king - man + woman = > queen … read as : “man is to king as woman is to ?” and the model returns ‘queen’” [2].
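The analogy above can be sketched with a toy example. The vectors below are hand-crafted, 3-dimensional stand-ins chosen purely for illustration (real word2vec embeddings have hundreds of dimensions learned from text corpora), and the `analogy` helper is a hypothetical simplification of what an embedding library provides:

```python
import numpy as np

# Toy hand-crafted "embeddings"; the dimensions loosely encode
# [royalty, maleness, femaleness]. Real word2vec vectors are learned,
# not designed, and have hundreds of dimensions.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.05, 0.05, 0.05]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve 'b is to a as c is to ?' via the arithmetic a - b + c."""
    target = embeddings[a] - embeddings[b] + embeddings[c]
    # Exclude the query words, then return the nearest remaining word.
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("king", "man", "woman"))  # → queen
```

Even in this tiny space, the arithmetic `king - man + woman` lands closest to `queen`, which is the intuition behind Akten’s example.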

He then turns to the artwork itself, @WordOfMath. This work, a Twitter bot, uses word2vec to place random words in a high-dimensional latent space, applies random arithmetic operations to generate new vector locations, and tweets the words closest to the resulting positions [3] [4]. While the model generates many intriguing outputs that invite philosophical questions, the artist critiques his own work in these words: “Nevertheless, I find these results endlessly fascinating. Not because I think the model has such a strong understanding of the English language, but because it acts as a kind of ‘meaning filter’” [2].
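The bot’s core loop described above can be sketched as follows. This is a hypothetical reconstruction, not Akten’s actual code: a small stand-in vocabulary with random vectors replaces the real pre-trained word2vec model, and the tweet format is invented for illustration:

```python
import random
import numpy as np

rng = np.random.default_rng(42)

# Stand-in vocabulary with random 8-d vectors. @WordOfMath uses a real
# pre-trained word2vec model; this toy setup only mimics the mechanics.
vocab = ["ocean", "clock", "memory", "stone", "fire", "mirror", "bird", "salt"]
embeddings = {w: rng.normal(size=8) for w in vocab}

def nearest(vec, exclude, k=1):
    """Return the k vocabulary words closest to vec by cosine similarity."""
    def cos(w):
        v = embeddings[w]
        return np.dot(v, vec) / (np.linalg.norm(v) * np.linalg.norm(vec))
    candidates = [w for w in vocab if w not in exclude]
    return sorted(candidates, key=cos, reverse=True)[:k]

def make_tweet():
    """Pick two random words, apply a random operation, name the result."""
    a, b = random.sample(vocab, 2)
    op = random.choice(["+", "-"])
    result = embeddings[a] + embeddings[b] if op == "+" else embeddings[a] - embeddings[b]
    (word,) = nearest(result, exclude={a, b})
    return f"{a} {op} {b} = {word}"

print(make_tweet())
```

The “meaning” in the output word is, as Akten notes, not in the arithmetic itself; the nearest-neighbor lookup merely filters random vector positions back into the vocabulary, and the reader supplies the interpretation.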

Akten believes that the output of machine learning models is simply noise, shaped by a filter, onto which humans then project their own meaning, consciously or unconsciously, to give it structure and significance. In this sense, he views latent spaces as a form of Rorschach-style inkblot: random inputs are filtered through a model to produce “structured randomness” that humans then interpret and assign meaning to, based on their own experiences, beliefs, and knowledge.

Concluding our exploration of Akten’s artistic journey into the world of artificial intelligence and large language models, it is clear that his work challenges us to reconsider our relationship with technology and data-processing systems. Through @WordOfMath and his other Twitter bots, he highlights the inherent ambiguity and subjectivity of human interpretation, in which we impose meaning onto seemingly random outputs generated by machine learning models. Akten’s critique of his own work is a poignant reminder that these models are tools, shaped by our own biases and assumptions, rather than objective arbiters of truth. As we continue to navigate the complexities of an AI-driven society, it is essential that we acknowledge and engage with these issues, encouraging a more nuanced understanding of the interplay between human creativity and artificial intelligence.

References

[1] Akten, M. (2016). @WordOfMath.
[2] Akten, M. (2016). AMI Residency Part 1: Exploring (word) space, projecting meaning onto noise, learnt vs human bias.
[3] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space.
[4] Pennington, J., Socher, R., & Manning, C. (2014). GloVe: Global Vectors for Word Representation.
[5] ami.withgoogle.com
[6] twitter.com/wordofmath