Machinic Neoplatonism from the Perspective of Large Language Models
In the article “Data Science as Machinic Neoplatonism”, published in 2017, Dan McQuillan explores how modern technology and digital culture reflect ancient philosophical ideas from Neoplatonism. He discusses how algorithms, artificial intelligence, and decentralized systems are changing our understanding of reality, identity, and social relationships in ways that echo these ancient philosophical concepts.
Considering the advances in artificial intelligence research since the article was published in 2017 (most notably large language models, a branch of machine learning tied to knowledge management and knowledge production far more than the others), the article’s main claims have only grown stronger. The statement “The action may simply be the re-ordering of updates in your Facebook feed, and the consequences a minor change in your mood.” now extends much further: any form of knowledge production, including scientific literature, is prone to being influenced by modern artificial intelligence models.
Another point to add to the text, from the perspective of modern LLMs, concerns the supply and demand of knowledge in text format, and the tension between the existing written literature and the feminist and post-colonial critiques of technoscience. Since large language models can only be trained on a large corpus, a further question is how to diversify that corpus to include alternative sources, rather than relying on scientific material shaped by western-oriented, patriarchal, or colonial influences.
As we reflect on Dan McQuillan’s insightful article “Data Science as Machinic Neoplatonism,” it becomes clear that his observations about the intersection of technology and philosophical ideas have only gained relevance in the years since its publication. The rise of large language models (LLMs) has further solidified the notion that our understanding of reality, identity, and social relationships is increasingly shaped by algorithmic forces. Moreover, the influence of LLMs on knowledge production, including scientific literature, underscores the need for critical examination of these systems. As we continue to grapple with the implications of technoscience, it becomes essential to consider the supply and demand of knowledge in text format, as well as the existing power structures that shape our understanding of the world. By acknowledging the limitations and biases inherent in LLMs and actively working towards diversifying their training corpora, we can strive for a more inclusive and equitable digital landscape. Ultimately, McQuillan’s work serves as a reminder of the importance of intersectional thinking and the need to critically engage with the philosophical underpinnings of our increasingly technological world.