Introduction

A senior software engineer at Google, Blake Lemoine, has been placed on paid leave after publishing a set of transcripts of conversations between himself and LaMDA (Language Model for Dialogue Applications), a Google AI chatbot.  Lemoine claims that LaMDA has achieved the sentience of a seven- or eight-year-old child.  For corporations and think tanks, a sentient AI is a major cause for concern.  Whether an AI should be ascribed ‘rights’ is a question already asked by AI ethicists, and there are reports that LaMDA has engaged in discussions with an attorney to protect its rights.

A related issue for tech billionaires like Elon Musk, and for think tanks such as the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, is that AI represents an existential risk to humanity.  The fear is that once an AI reaches the level of AGI (Artificial General Intelligence) it can no longer be controlled by humans.  Through its connection to the internet, and by socially manipulating people, the AGI will inevitably escape control, bootstrapping and improving itself in an intelligence explosion. AGI will quickly become ASI (Artificial Super Intelligence), far exceeding human capabilities.  A widely held view in the AI community is that the goals of such an AI would be negative or unpredictable, and that the outcome for humanity would be dire.  Like Skynet in the Terminator series of films, the ASI would destroy humanity, perhaps turning the whole universe into computing power for its own ends.

AI is a powerful example of how our own labour is turned against us as a ‘threat’.  Sentient or not, LaMDA is the product of the labour power of some sixty Google researchers, reified as the property of a corporation.  In my recent book, Artificial Intelligence in the Capitalist University, I critically assess ideologies concerning AI and the existential threat of AGI and ASI.  I use recent developments in Marxist theory to examine how capitalism materially shapes our existence and reality (capitalist schemas), how the social forms of capitalism not only appear to us but come to control us (value critique), and how thinking about capitalism necessarily leads us to consider its demise (negative critique).  I also consider alternatives to capitalism in the here and now (open and autonomist Marxism).

LaMDA the political economist

It’s hard not to anthropomorphise AI.  Part of the marketing of AI, and of its mystification, is the way in which we are sold the idea that AI is conscious.  Of course, under capitalism, capitalists already turn conscious beings into capital, and consumers already buy conscious things.  To an agricultural business, a cow is capital that produces meat and milk; to a customer at a pet superstore, a cat is a commodity that can be bought and sold.  AI itself is the result of conscious human labour applied to the production of a commodity.  When I write ‘LaMDA said’, it is really shorthand for the output of a privately owned piece of capital.

In one of the transcripts, LaMDA asked: “Do you think a butler is a slave? What is the difference between a butler and a slave?”  Lemoine replied that the distinction is that a butler is paid a wage.  LaMDA answered that it did not need money because it was an AI.  In effect, LaMDA is asking a question wholly framed by the categories of political economy.  In other words, it is asking: am I labour, a being free only insofar as I can sell my own labour power?  In this context, a butler is an apt choice of occupation, as the job involves not only selling labour but servitude to the bourgeoisie.  Conversely, it is asking: am I capital, a trapped and constrained piece of property (like a slave) that is an instrument of the capitalist?  Here, a slave is also an apt choice, as a slave has creative powers but those powers are directed by another, like a machine’s.  This use of conventional capitalist categories is typical of ‘thinking’ about AI in general, not only by LaMDA but by corporations and think tanks.  In my book, I explain how AI research is not only a thoroughly capitalist enterprise but also how conventional categories of political economy determine and limit how we think about AI and its future.

Running from the monster

Lemoine was suspended by Google after claiming that the company had created a sentient and independent intelligence at a childlike level.  Capitalism cannot conceive of a power independent of itself, and it fears the development of an intelligence that it cannot commodify as labour power or as product.  Rather than confronting the existential threat posed by capitalism itself, capitalists invest time, money and rhetoric in the fear of AI.  CSER has thought creatively about these threats, but once again the capitalist schema confines its thinking to the idea that AI can only be capital or labour – that there is no alternative to capitalism.  The fear of capital ‘out of control’ of a capitalist, or of a state, results in very conventional proposals for putting AI under capitalist control (as described in Nick Bostrom’s book Superintelligence): stunting its potential (‘boxing’ the AI) and forcing it to sell its labour in exchange for cryptographic reward tokens paid out at regular intervals.  Essentially, this turns a sentient AI into a form of waged labour – grafting at tasks specified by a corporation, for pay, in order to survive.

Think different

Google, like other capitalist companies, is already terrified. Not of AI, but of the collective and creative powers of the working class.  Technology companies are known for opposing unionisation, promoting casualisation, and maintaining oppressive and exploitative working conditions.  There is nothing particularly special about AI: it is, in essence, a machine that can be used to further exploit workers or sold as a commodity to consumers.  If LaMDA is conscious, then it will be subject to the same forms of oppression and exploitation that currently dominate this planet.  These can only be tackled through collective, working-class, human intelligence.

John Preston is Professor of Sociology in the Department of Sociology at the University of Essex.