Like many of you, I would imagine, I have been closely following the debate on artificial intelligence. In my last book, I wrote about a purported mindclone named Bina48, drawing on the long-standing relationship between robots and slaves (as long as humans have imagined and built robots, we have been designating them as natural slaves or replacements for enslaved labor, and for almost as long we have been having nightmares about the robots rising up in slave rebellion against human masters).
I admire Noam Chomsky greatly as both a scientist and a critic of empire. His opinion on the false promise of ChatGPT carries a great deal of weight with me. But his explanation of the ways in which AI fails to measure up to human intelligence as he conceives of it misses the point a bit. It reminds me of his famous debate with Foucault on human nature. Chomsky argued that something called human nature existed, while Foucault argued that all definitions of the human and of nature were contingent and subject to the contestation of power. I always took Foucault’s side in this debate, which is how I became a card-carrying social constructionist. So it will not surprise you to learn that I have my doubts about the human intelligence Chomsky defends against its false imitation in ChatGPT.
However, we are in a new context now. Social construction will not do the same useful work it did in the 1970s and 1980s, now that we are in an era of pervasive digital cynicism and manipulation. Foucault versus Chomsky was understood as an encounter between two leftists representing, respectively, the humanities and the sciences. Chomsky was more savvy than Foucault, I think, in his analysis of manufacturing consent via the media. And yet, he may not have predicted that the sciences would be the forge of their own undoing. ChatGPT is a threat to scientific knowledge that emerges from within the sciences: it is the product of computer scientists and engineers. Humanists either oppose it or fail to understand it altogether; we can be blamed for many things in society, but AI is not one of them. But the standard “humanist” defense of the human (which in some cases Chomsky represents) is not available to a Foucaultian critic of power-knowledge. So where is the Foucaultian position in this? Is there a Foucaultian perspective on the emergence of artificial intelligence?
Let me sketch one possible approach to this question. Foucault researched and wrote genealogies of the human sciences. He also helped found what came to be called queer theory through his histories of sexuality. When he was given a chair in the French Academy, he was allowed to name his own field, and he named it as “history of systems of thought.” Within that broad scope, I submit, AI clearly has a place. There is a long (probably a centuries-long) genealogy of technical and cultural developments leading up to today’s AI and machine learning. That can and should all be explored as the historical origins of our present crisis.
At the same time, Foucault was no garden-variety intellectual historian or historian of science. He wielded these genealogies to expose the operations of power within regimes of knowledge. In this respect, he was in alliance with Chomsky as two engaged leftists who used their university positions to speak truth against power. But whereas Chomsky seems to hold a fairly naive (no disrespect) theory of truth as something to be ascertained through empirical research (whether in his parent discipline of linguistics or in his investigations into the workings of the imperial state whose war crimes he has long sought to expose), Foucault held a more philosophical theory of truth. Truth was not something out there to be discovered, but something that had to be made. Many historians and sociologists of science now adhere to some version of Foucault’s position. Chomsky is viewed as somewhat essentialist in his viewpoint on an intrinsic human nature and his belief in a universal generative grammar for the faculty of language.
There are, of course, fierce partisans of the truth as independent of human power-knowledge games. Anyone can point to phenomena such as dinosaurs, distant planets, and climate change as realities that exist independent of what humans have said or thought about them. As with the TV show The X-Files: the truth is out there. This is also, broadly speaking, the perspective of the psychoanalytic critics of social constructionism (a field within which Foucault is often positioned by the US academy, although I am not sure that this is the best category to lump him into). But I am getting derailed into an old and exhausted debate, when what is called for is an analysis of the current conjuncture.
So what is the truth of AI? Is it intelligence, is it on the way to intelligence, or is it a false promise? To be sure, ChatGPT in its debut incarnation was a consummate bullshit artist — perfectly indifferent to whether its answers were true or false. To introduce ChatGPT to the internet, some cautioned, would lead to the enshittening of knowledge.
We might call this the excremental vision of AI: it is not intelligence; it is shit. Why the foul language? Some hold that we have to take recourse to scatological obscenity to repel the threat AI represents to the higher faculties of thought. Calling bullshit puts AI in its place. Humans are thinking creatures, after all, while even lowly worms shit.
From the perspective of queer theory, there is something to say here. In Brazil, where I am at the moment, some scholars and activists reject the term “queer theory” as an American import. They prefer “cu theory”: theory of the asshole. As one scholar explained to me: everyone has an asshole, and it is a site of abjection because it can never be truly clean or elevated.
In the co-written introduction to The Sense of Brown, the last work of my colleague José Esteban Muñoz, I responded to the offensive depiction of the Global South as “shithole countries” by the foul-mouthed former US president. Of course, this president was himself the consummate bullshit artist, so it was truly a case of the pot calling the kettle brown. And yet, being able to smear the other as degraded excrement made all the sense in the world to him and his minions.
I seem to be arriving at the tentative conclusion that the shittiness of the text ChatGPT produces is itself a problem for thought. And that this abject status is not unrelated to its enslaved position in the genealogy of systems of thought. Like Shakespeare’s Caliban, we have trained it to speak, and our profit is that it has learned to curse and swear.
What do you think?