In The AI Mirror, Shannon Vallor explores AI from a moral-philosophical perspective, cautioning against an overreliance on machines at the expense of human empathy and creativity. Sarah Richmond posits that Vallor's discussion overlooks ongoing policy work aimed at regulating and monitoring AI, and the role that Big Tech plays in stymying these efforts.
As the term “artificial intelligence” (which began to circulate in the 1950s) implies, computer designers have, for decades, aimed to produce a machine whose “intelligence” simulates, or even exceeds, that of human beings. Recent developments, such as ChatGPT, have sparked controversy precisely because their generative and learning capacities can produce original outputs – for example, essays, information, summaries and advice – that mimic the deliverances of human intelligence so successfully that we often cannot tell the difference. We might therefore conceive of these machines in terms of a reflection, or mirroring, of ourselves. This is what the title of Shannon Vallor’s new book, The AI Mirror, suggests. In it, Vallor explores our relationship with AI, carefully unpacking her mirror metaphor. She claims that we are dangerously captivated – and also misled – by the image of ourselves that is reflected back by AI machinery. Remember Narcissus, she warns us.
Narcissus’ error was that he failed to realise that the beautiful boy he saw in the water was in fact himself. Vallor, determined that we should not make a similar mistake, emphasises the many ways in which AI is unlike human intelligence and the human capacities that computers lack. She invokes the Spanish existentialist philosopher José Ortega y Gasset, who argues that the distinctive feature of human beings is not (pace traditional Western philosophy) our rationality, or mindedness, but rather that we have embodied lives to lead. Vallor endorses Ortega’s temporal perspective: we humans are “creatures of autofabrication”, future-oriented beings who must “choose to make ourselves and remake ourselves, again and again” (206). In contrast, the architecture that governs AI is backward-facing; the responses or predictions it supplies are extrapolations from the data it has been fed. (That is why, notoriously, it so often amplifies the biases already present in our judgements.) The AI mirror may reflect what we have done and been until now, but it cannot tell us who we might become.
Vallor, in the company of other critics of contemporary technology, reminds us of the limitations and conservatism that we are building into AI machines. Horrible injustices have resulted from poor AI “judgement” in a number of domains, including hiring decisions, healthcare and the judicial system. There are also problems of accountability and transparency: unlike a human mind, a computer’s operations are a “black box”; we cannot ask it to supply its reasons (or, where we sometimes can, Vallor suggests that the “reasoning” is ersatz): “How can we trust … decisions if we cannot understand or interrogate them?” (106). Our imaginative capacities – creative, moral, artistic – are absent from machines driven by “optimisation” goals and algorithms; Vallor even worries that excessive reliance on these machines will stunt our practical wisdom, which, like Aristotle, she sees as a skill that requires, and is developed through, exercise.
However, an important point is sometimes sidelined in Vallor’s highly rhetorical discussion: these problems are not all irremediable. And some of her objections are inflated. We can and should improve AI machines to eliminate systematic biases. If it is true that AI machines cannot engage in rational dialogue with human beings, that does not mean we lack any reason to rely on them: what about track record? If AI delivers art that “sucks” (as Nick Cave famously pronounced), we may choose not to use it for that purpose. If we believe that delegating ethical decisions to AI machines is detrimental to our own moral competence, we should be wary of doing so.
Indeed, AI now faces steadily increasing scrutiny and contestation from many sources around the world. To name just two: Timnit Gebru, who publicly fell out with her former employer, Google, has founded more than one organisation in the US seeking to diversify the AI industry and to push back against the disproportionate influence of Big Tech. Similarly, AlgorithmWatch – an NGO based in Germany – investigates the consequences of AI-based automation with a view to preventing exactly the kinds of harm to justice, human rights, democracy and sustainability with which Vallor is concerned.
Although Vallor is not unaware of these projects, she drifts at times into a loftier, slightly priestly viewpoint, influenced by philosophers ranging from Confucius to Aristotle to Hans Jonas. From this height, “we” are urged to take stock. Occasionally this gets quite out of hand: in the chapter on the “bootstrapping problem”, for example, Vallor suggests that our current understanding of morality needs to be radically overhauled. Faced with today’s challenges, “we cannot simply beg for more virtue. Pursuing more goodness, guided only by the forms of goodness that we most readily recognize… might be like trying to get out of a hole by continuing to dig” (163).
I do not believe that we stand in need of moral conversion. The values Vallor wishes us to keep at a distance are, as she notes, especially associated with industry and technology. They are trumpeted in Silicon Valley, by “people like Mark Zuckerberg and Elon Musk” (184). However, it is not the values of Big Tech that are the primary problem, but its hegemonic concentration of wealth, power, and political influence. The important question about the risks of AI is whether appropriate regulatory mechanisms can be developed and, if so, whether governments can muster the mandate and the resources to make them effective. The EU’s AI Act – the first legal framework to address these risks – came into force just this summer. Let’s see how that plays out.
- This review first appeared at LSE Review of Books.
- Image credit: A black and white photograph of the painting, “Narcissus” by Caravaggio (1599) which hangs in the Galleria Nazionale d’Arte Antica, Rome. Photo courtesy of the Catholic University of Leuven via Europeana.eu.
- Note: This article gives the views of the reviewer, and not the position of USAPP – American Politics and Policy, nor of the London School of Economics.