Book Review | ChatGPT lies, but the AI genie is out of the bottle
While investigating cookies, the small packets of data that websites leave on users’ browsers, Madhumita Murgia discovered a profile of herself in astonishing detail: intrusive and, because it was valuable to anyone who wanted to sell her anything, exploitative. AI takes that intrusion and exploitation to the next level. This book is a set of stories about how, under the guise of doing good, AI is used in ways that are extremely intrusive, exploitative and deeply unethical.
AI systems need mountains of data. Face recognition systems, for instance, use millions of faces to learn the principal features by which they can recognise someone. The more faces in the database, the more accurate the system. So how does all that data get into the system? Ms Murgia starts with stories of people who make a living looking at pictures and videos and labelling parts of them as people, cafés or whatever else the system needs to recognise. This includes looking at pornography and telling the system what is illegal child pornography and what is legal adult pornography, so that social media sites know what to host and what not to. And each time the legal definition changes, the process has to be repeated.
The difficulty is not the training itself but how exploitative it is. It is done by large numbers of people in low-wage countries, parts of Africa and South Asia for instance, at minimal wages, under stressful conditions, after signing non-disclosure agreements, and with little or no medical cover for work, such as having to watch child pornography, that can cause serious psychological damage.
Ms Murgia explores, through stories, the ways in which AI systems intrude and exploit. Besides the training, she covers seven major areas. There’s the body: deepfakes, and the misuse of your likeness by exploiters who can graft your face onto a pornographic video and, with no legal consequences, put it on display on the net.
Then there’s your identity: how AI systems identify people from, say, a still of a person captured on a surveillance camera. Consider facial recognition systems built by IBM, Microsoft and Face++: “error rates for light-skinned men hovered at less than one per cent, while that figure touched 35 per cent for darker-skinned women”. It is the combination of false positives from AI systems and bias in humans (prejudice against people of colour in the US, for instance) that makes the use of such systems dangerous.
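To see why a headline accuracy figure can hide this kind of disparity, here is a minimal sketch (my illustration, not the book’s; the per-group error rates are the figures quoted above, applied to a hypothetical benchmark of 100 faces per group):

    # Aggregate accuracy can look acceptable while one group fares far worse.
    from collections import defaultdict

    # Hypothetical (correct?, group) records, 100 per group, for illustration.
    results = (
        [(True, "light-skinned men")] * 99 + [(False, "light-skinned men")] * 1
        + [(True, "darker-skinned women")] * 65 + [(False, "darker-skinned women")] * 35
    )

    totals, errors = defaultdict(int), defaultdict(int)
    for correct, group in results:
        totals[group] += 1
        errors[group] += not correct

    print(f"overall error rate: {sum(errors.values()) / sum(totals.values()):.0%}")  # 18%
    for group in totals:
        print(f"{group}: {errors[group] / totals[group]:.0%}")  # 1% vs 35%

A vendor reporting only the aggregate figure would look far better than the system’s performance on its worst-served group.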
Then there’s health, across societies but mostly in poorer areas such as the tribal belts of rural India. This is where AI can make the biggest difference, aiding the diagnosis, care delivery and follow-up of diseases such as TB, which require early identification and consistent treatment to cure and eventually eliminate. The risk: that its availability will discourage coming generations of physicians from engaging deeply with patients.
Your freedoms are at stake, too. Ms Murgia tells, for instance, of teenage brothers in the Netherlands placed on a list of youngsters to be surveilled, leading to police visits, arrests, questioning and extreme disruption of their lives, often for no reason whatsoever. China offers horror stories of people being denied various options because their face is on a system that labels them anti-government; what has been done to the Uyghurs, for instance, could be taken as an example of an AI-based governance system gone wild.
There’s also the “safety net”, the governance task of providing for the poor. Some of it is handed over to AI, for instance the identification of people who need assistance, such as teenage single mothers. A 2019 UN report says that digital technology, including AI, deployed to work out who deserves aid is used instead to “predict, identify, surveil, detect, target and punish” the poor! The report also raises the threat of data colonialism, private corporations influencing governments across the world, such as Meta offering free Internet with usage restricted to sites the provider approves...
Poorly designed or misused AI-based systems can also resist correction. Food delivery agents, for instance, find it next to impossible to get, say, a restaurant deleted from the system, and, worse, to find a human to complain to. So they end up cancelling orders placed with a restaurant that has gone out of business!
Generative systems such as ChatGPT are powerful predictive engines that fabricate sentences, and those sentences can contain lies. Yes, ChatGPT lies. There need be no motive here; it is simply how the system works.
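A minimal sketch of why this happens (my illustration, not the book’s): a language model repeatedly picks a statistically likely next word, and nothing in that loop checks the result against reality. The toy “model” below is an assumption for demonstration, standing in for billions of learned probabilities:

    import random

    # Toy next-word probabilities, learned from text rather than from facts.
    model = {
        "the":  {"moon": 0.5, "report": 0.5},
        "moon": {"is": 1.0},
        "is":   {"made": 0.6, "bright.": 0.4},
        "made": {"of": 1.0},
        "of":   {"cheese.": 0.7, "rock.": 0.3},  # fluent either way; sometimes false
    }

    def generate(word, steps=6):
        out = [word]
        for _ in range(steps):
            choices = model.get(out[-1])
            if not choices:  # no learned continuation; stop
                break
            words, probs = zip(*choices.items())
            out.append(random.choices(words, weights=probs)[0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the moon is made of cheese."

The output is grammatical and confident whether or not it is true; scale the mechanism up and you get ChatGPT’s fluent fabrications.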
AI is an enormously powerful tool that can transform society, but in the wrong hands it could destroy it. The question is: what can we do to make sure it stays mostly in the right hands? Ms Murgia asks 10 insightful questions, on subjects ranging from regulated wages for AI data workers and telling consumers which products are AI-generated or AI-aided to ethics in AI and “channels for citizen empowerment in the face of AI”. Responses to these should offer a sort of roadmap to controlling AI.
One issue I had was with the book’s organisation. There are areas of overlap, stories that fit into multiple categories. Could the book have been organised differently to present these findings better?
Excerpt: During our conversation, I asked her how online abuse should be treated in the eyes of the law. Immediately, Carrie rejected my framing, refusing to be drawn into the false dichotomy of online versus offline. ‘In my opinion, all crimes are happening offline, because everyone they are happening to is a human being, and not a computer,’ she said. ‘The online component is just the weapon that’s being used, not the crime itself. Anytime someone talks to me about their cyberstalking or online harassment, I say no, it’s just harassment, it’s just stalking.’