At a recent event, the organiser had booked a kind of speaker that seems to be much in demand lately: a futurist. Obviously, while interpreting someone else’s message I am not free to let my own convictions and beliefs colour what the speaker says. However, as a human being I am allowed to have thoughts of my own about it, and to put them up for discussion if I consider it a worthwhile exercise.
This futurist dwelled a fair bit on the idea of digital twins, including in a very intimate sense: a personal twin of ourselves. It started out with the idea that such a personal digital twin could be examined by physicians, who would look directly into that cloud version of the real person to see exactly what was wrong and how to fix it. So far, so very good.
One of the ideas mooted was that the personal digital twin could even serve as our proxy in mass drug or vaccine trials. Here I would already be a little more circumspect. After all, it would be a fallacy to believe that nobody, but really nobody, would want access to this version of us for anything other than our benefit, or at least the common good.
Next came the idea that the personal digital twin could be made to do all the menial tasks we would normally have to deal with, in order to ‘free us up to be more creative’. A number of arguments were put forward in favour of this idea. Forgive me if I don’t join the cheering crowds just yet. Let me explain my scepticism.
Firstly, I believe that it is precisely while we are doing things we are unhappy or bored with that we are at our most creative in finding ways to make the task less tedious. Repetitive manual tasks can also set our minds free to explore entirely different issues. I impose this on myself deliberately when knitting, an activity full of repetition yet massively creative, because I elaborate the designs of my knitted items as I go along. Quite often I also use that time to percolate thoughts on events, on things I did well and not so well at work or in my personal life and how to adjust: the kind of self-reflection that seems sorely missing these days. One could argue that if my personal digital twin did, say, my accounts for me, I would have even more time for reflection. Just that much less to reflect on, I would counter, since it is the way we confront the good, the bad and the ugly sides of daily life that gives rise to reflection and, from that, to growth.
But let me get to what I consider the gravest danger AI poses to human intelligence, using the example of language and thought. (Let me say upfront that when the Wall came down in my home country in 1989, among the first samples of Western literature that I devoured were the Dune tomes by Frank Herbert, and I am more than ever convinced that his Butlerian Jihad against the Thinking Machines is a piece of amazingly prescient fiction.) I also want to make it clear that I am not necessarily opposed per se to any of the developments I shall now describe.
I am old enough to have grown up without mobile phones, and especially without text messages. My generation remembers the first mobile phones, into which one had to type everything using the 12 keys of the keypad to cover the entire alphabet and all punctuation marks: an often frustrating and always time-consuming pursuit. Still, text messages were a good idea, so along came some clever person who invented predictive text to save us tapping a single key up to three or four times for one letter.
The advent of smartphones with QWERTY keyboards removed the need for letter prediction. The next evolutionary step was the autocorrect feature, which quite frankly is guilty of turning perfectly correct words into whatever ‘it’ thought they should be, producing sentences that were at times rather hilarious and just as often annoying and embarrassing for the human author, who was not really to blame (other than for skimping on proofreading before sending).
Now we have predictive text à la Google, which suggests phrases to complete our sentences for us. This is where, for me at least, the alarm bells started ringing big time. It has pretty much never suggested a sentence that I would have written exactly that way myself, although admittedly I went with the suggestion once or twice before I stopped using the tool altogether. Somehow I couldn’t shake the feeling that this ‘workload reduction’ tool was trying to coax me into using its words instead of my own. And I was sure that at some point the tide would turn: I might even come to rely on AI to finish my sentences for me because I had got so used to it that I could no longer do it myself.
I believe we are already seeing the consequences of this in the phenomenon of our shrinking vocabularies. More on this another time, but it stands to reason that machine learning trains the algorithms to offer up the most frequently harvested phrases or terms, and the more often users accept the suggestions, the more this reaffirms the initial machine-learning results. We humans thus lose variety and nuance in our language and conversation, but also in our thoughts (if we haven’t already been convinced that thinking is one of those tedious tasks best left to our digital twin), and we lose control over our languages themselves.
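For the technically curious, the feedback loop I mean can be sketched in a few lines of toy Python. This is purely my own illustration, not any real vendor’s algorithm: a suggester that always offers the phrase users have accepted most often, so that every acceptance makes the same phrase still more likely to be offered again.

```python
from collections import Counter

# Toy illustration only: a suggester that always offers the most-accepted
# phrase. Real systems are vastly more complex, but the reinforcement
# dynamic sketched here is the narrowing effect described above.
class PhraseSuggester:
    def __init__(self, corpus):
        # Start from phrase frequencies "harvested" from some corpus.
        self.counts = Counter(corpus)

    def suggest(self):
        # Offer the single most frequent phrase.
        return self.counts.most_common(1)[0][0]

    def accept(self, phrase):
        # Every accepted suggestion feeds straight back into the counts,
        # so the popular phrase becomes ever more popular.
        self.counts[phrase] += 1

suggester = PhraseSuggester(["kind regards", "kind regards", "warm wishes"])
for _ in range(5):
    phrase = suggester.suggest()
    suggester.accept(phrase)   # the user clicks "accept" every time

print(suggester.counts)
# Counter({'kind regards': 7, 'warm wishes': 1})
# "warm wishes" is never suggested again: variety collapses.
```

Run long enough, such a loop converges on a handful of stock phrases, which is precisely the loss of variety I am worried about.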
I refuse to let my email software think for me. I am happy to learn, but apart from very specific needs I may have, I will not let algorithms guide my choices. Mine is by no means the brightest of intellects, so I may merely be lowering myself to my own level, but more than sounding smart I want to sound like me when I communicate with others. Machine translation and predictive text do not lend themselves to expressing authenticity and individuality.
I accept that there will be those who disagree, to the point of being happy to be told what to think. I have in mind all those who, in recent years, had their choice in a number of elections and referendums made for them by one or another iteration of Cambridge Analytica, without even realising that they had been gamed. Be that about Brexit*, Trump, Covid vaccines* or Putin’s version of Ukrainian history*.
I also have in mind those who are part of the Quiet Quitting movement, which promotes mediocrity as the cleverest way of gaming the system, albeit for a valid reason: good work deserves good pay, and great work deserves even more. Meanwhile, many in our profession have always done our best for less than our work is worth, simply because we love it so much that we are happy to subsist materially rather than prosper. Not to mention nurses and teachers and all those whose work keeps our societies running. Which side is getting it right would be worth an article of its own. Spoiler alert: if you appreciate your work more than the client or employer who needs you to do it, surely something is wrong.
My hunch is that most people would quite like to have a digital twin they could dump all the inconvenient tasks on, thinking included. Interesting as it is, thinking takes effort, which makes it a kind of work. But it is work that challenges us to be creative; it is the experience gained by doing things ourselves that makes us want to change them and gives us the knowledge needed to effect change. To me, this whole argument sounds much more like an encouragement of intellectual laziness and, worse, an implicit invitation to be remote-controlled by powers that may not stop at making our lives so easy as to be literally pointless, but whose intentions might be even more sinister than that.
* My links all lead to fact-checks. I am not going to spread information I consider to be false.