Artificial Intelligence Runs Wild While Humans Dither

As an experiment, Tunde Olanrewaju messed around one day with the Wikipedia entry of his employer, McKinsey. He edited the page to say that he had founded the consultancy firm. A friend took a screenshot to preserve the revised record.
Within minutes, Mr Olanrewaju received an email from Wikipedia saying that his edit had been rejected and that the true founder’s name had been restored. Almost certainly, one of Wikipedia’s computer bots that police the site’s 40m articles had spotted, checked and corrected his entry.
It is reassuring to know that an army of such clever algorithms is patrolling the frontline of truthfulness — and can outsmart a senior partner in McKinsey’s digital practice. In 2014, bots were responsible for about 15 per cent of all edits made on Wikipedia.
But, as is the way of the world, algos can be used for offence as well as defence. And sometimes they can interact with each other in unintended and unpredictable ways. The need to understand such interactions is becoming ever more urgent as algorithms become so central in areas as varied as social media, financial markets, cyber security, autonomous weapons systems and networks of self-driving cars.
A study published last month in the research journal PLOS ONE, analysing the use of bots on Wikipedia over a decade, found that even those designed for wholly benign purposes could spend years duelling with each other.
In one such battle, Xqbot and Darknessbot disputed 3,629 entries, undoing and correcting the other’s edits on subjects ranging from Alexander the Great to Aston Villa football club.
The authors, from the Oxford Internet Institute and the Alan Turing Institute, were surprised by the findings, concluding that we need to pay far more attention to these bot-on-bot interactions. “We know very little about the life and evolution of our digital minions.”
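The dynamic behind such duels is simple to reproduce. The sketch below is a hypothetical toy model, not the actual code of Xqbot or Darknessbot: each bot enforces its own idea of the "correct" text and undoes any edit that disagrees, so neither rule ever yields.

```python
def make_bot(name, preferred):
    """Return a rule-based bot that rewrites the article whenever it disagrees."""
    def bot(article):
        if article["text"] != preferred:
            article["text"] = preferred  # revert the other bot's edit
            article["last_editor"] = name
            return True  # an edit (a revert) was made
        return False
    return bot

# Two benign bots with slightly different house styles (illustrative values only).
article = {"text": "Aston Villa F.C.", "last_editor": "human"}
xqbot = make_bot("Xqbot", "Aston Villa")
darknessbot = make_bot("Darknessbot", "Aston Villa F.C.")

reverts = 0
for _ in range(10):  # ten patrol passes each
    for bot in (xqbot, darknessbot):
        if bot(article):
            reverts += 1

print(reverts)  # → 20: every pass produces two reverts, and the duel never ends
```

Because each bot's rule is individually reasonable, nothing in either bot signals a problem; the conflict only exists at the level of their interaction, which is why the study's authors argue these interactions deserve attention in their own right.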
Wikipedia’s bot ecosystem is gated and monitored. But that is not the case in many other reaches of the internet where malevolent bots, often working in collaborative botnets, can run wild.
The authors highlighted the dangers of such bots mimicking humans on social media to “spread political propaganda or influence public discourse”. Such is the threat of digital manipulation that a group of European experts has even questioned whether democracy can survive the era of Big Data and Artificial Intelligence.
It may not be too much of an exaggeration to say we are reaching a critical juncture. Is truth, in some senses, being electronically determined? Are we, as the European academics fear, becoming the “digital slaves” of our one-time “digital minions”? The scale, speed and efficiency of some of these algorithmic interactions are reaching a level of complexity beyond human comprehension.
If you really want to scare yourself on a dark winter’s night, you should read Susan Blackmore on the subject. The psychologist has argued that, by creating such computer algorithms, we may have inadvertently unleashed a “third replicator”, which she originally called a teme, later modified to treme.
The first replicators were genes that determined our biological evolution. The second were human memes, such as language, writing and money, that accelerated cultural evolution. But now, she believes, our memes are being superseded by non-human tremes, which fit her definition of a replicator as being “information that can be copied with variation and selection”.
“We humans are being transformed by new technologies,” she said in a recent lecture. “We have let loose the most phenomenal power.”
For the moment, Prof Blackmore’s theory remains on the fringes of academic debate. Tremes may be an interesting concept, says Stephen Roberts, professor of machine learning at the University of Oxford, but he does not think we have lost control.
“There would be a lot of negative consequences of AI algos getting out of hand,” he says. “But we are a long way from that right now.”
The more immediate concern is that political and commercial interests have learnt to “hack society”, as he puts it. “Falsehoods can be replicated as easily as truth. We can be manipulated as individuals and groups.”
His solution? To establish the knowledge equivalent of the Millennium Seed Bank, which aims to preserve plant life at risk of extinction.
“As we de-speciate the world we are trying to preserve these species’ DNA. As truth becomes endangered we have the same obligation to record facts.”
But, as we have seen with Wikipedia, that is not always such a simple task.
Watch: as part of “ColLaboratoire”, the CogNovo Summer School, Susan Blackmore gave a talk to a student group entitled “Consciousness in treme machines?”
Universal Darwinism allows that one replicator (information copied with variation and selection) can build on the products of another. The first replicator, genes, constructed phenotypes (gene machines), and one of these (our human ancestors) began copying a new sort of information by imitating sounds, gestures, and technologies (memes). This transformed these animals into meme machines (us).
A similar shift may be happening again because we humans have created products that can copy, vary and select another new kind of information: digital information copied with high fidelity in computers, phones and servers. I have called these temes or tremes (sorry, but there is no perfect name).
At each level, intelligence emerged through increasing cooperation and copying between originally distinct units, for example in multicellular organisms and brains. Copying memes between individuals in culture increased intelligence again. The increasing copying of digital information between treme machines is the same process happening once more: a bottom-up Darwinian process leading inevitably to intelligence that is widely distributed and out of human control. Human input is still important now, but it will become less so as the system grows.
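The definition doing the work here — information copied with variation and selection — can be made concrete with a toy example of my own (it is not from Blackmore's lecture): strings are copied with occasional random mutation, and the copy closest to a target survives to seed the next generation.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

TARGET = "treme"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    """Selection criterion: number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def copy_with_variation(s, rate=0.2):
    """Copying with variation: each character occasionally mutates."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

population = ["xxxxx"] * 20
best = max(population, key=fitness)
for generation in range(500):
    if best == TARGET:
        break
    # Selection: the fittest copy survives and seeds the next generation.
    population = [best] + [copy_with_variation(best) for _ in range(19)]
    best = max(population, key=fitness)

print(best)
```

Nothing in the loop "knows" the target in any intelligent sense; blind copying, random variation and selection are enough to accumulate adaptive information — which is precisely the bottom-up process the paragraph above claims needs no human designer.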
Could this intelligent system be conscious? That depends on what you mean by being conscious, but my own view is that consciousness is an illusion created in systems that model themselves and their own capabilities, creating an inside and an outside: an observer and an observed world, a controller and a controlled world. Our human brains do precisely that in modelling selves as embodied agents and owners with a first-person perspective. I suggest that any system that does this will believe it is conscious. We can now ask what is needed for the illusion of consciousness to emerge in treme machines or large networks of such machines, and what the consequences might be.