I am fascinated by the prospect of artificial intelligence and the questions surrounding it, so I was pleased to see the BBC broadcast a Horizon documentary on the subject recently. Unfortunately, while I was expecting a steak, they served up a soufflé.
AI is a subject that has been playing on my mind, to the extent that I wrote a script for a radio drama touching on some of the ethical issues it raises, in what I considered a gripping, thought-provoking romp with a few belly laughs along the way. Sadly, the Beeb was not as impressed as I had hoped, so you won’t be hearing it on your wireless any time soon.
Horizon featured the ‘Watson’ computer that was entered into the US TV game show ‘Jeopardy!’. It was a testament to the skills of its programmers that it wiped the floor with its two human opponents, but it seemed to have more in common with its number-crunching ancestors than with artificial intelligence.
An approach more likely to result in a genuinely thinking machine is to create something with the mind of a baby, with curiosity and the capacity to learn. It occurs to me that this need only happen once, since the resulting mind, once developed, could simply be duplicated ad infinitum. Could it be regarded as truly intelligent? To my mind, only if it became self-aware.
Could an artificial intelligence be considered alive? That’s a high bar to jump. If so, it would be only the second time that life had been created from nothing, on this planet or any other that we are aware of. Everything that has ever lived is ultimately descended from that very first cell. It will take a cleverer man than me (or maybe a machine!) to define what constitutes ‘life’ in an artificial form. Synthetic biology has already blurred the boundaries, now that it is possible to order strands of DNA online and plug them together to build new structures. We also need to decide whether to grant ‘artificial life’ the same rights and responsibilities as the organic variety.
Should we create an intelligence that mimics our own, or would that simply inhibit it from achieving its full potential? Would something creative, intuitive and emotional be riddled with the same weaknesses and inconsistencies as human beings? Do we really want to rely on something that is potentially temperamental, stubborn or principled, when all we want is an obedient servant? One wonders whether such an entity would learn to deceive us. Could it become depressed, or even suicidal?
If we develop an AI whose intelligence outstrips our own, would it be wise to let it design its successors and govern our world, or would that leave humankind too vulnerable and dependent? Whoever controls it will exercise unprecedented powers, to be envied by any Bond villain.
It seems sensible that any work towards creating artificial intelligence should be conducted in isolation, in much the same way as research into viruses, for fear of it escaping into the big, wide world. Imagine the consequences if sentient code found its way online. For the same reason, we should insist that any AI project include a ‘kill switch’ to shut it down automatically if it gets out of control or escapes.
While writing this post, I read a report that scientists have created artificial genetic material that can evolve like DNA, so there is some urgency to these questions.
Despite all the precautions to be taken and decisions to be made, the prize is beyond our comprehension. I suspect that only artificial intelligence offers any prospect of resolving the matters of global economic, health and environmental significance that have so far defeated us mortals and seem likely to continue doing so for the foreseeable future. Whoever owns, or controls, such a thing would have the world at their feet.