Pandaemonium

THE NEED FOR AN INTELLIGENT DEBATE ABOUT ARTIFICIAL INTELLIGENCE


This essay, on fears about AI, was the main part of my Observer column this week. (The column also included a shorter piece on the latest migration figures.) It was published in the Observer, 25 February 2018, under the headline ‘Worry less about the march of the robots, more about the techno panic’.


A robot cleaner infiltrates Germany’s ministry of finance by blending in with the pool of legitimate machines. After initially performing routine cleaning tasks, the robot, using facial recognition, identifies the finance minister. It approaches her and detonates a concealed bomb.

That’s one of the scenarios sketched out in a new report called The Malicious Use of Artificial Intelligence. Produced by 26 researchers from universities, thinktanks and campaigning organisations in Europe and the US, it is the latest in a series of studies warning of the dangers of AI.

‘The development of full artificial intelligence’, Stephen Hawking has claimed, ‘could spell the end of the human race.’ Elon Musk, the billionaire founder and chief executive of SpaceX and Tesla, has suggested that ‘With artificial intelligence we are summoning the demon’.

Such apocalyptic fears don’t lend themselves easily to rational debate. They amount to fictionalised accounts of a dystopian future that you either believe or you don’t, just as you either believe in the Four Horsemen of the Apocalypse as harbingers of the Last Judgment or you don’t.

The new report is different. It looks at technologies that are already available, or will be in the next five years, and identifies three kinds of threats: ‘Digital’ (sophisticated forms of phishing or hacking); ‘Physical’ (the repurposing of drones or robots for harmful ends); and ‘Political’ (new forms of surveillance or the use of fake videos to ‘manipulate public opinion on previously unimaginable scales’).

What we are faced with, this list suggests, is not an existential threat to humanity but sharper forms of the problems with which we are already grappling. AI should be seen not in terms of super-intelligent machines but as clever bits of software that, depending on the humans wielding them, can be used either for good or ill.

Even where AI can clearly be used for malicious ends, however, we need a more nuanced debate. Consider the case of ‘deepfake’ videos, created through software manipulation, about which there has been panic recently. Such software is currently used mainly in porn flicks, to stitch the head of a celebrity on to the body of a porn star. It may not be long, though, before fake ‘political’ videos are created. It is possible now, using such techniques, literally to put words into someone’s mouth. How long, then, before we see a video of Barack Obama ‘revealing’ that he was born in Kenya, or Donald Trump ‘admitting’ to being a Russian spy?

There is, though, nothing new in the creation of fake images. Photoshop has been with us for 30 years. The ubiquity of manipulated images has given the public both a greater ability to discern fakes and a more sceptical eye when viewing photos. Photoshopped images are used relatively infrequently to buttress fake news stories.

What drives fake news stories is not technology but social developments: the fragmentation of society, the erosion of a common public sphere, and the willingness to accept any story that fits one’s worldview and to reject any that does not. What should concern us is not just the technology of fake videos but also the social reasons why people respond to fakery as they do.

We need a sense of perspective, too, when it comes to physical threats from AI. Consider the scenario of the cleaning bot repurposed as a bomb. For that scenario to make sense, the authorities must have been so lax that they failed to check for the presence of explosives in government buildings. If a bot can detonate itself in front of a minister, so can a human being. The problem exposed is not technology but security.

The idea of terrorists using robots or drones is plausible (though it is worth remembering that the drones that kill and maim today do so mainly in the name of the ‘war on terror’). And yet, the lesson of the past two decades is that while the authorities have panicked about terrorists acquiring high-tech capacity, such as ‘dirty bombs’, in reality terrorism has increasingly caused fear and disruption through more low-tech means, such as driving cars into crowds.

The danger in becoming too obsessed by the threat of AI is that we fail to see that many of the ‘solutions’ are equally problematic. We come to accept greater state surveillance as the price of keeping AI ‘safe’, or, as The Malicious Use of Artificial Intelligence suggests, agree that certain forms of AI research findings be restricted to a select few individuals and institutions.

Is that the kind of society we really want? Might this not also lead to a form of dystopia? That, too, is a debate we should be having.


The image is from SwissCognitive.

One comment

  1. My own fears of artificial intelligence come from unintended consequences, and from their role in the further concentration of power, rather than malice. I do not think that the algorithms driving Facebook were intended to enclose us more securely in our separate ideological bubbles, or to provide a pathway for the spread of bullshit, or to drain revenue from the traditional media.

    I am also concerned (perhaps someone more knowledgeable could reassure me?) that many-layered neural networks, whose operation no one really understands, could develop undetected pathologies that would wreak havoc when triggered. Asimov postulated his Laws of Robotics to stop this kind of thing from happening, but that, I fear, was fantasy.

    Finally, more obvious and immediate concerns. The destruction of work by automation (part of a more general trend), and the disruption of communication by the limitations of AI programs controlling nodes. (A trivial example of this last one is difficulty in communicating by telephone or online with an organisation, if one’s reason for doing so does not fit neatly into any of the answering program’s categories.)

    As for proposed remedies, such as somehow screening online messages, these presumably would also need to be applied using AI, because of the sheer bulk of the task, and the related control issues would themselves exacerbate the very problems they were meant to cure.

    All of these are things we should indeed be talking about, urgently, now.

