Is “I, Robot” Really the Shape of the Future?
A recent chance conversation on the direction of the defence world in 2019 and beyond prompted some quiet reflection here in the MONCh editorial offices. Having opined that “quantum computing, big data analytics, artificial intelligence (AI), network connectivity and China are the biggest issues to watch this year,” we have quickly seen our assertion borne out, in at least one of these areas, by activity on the part of officialdom.
Later this week interested parties will be responding to a request for information, posted by the US Intelligence Advanced Research Projects Activity (IARPA), focused on research efforts into “cutting-edge machine learning techniques.” The ability of future computing systems (quantum, anybody?) to process and analyse vast amounts of data in real- or near-real-time is key to their application in the multiple (and somewhat arcane) taskings of the intelligence community – from interception to encryption, and from facial recognition processing to determining ‘pattern of life’ behaviour in specific communities. The ability of such assets to ‘self-learn’ could have mammoth implications for the improvement of governmental cybersecurity.
One of the fundamental issues has always been trust – the extent to which we have faith in the ability of machines to make decisions that do not take human preferences or capabilities into account. This is an area of considerable differences in national approach, as a member of the British intelligence community recently explained to MONCh. “Culturally, we remain deeply suspicious of machine decision-making: our American friends have come, in certain areas, to depend on it to an extent that makes us uncomfortable.”
That is all well and good. However, there is a pragmatic element to the US approach which cannot be denied – and cannot easily be dismissed as a ‘cultural’ aberration. “We’re in the business of delivering optimal solutions to decision-makers: the sheer volume and complexity of data we have to handle in order to do that is now beginning to exceed the capacity of the existing systems – even the most ‘cutting-edge’ ones. Something is going to break – and when it does, it could be catastrophic,” a senior industry executive told MONCh at AUSA in Washington late last year.
Another issue – allied in some ways to trust – revolves around the high false alarm rates that have been in evidence as breadboard AI solutions have been tested and evaluated. Rigorous examination of the solutions on offer is absolutely the right thing to do – but let us not (emphatically not) allow errors and suboptimal aspects of the development process to divert us from finding effective solutions. Soon. Failure to do so could change our way of life – not necessarily for the better.
It is almost impossible to determine with any degree of accuracy or granularity just how much effort China is pouring into cyber research and the development of both defensive and offensive electronic/cyber warfare capabilities. Suffice it to say that every China-watcher consulted by MONCh over the last year agrees – it runs into billions of dollars. Iranian expertise in the area is raising the spectre of widespread and potentially indiscriminate cyber warfare on a scale that has hitherto been written about only in the annals of science fiction. The Israeli government is being warned that unless it invests heavily in AI as a solution to its security challenges, it risks being left behind and outclassed by allies and enemies alike. A resurgent Russia is devoting unconscionable degrees of effort to perfecting effective electronic attack of potential military, governmental and civilian targets.
AI is not a silver bullet – not a panacea. It is, however, essential, and one of the very few avenues open to us in seeking protection from threats that are becoming all too real. The contentions – legitimate though they may be – that the ‘potential threats’ of robots with positronic brains running amok and exterminating humanity should bar us from pursuing AI any further are simply not practicable if we want to defend our society. Defence comes at a price. Is AI one we are going to have to pay?